Category Archives: Ai

Finalytics.ai Report on Digital Experience at Top 50 Credit Unions Points to Critical Challenges Across the Industry – Business Wire

Posted: June 28, 2021 at 10:37 pm

SAN MATEO, Calif.--(BUSINESS WIRE)--Finalytics.ai, the creator of the first journey orchestration platform purpose-built for community financial institutions, has published a new report on the digital experience at the top 50 credit unions. The report highlights a series of critical challenges credit unions face when it comes to meeting the ever-heightening expectations of their members about digital experiences.

According to Craig McLaughlin, CEO of Finalytics.ai, "Every time members engage with their credit unions through digital channels, any gap between their expectations and the digital experience offered is reinforced. We see these gaps erode loyalty over time, causing many members to abandon their credit unions for mega banks, FinTechs and digital-only banks."

The Finalytics.ai report provides a snapshot of key industry trends and a heuristic review and ranking of the user experience at the largest 50 credit unions by asset size in the United States. The report is the result of a comprehensive review of each organization's digital presence, including their commercial website, online account origination, digital marketing, key features and functionality, SEO optimization, web analytics and overall member experience.

The Finalytics.ai review also ranked the credit unions they surveyed to determine the top five overall and in several categories.

The top-ranked overall leaders are:

"We are strong believers in the power of community which is why are launching our platform initially for credit unions. Deep personalization at scale delivers on the people helping people movement and we are delighted to play a role in helping communities survive and thrive, said McLaughlin. To continue to play this important role, credit unions will need to embrace new technology that allows them to contextually delight their members at scale. We are seeing an increasing number of forward-thinking credit unions respond by adopting tools that unlock the value of the data they already have using the power of machine learning to create a digital experience tailored specifically to the needs of the individual member.

To learn more about the findings in the Finalytics.ai report, download the 2021 CU Digital Experience report.

Finalytics.ai is the first journey orchestration platform purpose-built for community financial institutions. Using proprietary AI to enrich the data that matters most to financial organizations, Finalytics.ai drives more meaningful one-to-one digital experiences across the entire funnel from search to convert. For more information: http://www.finalytics.ai.

Qualcomm's Snapdragon 888 Plus ups the CPU and AI performance – The Verge

Posted: at 10:37 pm

Qualcomm's best mobile processor, the Snapdragon 888, is getting even more powerful at MWC 2021 with the announcement of the Snapdragon 888 Plus. The upgraded model boasts a boosted clock speed, bumping the Kryo 680 CPU from 2.84GHz up to 2.995GHz (which Qualcomm is optimistically rounding up to 3GHz in its marketing).

Also getting upgraded is Qualcomm's sixth-gen AI Engine. Where the base Snapdragon 888 could perform 26 trillion operations per second (TOPS) for AI tasks, the 888 Plus is approximately 20 percent more powerful, capable of 32 TOPS.
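
Those uplifts are easy to sanity-check from the figures quoted above; a quick back-of-the-envelope calculation in Python (illustrative only) reproduces them:

```python
# Quick check of the Snapdragon 888 -> 888 Plus figures quoted above.
base_clock, plus_clock = 2.84, 2.995   # GHz, Kryo 680 prime core
base_tops, plus_tops = 26, 32          # trillion operations per second (TOPS)

clock_gain = (plus_clock / base_clock - 1) * 100
tops_gain = (plus_tops / base_tops - 1) * 100

print(f"CPU clock uplift: {clock_gain:.1f}%")  # ~5.5%
print(f"AI engine uplift: {tops_gain:.1f}%")   # ~23.1%, a bit over the marketed 20 percent
```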

Midyear refreshes for its top chips are nothing new for Qualcomm, which took similar approaches with the Snapdragon 855 Plus and 865 Plus, each serving as more powerful mid-cycle updates for the standard 855 and 865 models. The company also did an additional refresh on the 865 at the beginning of 2021 with the launch of the Snapdragon 870, a sort of Snapdragon 865 Plus-plus that offered additional improvements and a higher clock speed.

It likely won't be long before the Snapdragon 888 Plus starts to show up, either. Qualcomm says that the first phones with the chip should be announced in Q3 2021. Honor has already said that it'll be using the new chip in its upcoming Magic 3 flagship, with Motorola, Vivo, Xiaomi, and Asus also planning devices featuring the Snapdragon 888 Plus.

Native.AI Launches New Consumer Intelligence Platform to Help F&B and CPG Brands Create Products that Customers Love – Business Wire

Posted: at 10:37 pm

NEW YORK--(BUSINESS WIRE)--Native.AI, a real-time market and consumer intelligence provider, today announced a new platform for Food & Beverage (F&B) and Consumer Packaged Goods (CPG) companies that enables brands to uncover, analyze and act on customer feedback to improve product offerings. The company also announced $1.75mm of strategic pre-seed funding from current and former leaders at several top companies and government agencies including Blue Apron, The Kellogg Company, USDA and more.

Native's new AI-powered platform uses Natural Language Processing (NLP) to deliver real-time analysis of consumer feedback from Point of Sale (POS), e-commerce reviews, social media and smart labels. With a better understanding of consumer feedback, brands can confidently assess preferences around taste, price, packaging and other attributes. This allows them to proactively adjust product formulations, supply chains, pricing, packaging and more to increase sales, satisfaction and loyalty.

"Consumers are driving market disruption and fueling innovation within the F&B and CPG industry. Without their feedback, it is incredibly difficult to make products that fit customers' ever-changing preferences, especially in a cluttered marketplace," said Frank Pica, co-founder and CEO of Native.AI. "With 30,000 new CPG products launching every year, the competition is cutthroat. Our intelligence allows brands to understand customers like never before and make real-time decisions on product development and differentiation. Our pre-seed funding by some of the greatest innovators in the F&B space speaks to the strong demand for this level and speed of insight."

The company's rapid growth and momentum in the F&B and CPG industry have gained attention from market-leading founders, executives and investors from Blue Apron, USDA, The Kellogg Company, Bunge, USAKO Group, Oppenheimer & Co., Conagra, Archer Daniels Midland, Grain Millers and Capital Innovators.

"The initial addressable market for AI solutions in the CPG industry represents an incredible $7.5B of the total $62B AI market and is growing at a 42% CAGR," said Ilia Papas, founder and former CTO of Blue Apron. "Native fills a massive gap by equipping brands with the real-time intelligence they need to innovate and differentiate. The early market demand and customer ROI have been remarkable."

Native's platform gives independent F&B brands access to first-party consumer data, which has historically been hoarded by retailers. By enabling brands to take back control of their data, Native is a catalyst for innovation and differentiation in a competitive CPG marketplace. Proven use cases and benefits of Native's new platform include:

Native was founded in 2018 as a real-time consumer intelligence provider for Agrifood brands. After a couple of years of success, Pica and Co-Founder & COO Sarah Sanders uncovered untapped demand in the larger CPG market. This insight guided a pivotal product evolution and expansion into the F&B and CPG industry. Today, Native works with Agrifood, CPG, produce and cannabis brands to fuel personalized, high-quality product development and innovation.

For more information on Native.AI, please visit http://www.gonative.ai.

About Native.AI

Native.AI provides a real-time consumer and market intelligence platform that empowers brands to uncover, analyze and act on customer feedback to improve product offerings and drive innovation. Its AI-powered natural language processing (NLP) technology collects real-time data from Point of Sale (POS), e-commerce reviews, social media, smart labels and more to allow brands to assess sentiment around taste, packaging, price and more. The insights equip brands to adjust product formulations, supply chains, pricing, packaging and more to increase sales, satisfaction and loyalty. Founded in 2018, Native's customers span the F&B, CPG, agrifood, cannabis and produce industries.

Aon and Zesty.ai gain approval for AI wildfire model in California – Reinsurance News

Posted: at 10:37 pm

Re/insurance broker Aon has decided to extend its strategic alliance with Zesty.ai, a tech firm that uses artificial intelligence to assess climate and non-catastrophe risk, following the approval of a new wildfire risk model in California.

The California Department of Insurance (CDI) recently approved two underwriting and rating filings, including Z-FIRE, Zesty.ai's AI-driven predictive model for wildfire risk assessment.

The filings that used the model were independently reviewed and evaluated by Aon; Z-FIRE is the first AI model ever approved by the CDI.

Z-FIRE is designed to enhance insurers' risk selection and rate setting, while helping them to understand the impact of climate change on wildfires and, in turn, on their portfolios.

Equally, Zesty.ai asserts that consumers benefit from better transparency by seeing how renovating and maintaining their properties can impact their wildfire score.

The two companies initially formed their strategic alliance in 2019; the expanded relationship between Zesty.ai and Aon is focused on the accelerated deployment of AI and Machine Learning (ML) applications that help P&C insurance carriers close the property data gap.

The goal is to improve the carriers' risk selection and rating for both personal and commercial lines of business, while helping them obtain the full value of these data-driven decisions in their reinsurance transactions.

Aon and Zesty.ai have also broadened their collaboration to develop AI models for additional perils, such as severe convective storms (e.g., hail) and flood.

"We're committed to helping clients leverage emerging technologies to advance their strategic initiatives, from increasing the availability of insurance to address the underserved, to allowing for better risk segmentation, ultimately resulting in higher transparency for reinsurance treaties," said George deMenocal, CEO of Aon's U.S. Reinsurance Solutions.

"Our collaboration with Zesty.ai is part of Aon's technological evolution to deliver new products that meet clients' needs today and tomorrow, in a transparent and efficient way. The recent approval of Zesty.ai's model by the CDI further validates the future potential of AI-driven predictive analytics and the power of these collaborations."

Attila Toth, Founder and CEO of Zesty.ai, also commented: "A deep understanding of every property is key to an equitable and stable insurance market. As the need to better measure climate risk increases, we are constantly refining our models and developing new ones, and working with Aon greatly accelerates the adoption of these data-driven solutions. We're proud to be doubling down on our alliance with Aon to help carriers across North America improve every aspect of the underwriting, rating and reinsurance process."

LinkedIn's job-matching AI was biased. The company's solution? More AI. – MIT Technology Review

Posted: at 10:37 pm

More and more companies are using AI to recruit and hire new employees, and AI can factor into almost any stage in the hiring process. Covid-19 fueled new demand for these technologies. Both Curious Thing and HireVue, companies specializing in AI-powered interviews, reported a surge in business during the pandemic.

Most job hunts, though, start with a simple search. Job seekers turn to platforms like LinkedIn, Monster, or ZipRecruiter, where they can upload their résumés, browse job postings, and apply to openings.

The goal of these websites is to match qualified candidates with available positions. To organize all these openings and candidates, many platforms employ AI-powered recommendation algorithms. The algorithms, sometimes referred to as matching engines, process information from both the job seeker and the employer to curate a list of recommendations for each.

"You typically hear the anecdote that a recruiter spends six seconds looking at your résumé, right?" says Derek Kan, vice president of product management at Monster. "When we look at the recommendation engine we've built, you can reduce that time down to milliseconds."

"Most matching engines are optimized to generate applications," says John Jersin, the former vice president of product management at LinkedIn. These systems base their recommendations on three categories of data: information the user provides directly to the platform; data assigned to the user based on others with similar skill sets, experiences, and interests; and behavioral data, like how often a user responds to messages or interacts with job postings.
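
As a rough illustration of how those three data categories might be combined, here is a toy scoring function in Python. It is a sketch only: the class, weights and field names are hypothetical, not how LinkedIn or Monster actually score candidates.

```python
from dataclasses import dataclass

# Toy "matching engine" score built from the three data categories above.
# All names and weights are hypothetical.

@dataclass
class JobSeeker:
    stated_skills: set      # 1) information the user provides directly
    inferred_skills: set    # 2) data assigned from users with similar profiles
    engagement_rate: float  # 3) behavioral data, e.g. reply/apply frequency

def match_score(seeker: JobSeeker, required_skills: set) -> float:
    direct = len(seeker.stated_skills & required_skills) / len(required_skills)
    inferred = len(seeker.inferred_skills & required_skills) / len(required_skills)
    # Behavioral data nudges the ranking toward users likely to respond,
    # which is what "optimized to generate applications" implies.
    return 0.6 * direct + 0.25 * inferred + 0.15 * seeker.engagement_rate

seeker = JobSeeker({"python", "sql"}, {"spark"}, engagement_rate=0.8)
print(round(match_score(seeker, {"python", "sql", "spark"}), 2))  # 0.6
```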

In LinkedIn's case, these algorithms exclude a person's name, age, gender, and race, because including these characteristics can contribute to bias in automated processes. But Jersin's team found that even so, the service's algorithms could still detect behavioral patterns exhibited by groups with particular gender identities.

For example, while men are more likely to apply for jobs that require work experience beyond their qualifications, women tend to only go for jobs in which their qualifications match the position's requirements. The algorithm interprets this variation in behavior and adjusts its recommendations in a way that inadvertently disadvantages women.

"You might be recommending, for example, more senior jobs to one group of people than another, even if they're qualified at the same level," Jersin says. "Those people might not get exposed to the same opportunities. And that's really the impact that we're talking about here."

Men also include more skills on their résumés at a lower degree of proficiency than women, and they often engage more aggressively with recruiters on the platform.

To address such issues, Jersin and his team at LinkedIn built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. The new AI ensures that before referring the matches curated by the original engine, the recommendation system includes a representative distribution of users across gender.
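
The article does not publish LinkedIn's algorithm, but the general idea of re-ranking results into a representative distribution can be sketched in a few lines of Python. Everything below (function name, group labels, the proportional-deficit rule) is an illustrative assumption, not LinkedIn's implementation:

```python
# A minimal sketch of representative re-ranking: interleave candidates so
# each group appears in proportion to a target share, preserving the
# engine's within-group ordering.

def representative_rerank(ranked, target_share):
    """ranked: list of (candidate, group) pairs in the engine's order.
    target_share: desired share per group, e.g. {"A": 0.5, "B": 0.5}."""
    queues = {g: [c for c, grp in ranked if grp == g] for g in target_share}
    out, counts = [], {g: 0 for g in target_share}
    for i in range(len(ranked)):
        # Pick the non-empty group currently furthest below its target share.
        g = min(
            (g for g in target_share if queues[g]),
            key=lambda g: counts[g] - target_share[g] * (i + 1),
        )
        out.append(queues[g].pop(0))
        counts[g] += 1
    return out

ranked = [("c1", "A"), ("c2", "A"), ("c3", "A"), ("c4", "B"), ("c5", "B")]
print(representative_rerank(ranked, {"A": 0.6, "B": 0.4}))
# ['c1', 'c4', 'c2', 'c5', 'c3'] (the list now mirrors the 60/40 split)
```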

Kan says Monster, which lists 5 to 6 million jobs at any given time, also incorporates behavioral data into its recommendations but doesn't correct for bias in the same way that LinkedIn does. Instead, the marketing team focuses on getting users from diverse backgrounds signed up for the service, and the company then relies on employers to report back and tell Monster whether or not it passed on a representative set of candidates.

Irina Novoselsky, CEO at CareerBuilder, says she's focused on using data the service collects to teach employers how to eliminate bias from their job postings. For example, "When a candidate reads a job description with the word rockstar, there is materially a lower percent of women that apply," she says.

AVER LAUNCHES ADVANCED 4K AI ALL-IN-ONE DISTANCE LEARNING COLLABORATION SOLUTION – eSchool News

Posted: at 10:37 pm

FREMONT, CA, June 28, 2021: AVer Information Inc., the award-winning provider of video collaboration solutions and education technology solutions, announced today the launch of the innovative VB130 All-In-One Distance Learning Collaboration Solution. Designed for the evolving K-12 distance learning classroom, the VB130 combines 4K video and a selectable 120° or 90° Field of View with clear built-in audio to give remote students a true in-class experience, while allowing in-class students and their teacher to hear audio from classmates at home.

The VB130 video bar features advanced AI technology, combining SmartFrame technology with voice tracking to keep the teacher in camera view while reducing background noise so remote learners do not lose sight of their teacher. The 4X zoom focuses on the teacher as he or she moves about the classroom, while USB connectivity allows it to seamlessly integrate with all distance learning platforms, such as Zoom, Microsoft Teams, Google Meet and others.

The built-in soundbar provides clear audio for the class while remote learners are participating, while the built-in microphone captures teacher and in-class audio for remote students to clearly hear. Its compact size, at under 14 inches wide, makes the VB130 extremely portable: it can easily be moved from class to class, or even brought home if the situation calls for it.

A manual privacy shutter is included to ensure the camera is not broadcasting during classroom downtime or during private conferences, while the built-in illumination keeps the class or teacher visible during low-light situations.

"The VB130 is the perfect all-in-one audio and video distance learning solution for schools and districts who are moving forward with long-term distance and hybrid learning options for remote students," says Thi Tran, Product Marketing Manager, K-12. "The VB130 is also compact and portable, so if school shutdowns ever occur again, it can easily be taken home for remote teaching."

About AVer Information Inc.:

Founded in 2008, AVer is an award-winning provider of education technology and video collaboration camera solutions that improve productivity and enrich learning. From accelerating learning in the classroom to increasing competitive advantage for businesses, AVer solutions leverage the power of technology to help people connect with one another to achieve great things. Our product portfolio includes Professional Grade Artificial Intelligence Enabled Auto Tracking Cameras, Zoom and Microsoft Teams Certified Enterprise Grade USB Cameras, Document Cameras and Mobile Device Charging Solutions. We strive to provide industry leading service and support that exceeds our customers' expectations. We are also deeply committed to our community and the environment, and we employ stringent green processes in all we do. Learn more at averusa.com and follow us @AVerInformation.

eSchool Media staff cover education technology in all its aspects: from legislation and litigation, to best practices, to lessons learned and new products. First published in March of 1998 as a monthly print and digital newspaper, eSchool Media provides the news and information necessary to help K-20 decision-makers successfully use technology and innovation to transform schools and colleges and achieve their educational goals.

What is Artificial Intelligence (AI)? | IBM

Posted: June 13, 2021 at 12:44 pm

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the birth of the artificial intelligence conversation was denoted by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM), which was published in 1950. In this paper, Turing, often referred to as the "father of computer science", asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test", where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI as well as an ongoing concept within philosophy as it utilizes ideas around linguistics.

Stuart Russell and Peter Norvig then proceeded to publish Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiates computer systems on the basis of rationality and thinking vs. acting:

Human approach: systems that think like humans; systems that act like humans.

Ideal approach: systems that think rationally; systems that act rationally.

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems which make predictions or classifications based on input data.

Today, a lot of hype still surrounds AI development, which is expected of any new emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes (link resides outside IBM) in his MIT lecture in 2019, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. (IBM has published more on where it stands within the conversation around AI ethics.)

Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. Narrow might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Deep learning is actually built from neural networks. "Deep" in deep learning refers to depth: a neural network comprised of more than three layers, which would be inclusive of the inputs and the output, can be considered a deep learning algorithm.
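
To make the layer counting concrete, here is a minimal NumPy sketch, with arbitrary layer sizes, of a network that qualifies as "deep" in this sense: an input layer, three hidden layers and an output layer. It is an illustrative toy, not a production model:

```python
import numpy as np

# A tiny feed-forward network: input layer, three hidden layers, output layer.
# More than three layers in total, which is what makes it "deep".

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 16, 1]  # input, 3 hidden layers, output

weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)  # ReLU activations on the hidden layers
    return x @ weights[-1]        # linear output layer

print(forward(rng.normal(size=(1, 8))).shape)  # (1, 1)
```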

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman noted in the same MIT lecture referenced above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

There are numerous real-world applications of AI systems today. Below are some of the most common examples:

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

How to mitigate bias in AI – VentureBeat

Posted: at 12:44 pm

As the common proverb goes, to err is human. One day, machines may offer workforce solutions that are free from human decision-making mistakes; however, those machines learn through algorithms and systems built by programmers, developers, product managers, and software teams with inherent biases (like all other humans). In other words, to err is also machine.

Artificial intelligence has the potential to improve our lives in countless ways. However, since algorithms often are created by a few people and distributed to many, it's incumbent upon the creators to build them in a way that benefits populations and communities equitably. This is much easier said than done: no programmer can be expected to hold the full knowledge and awareness necessary to build a bias-free AI model, and further, the data gathered can be biased as a result of the way they are collected and the cultural assumptions behind those empirical methods. Fortunately, when building continuously learning AI systems of the future, there are ways to reduce that bias within models and systems. The first step is about recognition.

It's important to recognize that bias exists in the real world, in all industries and among all humans. The question to ask is not how to make bias go away but how to detect and mitigate such bias. Understanding this helps teams take accountability to ensure that models, systems, and data are incorporating inputs from a diverse set of stakeholders and samples.

With countless ways for bias to seep into algorithms and their applications, the decisions that impact models should not be made in isolation. Purposefully cultivating a workgroup of individuals from diversified backgrounds and ideologies can help inform decisions and designs that foster optimal and equitable outcomes.

Recently, the University of Cambridge conducted an evaluation of over 400 models attempting to detect COVID-19 faster via chest X-rays. The analysis found many algorithms had both severe shortcomings and a high risk of bias. In one instance, a model trained on X-ray images of adult chests was tested on a data set of X-rays from pediatric patients with pneumonia. Although adults experience COVID-19 at a higher rate than children, the model positively identified cases disproportionally. It's likely because the model weighted rib sizes in its analysis, when in fact, the most important diagnostic approach is to examine the diseased area of the lung and rule out other issues like a collapsed lung.

One of the bigger problems in model development is that the datasets rarely are made available due to the sensitive nature of the data, so it's often hard to determine how a model is making a decision. This illustrates the importance of transparency and explainability in both how a model is created and its intended use. Having key stakeholders (e.g., clinicians, actuaries, data engineers, data scientists, care managers, ethicists, and advocates) developing a model in a single data view can remove several human biases that have persisted due to the siloed nature of healthcare.

It's also worth noting that diversity extends much further than the people creating algorithms. Fair algorithms test for bias in the underlying data in their models. In the case of the COVID-19 X-ray models, this was the Achilles heel. The data sampled and collected to build models can underrepresent certain groups whose outcomes we want to predict. Efforts must be made to build more complete samples with contributions from underrepresented groups to better represent populations.

Without developing more robust data sets and processes around how data is recorded and ingested, algorithms may amplify psychological or statistical bias from how the data was collected. This will negatively impact each step of the model-building process, such as the training, evaluation, and generalization phases. However, by including more people from different walks of life, the AI models built will have a broader understanding of the world, which will go a long way toward reducing the inherent biases of a single individual or homogeneous group.

It may surprise some engineers and data scientists, but lines of code can create unfairness in many ways. For example, Twitter automatically crops uploaded images to improve user experience, but its engineers received feedback that the platform was incorrectly missing or misidentifying certain faces. After multiple attempts to improve the algorithm, the team ultimately realized that image trimming was a decision best made by people. Choosing the argmax (largest predicted probability) when outputting final predictions amplifies disparate impact. An enormous number of test data sets, as well as scenario-based testing, are needed to neutralize these concerns.
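
To see why taking the argmax can amplify disparate impact, consider a toy simulation in Python. The groups, score distributions and threshold below are entirely hypothetical; the point is only that a small gap in average scores becomes a large gap in hard decisions:

```python
import numpy as np

# Hypothetical model scores for two groups: group A averages slightly
# higher than group B, with the same spread.
rng = np.random.default_rng(1)
scores_a = rng.normal(0.52, 0.05, 10_000)  # hypothetical "accept" probability
scores_b = rng.normal(0.48, 0.05, 10_000)

# Argmax over {accept, reject} is equivalent to thresholding at 0.5.
rate_a = (scores_a > 0.5).mean()
rate_b = (scores_b > 0.5).mean()

print(f"group A selected: {rate_a:.0%}")  # ~66%
print(f"group B selected: {rate_b:.0%}")  # ~34%
# A 4-point gap in average scores becomes a roughly 2x gap in outcomes.
```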

There will always be gaps in AI models, yet it's important to maintain accountability for them and correct them. And fortunately, when teams detect potential biases with a base model that is built and performs sufficiently, existing methods can be used to de-bias the data. Ideally, models shouldn't run without a proper continuous feedback loop where predicted outputs are reused to train new versions. When working with diverse teams, data, and algorithms, building feedback-aware AI can reduce the innate gaps where bias can sneak in; yet without diversity of inputs, AI models will just re-learn their biases.

If individuals and teams are cognizant of the existence of bias, then they have the necessary tools at the data, algorithm, and human levels to build a more responsible AI. The best solution is to be aware that these biases exist and maintain safety nets to address them for each project and model deployment. What tools or approaches do you use to create algorithm fairness in your industry? And most importantly, how do you define the purpose behind each model?

Akshay Sharma is executive vice president of artificial intelligence at digital health company Sharecare.

AI is about to shake up music forever, but not in the way you think – BBC Science Focus Magazine

Posted: at 12:44 pm

Take a hike, Bieber. Step aside, Gaga. And watch out, Sheeran. Artificial intelligence is here and it's coming for your jobs.

That's, at least, what you might think after considering the ever-growing sophistication of AI-generated music.

While the concept of machine-composed music has been around since the 1800s (computing pioneer Ada Lovelace was one of the first to write about the topic), the fantasy has become reality in the past decade, with musicians such as Francois Pachet creating entire albums co-written by AI.

Some have even used AI to create new music from the likes of Amy Winehouse, Mozart and Nirvana, feeding their back catalogue into a neural network.

Even stranger, this July, countries across the world will compete in the second annual AI Song Contest, a Eurovision-style competition in which all songs must be created with the help of artificial intelligence. (In case you're wondering, the UK scooped more than nul points in 2020, finishing in a respectable 6th place.)

But will this technology ever truly become mainstream? Will artificial intelligence, as artist Grimes fears, soon make musicians obsolete?

To answer these questions and more, we sat down with Prof Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London. Below, he explains how AI music is composed, why this technology won't crush human creativity, and how robots could soon become part of live performances.

Music AIs use neural networks that are really large sets of bits of computers that try and mimic how the brain works. And you can basically throw lots of music at this neural network and it learns patterns just like how the human brain does by repeatedly being shown things.

What's tricky about today's neural networks is they're getting bigger and bigger. And they're becoming harder and harder for humans to understand what they're actually doing.

We're getting to a point now where we have these essentially black boxes that we put music into and nice new music comes out. But we don't really understand the details of what it's doing.

These neural networks also consume a lot of energy. If you're trying to train AI to analyse the last 20 years of pop music, for instance, you're chucking all that data in there and then using a lot of electricity to do the analysis and to generate a new song. At some point, we're going to have to question whether the environmental impact is worth this new music.

I'm a sceptic on this. A computer may be able to make hundreds of tracks easily, but there is still likely a human selecting which ones they think are nice or enjoyable.

There's a little bit of smoke and mirrors going on with AI music at the moment. You can throw Amy Winehouse's back catalogue into an AI and a load of music will come out. But somebody has to go and edit that. They have to decide which parts they like and which parts the AI needs to work on a bit more.

The problem is that we're trying to train the AI to make music that we like, but we're not allowing it to make music that it likes. Maybe the computer likes a different kind of music than we do. Maybe the future would just be all the AIs listening to music together without humans.

I'm also kind of a sceptic on that one as well. AI can generate lyrics that are interesting and have an interesting narrative flow. But lyrics for songs are typically based on people's life experiences, what's happened to them. People write about falling in love, things that have gone wrong in their life or something like watching the sunrise in the morning. AIs don't do that.

I'm a little bit sceptical that an AI would have that life experience to be able to communicate something meaningful to people.

This is where I think the big shift will be: mash-ups between different kinds of musical styles. There's research at the moment that takes the content of one kind of music and puts it in the style of another kind of music, exploring maybe three or four different genres at once.

While it's difficult to try these mash-ups in a studio with real musicians, an AI can easily try a million different combinations of genres.

People say this with every introduction of new technology into music. With the invention of the gramophone, for example, everybody was worried, saying it would be terrible and the end of music. But of course, it wasn't. It was just a different way of consuming music.

AI might allow more people to make music because it's now much easier to make a professional-sounding single using just your phone than it was 10 or 20 years ago.

A woman interacts with an AI music conductor during the 2020 Internet Conference in Wuzhen, Zhejiang Province of China. Getty

At the moment, AI is like a tool. But in the near future, it could be more of a co-creator. Maybe it could help you out by suggesting some basslines, or give you some ideas for different lyrics that you might want to use based on the genres that you like.

I think the co-creation between the AI and the human as equal creative partners will be the really valuable part of this.

AI can create a pretty convincing human voice simulation these days. But the real question is why you would want it to sound like a human anyway. Why shouldn't the AI sound like an AI, whatever that is? That's what's really interesting to me.

I think we're way too fixated on getting the machines to sound like humans. It would be much more interesting to explore how it would make its own voice if it had the choice.

I love musical robots. A robot that can play music has been a dream for so many for over a century. And in the last maybe five or 10 years, it's really started to come together, where you've got the AI that can respond in real-time and you've got robots that can actually move in very sort of human and emotional ways.

The fun thing is not just the music that they're making, but it's the gestures that go with the music. They can nod their heads or tap their feet to the beat. People are now building robots that you can play with in real-time in a sort of band-like situation.

What's really interesting to me is that this combination of technology has come together where we can really feel like it's a real living thing that we're playing music with.

Yeah, for sure. I think that'd be great! It will be interesting to see what an audience makes of it. At the moment it's quite fun to play as a musician with a robot. But is it really fun watching robots perform? Maybe it is. Just look at Daft Punk!

Nick Bryan-Kinns is director of the Media and Arts Technology Centre at Queen Mary University of London, and professor of Interaction Design. He is also a co-investigator at the UKRI Centre for Doctoral Training in AI for Music, and a senior member of the Association for Computing Machinery.

AI can now convincingly mimic cybersecurity experts and medical researchers – The Next Web

Posted: at 12:44 pm

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation, flagged and unflagged, has been aimed at the general public. Imagine the possibility of misinformation (information that is false or misleading) in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it's possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there's too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.

Transformers have aided Google and other technology companies by improving their search engines and have helped the general public in combating such common problems as writer's block. Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying humanlike capabilities in generating text.

Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.

We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
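
The paper itself doesn't include code, but the seeded-generation step it describes maps onto a few lines with the Hugging Face transformers library. This sketch uses the stock GPT-2 model and a made-up seed phrase; the authors fine-tuned their model on cyberthreat intelligence first, which this sketch skips:

```python
from transformers import pipeline

# Load a text-generation pipeline with the base (not fine-tuned) GPT-2 model.
generator = pipeline("text-generation", model="gpt2")

# Seed with the opening phrase of a hypothetical threat description and
# let the model write the rest, as described above.
seed = "A new vulnerability in real-time flight data systems allows attackers to"
result = generator(seed, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```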

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.

An example of AI-generated cybersecurity misinformation. The Conversation, CC BY-ND

This misleading piece of information contains incorrect information concerning cyberattacks on airlines with sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acts on the fake information in a real-world scenario, the airline in question could have faced a serious attack that exploits a real, unaddressed vulnerability.

A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medRxiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but was generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.

An example of AI-generated health care misinformation. The Conversation, CC BY-ND

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.

Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.

We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.

Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people's credulity, especially if the information is not from reputable news sources or published scientific work.

This article by Priyanka Ranade, PhD Student in Computer Science and Electrical Engineering, University of Maryland, Baltimore County; Anupam Joshi, Professor of Computer Science & Electrical Engineering, University of Maryland, Baltimore County; and Tim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, is republished from The Conversation under a Creative Commons license. Read the original article.
