Great Resignation: How to Be Successful in Attracting Top AI and ML Talent – EisnerAmper

It has always been challenging to find top artificial intelligence (AI) and machine learning (ML) talent -- and today's environment has only heightened the difficulty. Non-tech companies have increased their demand for these workers, even as tech powerhouses like Google, Facebook and Amazon seek to hire thousands. Companies must broaden the funnel of potential candidates by making themselves more appealing to top candidates. Despite the high demand for AI and ML workers, the essential skills, expertise, and experience are scarce. Gartner's 2021-2023 Emerging Technology Roadmap Survey cited talent availability as the main adoption risk factor, so it's no surprise that AI and ML professionals are in high demand at companies utilizing (or seeking to utilize) these emerging technologies, whether those companies are just getting started or already have deep expertise. It's essential for firms, both tech and non-tech, to be creative in their approach.

Here are three tips for recruiting AI and ML talent:

Find where talented AI and ML engineers and data scientists hang out. At first this can be difficult to identify, but through resources like Meetup.com, firms can find groups where engineers and data scientists congregate. Meetup.com is a platform where groups of users focused on a particular topic get together and organize events, and it has long been used to build professional tech community groups. Firms can find dozens of AI and ML networking communities in most large cities. It's important for firms' hiring managers to network and socialize within these groups, letting other users know why their firm is the best place to work.

Invest in creating partnerships with top tech universities for recruitment. An example of this is collaborating with data science graduate programs at local universities. From there, you have first pick of the top talent straight out of those programs. This can provide a steady technology talent pipeline and connect you with the strongest students.

When interviewing candidates, it's important to paint a clear picture of a culture of digital innovation and explain why their work will matter. This shows candidates that the firm is passionate about helping transform its clients' businesses using emerging technologies. For example, point to AI and ML use cases that team members have contributed and the value those projects deliver to the firm and its clients.

As you invest in AI talent promotion and development, collaborate with your human resources team to personalize an approach for building AI skills at work that meets the industry's changing expectations. This supports the promotion of internal team members and gives them the AI skills and training needed to take on roles like data scientist, data engineer, ML engineer, and business intelligence analyst.

Finally, an open innovation culture attracts top tech talent, regardless of an individual's race, gender, or background. It shows that firms are passionate about the solutions they build and the client experience those solutions drive. When it comes to recruiting AI and ML talent, firms should no longer try to compete head-on with big tech companies like Amazon, Microsoft, Facebook, Google, and IBM. A more viable approach is to collaborate with leading technology companies, allowing teams to work on best-in-class AI and automation solutions from those vendors. With the digital revolution that COVID-19 has kickstarted, there is an opportunity for all companies to establish a strong reputation for digital excellence through the recruitment of an open, innovative, and diverse new workforce. As your reputation grows, so will your ability to attract top AI and ML talent.

Read more here:

Great Resignation: How to Be Successful in Attracting Top AI and ML Talent - EisnerAmper

Google to update business hours with Artificial Intelligence (AI) – Techiexpert.com – TechiExpert.com

Google has announced that it is working to update business hours on Google Maps with the help of artificial intelligence, including its restaurant-calling Duplex technology. The company says it will update the information in Maps once it is confident enough in the AI's prediction of what a business's hours should be.

It's challenging to keep Google Maps up to date with a business's working hours. The pandemic made the problem worse: hours of operation became unpredictable, and many listings haven't been updated since. As a result, Google announced it would use AI to update business hours.

Google built a machine-learning model that estimates whether a business's listed hours are likely to be accurate. It does this by recognizing patterns such as when the store is busiest, photographs of the storefront showing posted opening and closing hours, and more. The system then decides whether the Google My Business (GMB) profile should be updated with the inferred hours, comparing the information already on the GMB page against the data it has gathered.

In a blog post, Google outlines the parameters the AI weighs when deciding whether to make a change. To estimate how likely the listed hours are to be wrong, it looks at when the business profile was last updated and at Popular Times data.

According to Google, if its AI concludes that the hours should be edited, it looks at even more data. It will collect information from the business's official website and even examine Street View photographs to determine when the business appears to be open. Google says it will double-check the AI's predictions with real people, such as Google Maps users and business owners, and will even use Duplex in some countries to ask businesses about their hours directly.

What is Google's Duplex conversational technology, and how does it work?

The technology focuses on collecting the right information through conversation. The AI voice calls the business's listed number and asks about its hours, requiring very little input from the owner. By using this technology to gather working hours, Google avoids a door-to-door approach.

Beyond this, Google will also use AI to update speed limits on various roads, a safety feature shown while navigating with Google Maps. It will do so by collaborating with third-party imagery providers, using photographs to verify the speed-limit signs on a particular road.

As a result, speed limits will stay current, improving safety for Google Maps users. Toll charges will also now be displayed in Google Maps: you can see how much a toll costs, and the app can route you along highways that avoid toll booths where such routes exist.

See the rest here:

Google to update business hours with Artificial Intelligence (AI) - Techiexpert.com - TechiExpert.com

What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, narrow or weak AI doesn't aim to reproduce the functionality of the human brain, and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.
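
As a concrete illustration of the rule-based approach, here is a minimal Python sketch; the email-triage domain, the rules, and the addresses are invented for illustration and are not from any particular GOFAI system.

```python
# Minimal sketch of rule-based ("good old-fashioned") AI: behavior is fully
# determined by hand-written if-then rules. Rules and thresholds are hypothetical.

def triage_email(subject: str, sender: str) -> str:
    """Classify an email using explicit, human-authored rules."""
    subject_lower = subject.lower()
    if sender.endswith("@mycompany.example"):        # rule 1: trust internal mail
        return "inbox"
    if "invoice overdue" in subject_lower:           # rule 2: flag billing issues
        return "urgent"
    if "winner" in subject_lower or "free $$$" in subject_lower:  # rule 3: spam phrases
        return "spam"
    return "inbox"                                   # default when no rule fires

print(triage_email("You are a WINNER!", "promo@ads.example"))  # -> spam
```

The appeal of this style, as noted above, is total transparency: every decision traces back to a rule a human wrote.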

Rule-based AI is still very popular in fields where the rules are clearcut. One example is video games, in which developers want AI to deliver a predictable user experience.

The problem with GOFAI is that contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images, a complex feat that humans accomplish instinctively, is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
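
As a hedged sketch of that sales-forecasting example, the snippet below fits a simple model to invented monthly sales figures using scikit-learn; a real system would use far more data and richer features.

```python
# Minimal machine-learning sketch: fit a model on historical sales samples,
# then forecast future months. Data and feature choice are purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)            # months 1..12 as the only feature
sales = np.array([110, 118, 125, 131, 140, 150,      # invented monthly sales figures
                  162, 170, 181, 195, 204, 215])

model = LinearRegression()
model.fit(months, sales)                              # "train" on historical samples

next_quarter = np.array([[13], [14], [15]])
print(model.predict(next_quarter))                    # forecast the next three months
```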

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.
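
For illustration, here is a minimal sketch of using an ImageNet-pretrained classifier via torchvision (one common option among many); the image path is a placeholder, and the weights argument shown is for recent torchvision versions (older releases use pretrained=True instead).

```python
# Minimal sketch: classify an image with a network pretrained on ImageNet.
# "cat.jpg" is a placeholder path; mapping the class index to a label is omitted.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet channel statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights="IMAGENET1K_V1")   # ImageNet-trained weights
model.eval()

image = Image.open("cat.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)              # add a batch dimension

with torch.no_grad():
    logits = model(batch)
print(int(logits.argmax(dim=1)))                    # index of the predicted ImageNet class
```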

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.

Further Reading

Here is the original post:

What Is Artificial Intelligence (AI)? | PCMag

Wimbledon to Use IBM’s Watson AI for Highlights, Analytics, Helping Fans – Bloomberg

International Business Machines Corp.'s Watson is seen in the immersion room during an event at the company's headquarters in New York.

The Wimbledon tennis tournament, which starts Monday, will use IBM's artificial intelligence agent Watson to help direct fans to the most exciting matches, automatically generate video highlight reels and guide guests through the grounds of the All England Lawn Tennis Club.

A voice-activated digital assistant called "Fred," named after British tennis great Fred Perry, will help those attending Wimbledon find their way around. Visitors can ask Fred for directions to the nearest strawberry stand, how to buy a Wimbledon towel or who is playing right now on Centre Court. Fred will also help visitors find other activities -- such as the children's play area -- they might want to check out while at the Club. The assistant is powered by Watson's natural language processing ability.

Another IBM technology will help fans find matches that are likely to be the most exciting to watch by analyzing player statistics. IBM and AELTC have jointly developed a new metric called competitive margin, which is the differential between the opposing players' ratios of forced to unforced errors. If there is little margin between them, the match is likely to be a close-fought contest. The new technologies were unveiled by IBM and AELTC on Tuesday.
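
Based on that description, here is a hedged sketch of how such a metric could be computed; the sample statistics are invented, and the exact formula IBM and AELTC use is not spelled out in the article.

```python
# Hedged sketch of the "competitive margin" idea described above: each player's
# ratio of forced to unforced errors, and the gap between them. This illustrates
# the article's description, not IBM/AELTC's actual formula.

def error_ratio(forced_errors: int, unforced_errors: int) -> float:
    """Ratio of forced to unforced errors for one player."""
    return forced_errors / max(unforced_errors, 1)   # guard against division by zero

def competitive_margin(p1_forced, p1_unforced, p2_forced, p2_unforced) -> float:
    """Differential between the players' error ratios; near zero suggests a close match."""
    return abs(error_ratio(p1_forced, p1_unforced) - error_ratio(p2_forced, p2_unforced))

# Invented match statistics: a small margin suggests a close-fought contest.
print(competitive_margin(p1_forced=18, p1_unforced=20, p2_forced=15, p2_unforced=16))
```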

Watson and International Business Machines Corp.'s $18 billion "cognitive computing" group are a rare bright spot at the company, which has faced a years-long slump in sales and earnings. New York-based IBM is counting on the unit as its long-term growth driver. IBM has called Watson a connector across all its services and software and has announced a large number of deals to build out Watson-related projects.

IBM has sponsored Wimbledon since 1990 and in recent years has used Wimbledon as a testbed for new uses of Watson.

The systems on display at AELTC, called SlamTracker with Cognitive Keys to the Match, will also give fans insights into the game, highlighting what kinds of tactics each player is likely to use against that particular opponent. It will also predict, at any given moment, which player is most likely to prevail based on the state of the game and their past performance.

"What we are trying to do is surface things in a more digestible way for the fans," Sam Seddon, IBM's Wimbledon Client and Programme Executive, said. "We are trying to lift up the insights and say this is important, and if you are interested in which way this match is going to go, focus on this point."

Alexandra Willis, head of communications, content and digital for AELTC, said Wimbledon is trying to "move beyond data and statistics actually into stories and this idea of making it more applicable, approachable to more people."

IBM will also be using artificial intelligence to automatically compile highlight reels for matches taking place on six of Wimbledon's courts. This system will look at everything from the importance of a point to the game's outcome, the noise of the crowd reacting to that point, the volume and sentiment of social media posts and even facial analysis of the players themselves, to determine the best portions of video to include in a highlight reel for that game.

Willis said the technology will be able to put together highlight videos in less than 30 minutes -- compared to 45 minutes to an hour for the human editors the Club has used previously -- and that this will free up valuable staff time to spend on other critical tasks.

To provoke social media discussion among fans, Seddon said Watson would make "some quite provocative" statements on the theme of what traits are most important for a Wimbledon champion. The statements will be based on Watson's analysis of 53.7 million Wimbledon tennis data points since 1990 and an analysis of more than 11 million words of press coverage of the tournament going back to 1995.

Using this data, Watson analyzed players across six broad factors -- passion, performance under pressure, serve effectiveness, stamina, how well the player either adapted their normal playing style to an opponent or was able to force an opponent to conform to their tactics, and the ability to return serves. It will then use these factors to make an argument about which factors are most important, which AELTC and IBM hope will prompt vigorous debate on social media channels, the two organizations said.

Mick Desmond, commercial and media director for AELTC, said that the club was pursuing these digital strategies ultimately to grow the tournament's audience, particularly online and in new markets like China.

"Disruption is all around us and certainly we take nothing for granted," Desmond said. "What we want to do is not only tell the stories of the existing great players, but start to tell the stories ofthe younger players coming through as we make them the future stars."

For its part, Seddon said that IBM tries to take technologies it pioneers at Wimbledon and bring them to a wider group of business customers. Some of the insights from previous years, into which videos attract the most attention from Wimbledon fans, for instance, fed into work IBM later did for AMC Networks International to help predict which television programs will get the highest ratings, Seddon said.

AELTC's use of Watson is just part of a host of new technologies it is pioneering at this year's tournament, including 360-degree video and augmented reality from the practice courts. Fans watching practice matches will be able to point their phones at players on the court and get insights into who they are, their past performance and why they might be interesting to keep an eye on.

Go here to read the rest:

Wimbledon to Use IBM's Watson AI for Highlights, Analytics, Helping Fans - Bloomberg

Japanese government wants to use AI to play cupid for its citizens – CNET

In Japan, not only can you have artificial intelligence pick your mate, but you can also have two giant Pikachu mascots standing by as you say "I do" at your wedding.

Finding the perfect mate can feel like an impossible quest, especially when in-person interaction has come to a screeching halt thanks to COVID lockdowns and quarantines. But if you live in Japan, the government there wants to help you find eternal love -- or at least your future spouse -- by using artificial intelligence.

In an effort to boost Japan's declining birth rate, the Japanese government has been trying to help single heterosexual men and women find true love so they get married and start families. The number of annual marriages in Japan has fallen from 800,000 in 2000 to 600,000 in 2019.

According to Sora News 24, roughly 25 of Japan's 47 prefectures currently have some sort of government-run matchmaking service for singles, where users plug in their preferences for a potential mate -- which include age, income and educational level. The dating services then provide a list of other users who meet their criteria.

However, Japan's Cabinet Office now thinks the current dating services aren't advanced enough to help singles make lasting romantic connections. That's where artificial intelligence could come to the rescue.

The new AI dating systems would work by having users answer more specific questions catered to their personal values on a variety of topics.

The users would also have to share more information about their own hobbies and interests, like Pokemon, in case you want to have a Pikachu-themed wedding.

Using this more personality-driven service (rather than just using age, income and education level as the main criteria), there is a higher probability the match could lead to marriage.

The government would pay for two-thirds of the costs of introducing and operating the new and improved AI dating systems.

Currently, Japan's Cabinet Office is asking for budget approval of two billion yen (about $19.05 million) for the new AI-enabled dating service, which would then launch at the start of spring.

View original post here:

Japanese government wants to use AI to play cupid for its citizens - CNET

Tableau update uses AI to increase speed to insight – TechCrunch

Tableau was acquired by Salesforce earlier this year for $15.7 billion, but long before that, the company had been working on its fall update, and today it announced several new tools, including a new feature called Explain Data that uses AI to get to insight quickly.

What Explain Data does is move users from understanding what happened to why it might have happened, by automatically uncovering and explaining what's going on in their data. "So what we've done is we've embedded a sophisticated statistical engine in Tableau, that when launched automatically analyzes all the data on behalf of the user, and brings up possible explanations of the most relevant factors that are driving a particular data point," Tableau chief product officer Francois Ajenstat explained.

He added that what this really means is that it saves users time by automatically doing the analysis for them, and it should help them do better analysis by removing biases and helping them dive deep into the data in an automated fashion.

Ajenstat says this is a major improvement, in that previously users would have to do all of this work manually. "So a human would have to go through every possible combination, and people would find incredible insights, but it was manually driven. Now with this engine, they are able to essentially drive automation to find those insights automatically for the users," he said.

He says this has two major advantages. First of all, because it's AI-driven, it can deliver meaningful insight much faster, but it also gives a more rigorous perspective of the data.

In addition, the company announced a new Catalog feature, which provides data breadcrumbs showing the source of the data, so users can know where the data came from and whether it's relevant or trustworthy.

Finally, the company announced a new server management tool that helps companies with broad Tableau deployment across a large organization to manage those deployments in a more centralized way.

All of these features are available starting today for Tableau customers.

Read the original post:

Tableau update uses AI to increase speed to insight - TechCrunch

How AI Is Impacting Operations At LinkedIn – Forbes

LinkedIn has been at the cutting edge of AI for years and uses AI in many ways users may not be aware of. I recently had the opportunity to talk to Igor Perisic, Chief Data Officer (CDO) and VP of Engineering at LinkedIn, to learn more about the evolution of AI at LinkedIn, how it's being applied to daily activities, how worldwide data regulations impact the company, and some unique insight into the changing AI-related work landscape and job roles.

Igor Perisic, Chief Data Officer and VP of Engineering at LinkedIn

The Evolution of AI at LinkedIn

Very early on at LinkedIn, data was identified as one of the company's core differentiating factors. Another differentiating factor was a core company value of "members first" (clarity, consistency, and control of how member data is used) and the company's vision to create economic opportunity for every member of the global workforce.

As LinkedIn began finding more and more ways to weave AI into its products and services, it also recognized the importance of ensuring all employees were well-equipped to work with AI as needed in their jobs. To that end, the company created an internal training program called the AI Academy. It's a program that teaches everyone from software engineers to sales teams about AI at the level most suited to them, so that they are prepared to work with these technologies.

One of the very first AI projects was the People You May Know (PYMK) recommendations. Essentially, this is an algorithm that recommends to members other people that they may know on the platform and helps them build their networks. It is a recommendation system that is still central to LinkedIn's products, although now it is much more sophisticated than it was in those early days. PYMK as a data product began around 2006. It was started by folks who would eventually be known as one of the first data science teams in the tech industry. Back in those early days, no one referred to PYMK as an AI project, as the term AI had not yet come back into favor as a buzzword.

The other significant project which we started around the same time was of course search ranking, which was a classic AI problem at that time due to the emergence of Google and competition in the search engine space.

How AI is applied to daily activities

At LinkedIn, Igor says, "we compare AI to oxygen: it permeates everything we do. For example, for our members, it helps recommend job opportunities, organizes their feed, ensures that the notifications they receive are timely and informative, and suggests LinkedIn Learning content to help them learn new skills." With respect to LinkedIn's enterprise products, he says AI helps salespeople reach members that have an interest in their products, marketers serve relevant sponsored content, and recruiters identify and reach out to new talent pools. The benefits of AI at LinkedIn also operate in the background, "from helping protect members from fraudulent and harmful content to routing internet connections to ensure the best possible site speed for our members."

Ensuring member safety on the platform is something the company takes very seriously. Being a social network with a very strong professional intent, it's important to act quickly in identifying and preventing abuse. Because abuse and threats are constantly changing, AI is certainly at the core of these efforts. LinkedIn has found machine learning very helpful in detecting inappropriate profiles.

Without AI, many of their products and services would simply not function. The economic graph they use to represent the global economy is simply too large and too nuanced to be understood without it.

AI is literally enhancing every experience, starting from the notifications our members are getting about relevant items. But probably one of the most prominent ways through which members experience AI is in the feed, which sorts and ranks a heterogeneous inventory of activities (posts, news, videos, articles, etc.). To ensure relevance in the feed, it's important that the algorithms consider the different nuances of content recommendations and members' preferences.

One interesting example Igor shares is that at the start of 2018, they discovered an uneven distribution of engagement in the feed: gains in viral actions were accrued by the top 1% of power users, and the majority of creators were increasingly receiving zero feedback. The feed model was simply doing as it was told: sharing broad-interest, viral content that would generate lots of engagement. However, he says they realized that this optimization wasn't necessarily the most beneficial for all members. To combat the negative ecosystem effect that the AI had created, they incorporated creator-side optimization in their feed relevance objective function to help creators with smaller audiences. With this update, the ranking algorithms began taking into consideration the value that would result for both viewer and creator in surfacing a specific item. For the viewer, they wanted to surface relevant content based on their preferences, and for the creator, they wanted to encourage high-quality content and help them reach their audiences. Igor says that by tweaking their models to optimize for more than just viral sharing moments, the feed changed into a healthy mix of content from influencers as well as direct connections, which then improved engagement for both viewers and creators.
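
As a rough, hypothetical illustration of what adding a creator-side term to a feed-ranking objective can look like (the signals, weights, and function below are invented, not LinkedIn's actual model):

```python
# Rough sketch of a feed-ranking score with a creator-side term added.
# Signals and weights are invented for illustration; LinkedIn's real objective
# function and features are not described in detail in this article.

def item_score(p_viewer_engages: float,
               creator_value: float,
               creator_weight: float = 0.3) -> float:
    """Rank items by viewer relevance plus the value feedback brings the creator."""
    return (1 - creator_weight) * p_viewer_engages + creator_weight * creator_value

candidates = [
    {"id": "viral_post",         "p_viewer_engages": 0.9, "creator_value": 0.1},
    {"id": "small_creator_post", "p_viewer_engages": 0.6, "creator_value": 0.9},
]
ranked = sorted(candidates,
                key=lambda c: item_score(c["p_viewer_engages"], c["creator_value"]),
                reverse=True)
print([c["id"] for c in ranked])   # creator_weight > 0 lifts the smaller creator's post
```

With a pure viewer-engagement objective (creator_weight of zero), the viral post always wins; a nonzero creator term is one simple way to express the rebalancing described above.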

How worldwide data regulations impact LinkedIn

In recent years, regions around the world have started to put in place laws governing how companies are able to store and use user data. Laws such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are intended to enhance privacy rights and consumer protection. For some companies, becoming compliant meant having to totally change how they approach data. Luckily for LinkedIn, data was always considered an asset and approached with respect as one of the company's core differentiating factors.

Even before GDPR, Igor says LinkedIn had an internal framework they call the 3Cs: clarity, consistency, and control. He says, "We believed then and still do today that we owed it to our members to provide clarity about what we do with their data, to be consistent in only doing as we say, and to give our members control over their data." In that context, LinkedIn approached GDPR as an opportunity to reinforce its commitment to data privacy for all members globally. For example, LinkedIn extended GDPR Data Subject Rights to all members globally. They continue to be thoughtful in how they approach the use of members' data throughout LinkedIn and in AI, and in how they review and update processes, to ensure privacy by design. Acting in the best interest of members continues to be LinkedIn's north star, and they have always felt that it's their joint responsibility across the organization to protect members' data.

The changing AI work landscape

As a very large professional social network, LinkedIn has the unique opportunity to see insights about changing job roles, popular positions, and regional popularity that other companies might not have. At the end of last year, LinkedIn released its third annual Emerging Jobs Report to identify the most rapidly growing jobs. AI specialist emerged as the #1 emerging job on that list, showing 74% annual growth over the past four years. It's especially exciting to see this growth beyond the tech industry. In 2017, LinkedIn found that the education sector had the second-highest number of core AI skills added by members, showing that AI's growth is correlated with more research in the field.

More recently, amid the economic downturn caused by the pandemic, LinkedIn is still observing that the AI job market continues to grow. When normalized against overall job postings, AI jobs increased 8.3% in the ten weeks after the COVID-19 outbreak in the U.S. Even though AI job listings are growing slower than they did before the pandemic, and despite an overall slowdown in demand for talent, employers still appear to be open to hiring AI specialists.

What's interesting about the field of AI is that LinkedIn is seeing an entire ecosystem of technical roles that support different stages of the AI lifecycle. Going back to the Emerging Jobs Report at the end of last year, AI specialist roles (people who build and train models, etc.) are up, but so-called AI-adjacent jobs are also on the rise. This means you're seeing more demand for data scientists, data engineers, and cloud engineers. You're also seeing this demand growing across multiple industries, not just the technology sector. It is across the entire spectrum.

Future Impact of AI

At the end of the day, AI is a tool, and its greatest potential lies in how it will augment human intelligence and enable people to achieve more. LinkedIn's current AI tools depend greatly on human input and can never fully be automated.

Igor strongly believes that the future of AI is in applications, and especially in how we leverage the tool to make us all smarter and enable us to do more. To do so, AI needs to be much more accessible to a wider set of individuals than just AI experts. AI needs to become more plug-and-play, almost a point-and-click interface. He's seeing the major cloud players get into this space, developing tools that help lower the barrier of entry into AI. Once AI is application-driven, it opens up human creativity to develop really cool and interesting use cases.

In that context, AI technologies are really fascinating across the entire spectrum; from algorithmic and mathematical developments to hardware and AI systems. Just think about the ingenuity researchers have shown in attempting to make their deep neural nets simply converge. In the AI landscape, it seems that there are treasures behind every bush or under every rock.

Follow this link:

How AI Is Impacting Operations At LinkedIn - Forbes

Facebook kills AI that invented its own language because English was slow – PC Gamer

Some wonderful things are in development because of advances made in artificial intelligence and machine learning technologies. At the same time, there is perhaps an uncomfortable fear that machines may rise up and turn against humans. Usually the scenario is brought up in a joking manner, but it was no laughing matter to researchers at Facebook, who shut down an AI they invented after it taught itself a new language, Digital Journal reports.

The AI was trained in English but apparently had grown fed up with the various nuances and inconsistencies. Rather than continue down that path, it developed a system of code words to make communication more efficient.

What spooked the researchers is that the phrases used by the AI seemed like gibberish and were unintelligible to them, but made perfect sense to AI agents. This allowed the AI agents to communicate with one another without the researchers knowing what information was being shared.

During one exchange, two bots named Bob and Alice abandoned English grammar rules and started communicating using the made-up language. Bob kicked things off by saying, "I can i i everything else," which prompted Alice to respond, "balls have zero to me to me to me..." The conversation went on in that manner.

The researchers believe the exchange represents more than just a bunch of nonsense, which is what it appears to be on the surface. They note that repeating words and phrases such as "i" and "to me" are indicative of how AI works. In this particular conversation, they believe the bots were discussing how many of each item they should take.

AI technologies use a "reward" system in which they expect a course of action to have a "benefit."

"There was no reward to sticking to English language," Dhruv Batra, a research scientists from Georgia Tech who was at Facebook AI Research (FAIR), told Fast Co. Design. "Agents will drift off understandable language and invent codewords for themselves. Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

Facebook ultimately determined that it wanted its bots to speak in plain English, in part because the interest is in making bots that can talk with people. However, researchers at Facebook also admitted that they can't truly understand languages invented by AI.

Here is the original post:

Facebook kills AI that invented its own language because English was slow - PC Gamer

Why Elon Musk Is Wrong About AI – Fortune

There's a growing debate about the impact that artificial intelligence will have on the future, with two tech luminaries, Tesla (TSLA) CEO Elon Musk and Facebook (FB) CEO Mark Zuckerberg, as figureheads representing the glass-half-empty and glass-half-full perspectives, respectively. Last week, Musk commented that AI is an existential risk for human civilization. Zuckerberg retorted that comments like this are pretty irresponsible, to which Musk tweeted that Zuckerberg's understanding of the subject is limited. While these comments refer to sweeping impacts, many are debating one specific area where we are already seeing the effects of AI: jobs.

As humans, we're trained to watch for threats to our survival and predict tragedies. Jobs are intrinsically linked to our survival, as they're the way most of us earn income and are therefore able to provide for our basic needs. However, many are predicting that with the advent of AI, we will see the rise of a useless class: people who are not just unemployed, but unemployable.

This is a chilling and pessimistic view of the future. If the last century of incredible advances in digital technologies leads to the creation of a useless class of people who have nothing better to do than play virtual-reality video games all day, that's a tragedy for civilization. If that's what happens, we will look back at Musk's remarks and say they were accurate. But AI itself is not a single thing; it is a set of combined technologies that humans are creating, and whose impacts, including impacts on work, humans are guiding.

In particular, those of us in the technology industry have an obligation to shape the future of AI and robotics to help create better and more productive jobs. We can leverage AI to ensure that opportunity is more equally distributed around the country and around the world, rather than concentrated in small pockets of urban wealth and opportunity.

Investing energy in a vigilant watch over the future of work is wise because only one thing is sure: jobs will change. However, buying into doom and gloom is not wise, in my opinion. There is time to shape our future and make it a positive one. Everyone in society has an obligation to ensure that people are educated for a future in which AI touches every aspect of work. But it's up to those of us who build technology to ensure that it augments human workers, not replaces them.

This is an area where Silicon Valley culture has fallen short, with its obsessive focus on eliminating labor costs. However, there are indications that people in technology are starting to think differently about their obligations toward humanity, and to design their products accordingly.

When it comes to dirty, dangerous, and demeaning work, automation can save lives and increase human dignity. There are already signs that this fourth industrial revolution will increase gross domestic product and overall productivity, just as the previous three have done, and it could also increase the flexibility and geographic diversity of work. If this is what we can expect from robots and automation, bring it on.

It's true that technology has enormous power to eliminate jobs. In 1900, more than 40% of the population worked in agriculture, but by 2000, that was down to 2%, thanks to the efficiencies introduced by farming machines, as economist David Autor points out. Similarly, self-driving vehicle technologies may eventually make millions of truck drivers, taxi drivers, and other driving occupations obsolete. People who do those jobs now will need to find new work.

On the other hand, automation can result in a net increase of jobs. The number of bank tellers in the U.S. has doubled since the introduction of the ATM. And while farm machinery decimated the market for agricultural jobs, overall participation in the U.S. workforce grew steadily throughout the 20th century. In every major transition to date, we've wound up with more jobs, not fewer.

There is evidence that this is happening now. Indeed, nonfarm private employment has risen for 87 months in a row and unemployment levels are at record lows, in a sign that Internet technologies have not in fact destroyed jobs. Meanwhile, in the past year, about one-third of U.S. companies have started deploying artificial intelligence. This enormous transition is already beginning.

In the future, AI can help augment people's work regardless of where they live. For instance, AI-enhanced medical diagnoses may bring the power of supercomputers and the world's best medical centers into the hands of local family doctors. AI-powered news algorithms can improve our knowledge of world events and help fight fake news. AI can increase the productivity of computer programmers wherever they live, not just in Silicon Valley.

One reason the last century resulted in so many new jobs is because of the early 20th-century movement to extend mandatory schooling through high school, providing education for people who no longer had farm jobs to look forward to. That decision ensured that we had millions of literate, well-educated people ready to take on the jobs that the second half of the 20th century needed.

We need to do the same now. Only this time, we need to jettison our outdated, 19th-century model of classroom education and embrace new approaches more suited to our rapidly changing times. Individuals should position themselves for a lifetime of learning, since the skills demanded by the workplace are changing more rapidly than ever. Traditional college degrees no longer lead to stable long-term employment opportunities; fresh training on new skills is much more impactful. Companies should also be prepared to retrain people when they replace them with machines. And we need more public-private education partnerships that combine contributions from both business and government.

Yes, we need safety nets to help people through these massive transitions, but instead of merely investing in social safety nets, we need to address the root causes.

Those of us in technology need to guide it to augment humans, not replace them. And companies and society as a whole need to invest in education to ensure we and our children are ready for jobs we cant even imagine yet.

If we do that, as our ancestors did at the beginning of the 20th century, we can help ensure that AI will usher in an era of opportunity and wealth for all.

Stephane Kasriel is CEO of Upwork.

More:

Why Elon Musk Is Wrong About AI - Fortune

This could lead to the next big breakthrough in common sense AI – MIT Technology Review

AI models that can parse both language and visual input also have very practical uses. If we want to build robotic assistants, for example, they need computer vision to navigate the world and language to communicate about it to humans.

But combining both types of AI is easier said than done. It isn't as simple as stapling together an existing language model with an existing object-recognition system. It requires training a new model from scratch with a data set that includes text and images, otherwise known as a visual-language data set.

The most common approach for curating such a data set is to compile a collection of images with descriptive captions. A picture like the one below, for example, would be captioned "An orange cat sits in the suitcase ready to be packed." This differs from typical image data sets, which would label the same picture with only one noun, like "cat." A visual-language data set can therefore teach an AI model not just how to recognize objects but how they relate to and act on one another, using verbs and prepositions.
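
To make that difference concrete, here is a minimal sketch of the two kinds of annotation; the field names and file names are illustrative, not MS COCO's actual schema.

```python
# Illustration of the annotation difference described above.
# Field names and file names are made up for clarity, not the actual MS COCO schema.

# Typical image-classification label: one noun per image.
classification_example = {"image": "000123.jpg", "label": "cat"}

# Visual-language (captioned) example: objects plus how they relate and act.
visual_language_example = {
    "image": "000123.jpg",
    "caption": "An orange cat sits in the suitcase ready to be packed.",
}
```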

But you can see why this data curation process would take forever. This is why the visual-language data sets that exist are so puny. A popular text-only data set like English Wikipedia (which indeed includes nearly all the English-language Wikipedia entries) might contain nearly 3 billion words. A visual-language data set like Microsoft Common Objects in Context, or MS COCO, contains only 7 million. It's simply not enough data to train an AI model for anything useful.

Vokenization gets around this problem, using unsupervised learning methods to scale the tiny amount of data in MS COCO to the size of English Wikipedia. The resultant visual-language model outperforms state-of-the-art models in some of the hardest tests used to evaluate AI language comprehension today.

"You don't beat state of the art on these tests by just trying a little bit," says Thomas Wolf, the cofounder and chief science officer of the natural-language processing startup Hugging Face, who was not part of the research. "This is not a toy test. This is why this is super exciting."

Let's first sort out some terminology. What on earth is a voken?

In AI speak, the words that are used to train language models are known as tokens. So the UNC researchers decided to call the image associated with each token in their visual-language model a voken. Vokenizer is what they call the algorithm that finds vokens for each token, and vokenization is what they call the whole process.

The point of this isn't just to show how much AI researchers love making up words. (They really do.) It also helps break down the basic idea behind vokenization. Instead of starting with an image data set and manually writing sentences to serve as captions (a very slow process), the UNC researchers started with a language data set and used unsupervised learning to match each word with a relevant image (more on this later). This is a highly scalable process.

The unsupervised learning technique, here, is ultimately the contribution of the paper. How do you actually find a relevant image for each word?

Let's go back for a moment to GPT-3. GPT-3 is part of a family of language models known as transformers, which represented a major breakthrough in applying unsupervised learning to natural-language processing when the first one was introduced in 2017. Transformers learn the patterns of human language by observing how words are used in context and then creating a mathematical representation of each word, known as a word embedding, based on that context. The embedding for the word "cat" might show, for example, that it is frequently used around the words "meow" and "orange" but less often around the words "bark" or "blue."

This is how transformers approximate the meanings of words, and how GPT-3 can write such human-like sentences. It relies in part on these embeddings to tell it how to assemble words into sentences, and sentences into paragraphs.

There's a parallel technique that can also be used for images. Instead of scanning text for word-usage patterns, it scans images for visual patterns. It tabulates how often a cat, say, appears on a bed versus on a tree, and creates a cat embedding with this contextual information.

The insight of the UNC researchers was that they should use both embedding techniques on MS COCO. They converted the images into visual embeddings and the captions into word embeddings. What's really neat about these embeddings is that they can then be graphed in a three-dimensional space, and you can literally see how they are related to one another. Visual embeddings that are closely related to word embeddings will appear closer in the graph. In other words, the visual cat embedding should (in theory) overlap with the text-based cat embedding. Pretty cool.

You can see where this is going. Once the embeddings are all graphed and compared and related to one another, it's easy to start matching images (vokens) with words (tokens). And remember, because the images and words are matched based on their embeddings, they're also matched based on context. This is useful when one word can have totally different meanings. The technique successfully handles that by finding different vokens for each instance of the word.
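
As a toy sketch of that matching step, the snippet below picks the nearest image embedding for a token by cosine similarity; the vectors are tiny and invented, and the real vokenizer is a trained model operating on contextual embeddings, so this only illustrates the nearest-neighbor idea.

```python
# Toy sketch of matching a word (token) to its closest image (voken) by comparing
# embeddings. Vectors are invented; the real vokenizer uses learned, contextual
# embeddings, so this only shows the nearest-neighbor matching idea.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

word_embedding = {"cat": np.array([0.9, 0.1, 0.3])}            # token embedding
image_embeddings = {                                            # candidate voken embeddings
    "cat_on_bed.jpg":  np.array([0.8, 0.2, 0.25]),
    "dog_in_park.jpg": np.array([0.1, 0.9, 0.4]),
}

token = "cat"
best_voken = max(image_embeddings,
                 key=lambda img: cosine(word_embedding[token], image_embeddings[img]))
print(token, "->", best_voken)   # picks the image closest in embedding space
```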

For example:

Read more:

This could lead to the next big breakthrough in common sense AI - MIT Technology Review

9 Soft Skills Every Employee Will Need In The Age Of Artificial Intelligence (AI) – Forbes

Technical skills and data literacy are obviously important in this age of AI, big data, and automation. But that doesn't mean we should ignore the human side of work: skills in areas that robots can't do so well. I believe these softer skills will become even more critical for success as the nature of work evolves, and as machines take on more of the easily automated aspects of work. In other words, the work of humans is going to become altogether more, well, human.

With this in mind, what skills should employees be looking to cultivate going forward? Here are nine soft skills that I think are going to become even more precious to employers in the future.

1. Creativity

Robots and machines can do many things, but they struggle to compete with humans when it comes to our ability to create, imagine, invent, and dream. With all the new technology coming our way, the workplaces of the future will require new ways of thinking, making creative thinking and human creativity an important asset.

2. Analytical (critical) thinking

As well as creative thinking, the ability to think analytically will be all the more precious, particularly as we navigate the changing nature of the workplace and the changing division of labor between humans and machines. That's because people with critical thinking skills can come up with innovative ideas, solve complex problems and weigh up the pros and cons of various solutions, all using logic and reasoning, rather than relying on gut instinct or emotion.

3. Emotional intelligence

Also known as EQ (as in, emotional IQ), emotional intelligence describes a person's ability to be aware of, control, and express their own emotions, and to be aware of the emotions of others. So when we talk about someone who shows empathy and works well with others, we're describing someone with a high EQ. Given that machines can't easily replicate humans' ability to connect with other humans, it makes sense that those with high EQs will be in even greater demand in the workplace.

4. Interpersonal communication skills

Related to EQ, the ability to successfully exchange information between people will be a vital skill, meaning employees must hone their ability to communicate effectively with other people using the right tone of voice and body language in order to deliver their message clearly.

5. Active learning with a growth mindset

Someone with a growth mindset understands that their abilities can be developed and that building skills leads to higher achievement. They're willing to take on new challenges, learn from their mistakes, and actively seek to expand their knowledge. Such people will be much in demand in the workplace of the future because, thanks to AI and other rapidly advancing technologies, skills will become outdated even faster than they do today.

6. Judgement and decision making

We already know that computers are capable of processing information better than the human brain, but ultimately, it's humans who are responsible for making the business-critical decisions in an organization. It's humans who have to take into account the implications of their decisions in terms of the business and the people who work in it. Decision-making skills will, therefore, remain important. But there's no doubt that the nature of human decision making will evolve; specifically, technology will take care of more menial and mundane decisions, leaving humans to focus on higher-level, more complex decisions.

7. Leadership skills

The workplaces of the future will look quite different from today's hierarchical organizations. Project-based teams, remote teams, and fluid organizational structures will probably become more commonplace. But that won't diminish the importance of good leadership. Even within project teams, individuals will still need to take on leadership roles to tackle issues and develop solutions, so common leadership traits, like being inspiring and helping others become the best versions of themselves, will remain critical.

8. Diversity and cultural intelligence

Workplaces are becoming more diverse and open, so employees will need to be able to respect, understand, and adapt to others who might have different ways of perceiving the world. This will obviously improve how people interact within the company, but I think it will also make the business's services and products more inclusive, too.

9. Embracing change

Even for me, the pace of change right now is startling, particularly when it comes to AI. This means people will have to be agile and cultivate the ability to embrace and even celebrate change. Employees will need to be flexible and adapt to shifting workplaces, expectations, and required skillsets. And, crucially, they'll need to see change not as a burden but as an opportunity to grow.

Bottom line: we needn't be intimidated by AI. The human brain is incredible. It's far more complex and more powerful than any AI in existence. So rather than fearing AI and automation and the changes they will bring to workplaces, we should all be looking to harness our unique human capabilities and cultivate these softer skills, skills that will become all the more important for the future of work.

AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

Read this article:

9 Soft Skills Every Employee Will Need In The Age Of Artificial Intelligence (AI) - Forbes

This is What Happens When You Teach an AI to Name Guinea Pigs – Gizmodo

This Guinea Pig is named Hanger Dan. (Image Courtesy of Portland Guinea Pig Rescue)

As literally every sci-fi movie has predicted, we're becoming increasingly reliant on artificial intelligence. AI can already compose music, play Ms. Pac-Man like a pro, and even manage a hotel. But it's never been used solely for the purpose of naming small, fluffy guinea pigs. Until now.

Earlier this week, research scientist Janelle Shane got a fantastically unusual request from the Portland Guinea Pig Rescue, asking if she could build a neural network for guinea pig names. The rescue facility needs to generate a large number of names quickly, as they frequently take in animals from hoarding situations. Portland Guinea Pig Rescue gave Shane a list of classic names, like Snickers or Pumpkin, in addition to just about every other name they could find on the internet. The rest is history.

"I used Andrej Karpathy's char-rnn, an open-source neural network framework for torch (written in Lua)," Shane told Gizmodo. "I gave the neural network the list of 600+ guinea pig names that the Portland Guinea Pig Rescue assembled for me, and let it train itself to produce more names like the ones on its list. It gradually formed its own internal rules about which letters and letter combinations are the most quintessentially guinea pig."

It took Shane just a few minutes to train the system. "I had to tweak some of the training parameters to get the right mix of creativity versus keeping in line with the original dataset," she explained. Too loose a fit and they didn't sound like guinea pigs; too tight a fit and the neural network would only copy names verbatim from the training data.
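
That creativity-versus-fidelity trade-off maps onto the sampling "temperature" used when generating text from a character-level model. Below is a minimal Python sketch of the same idea using a simple character-level Markov model rather than Karpathy's char-rnn; the short name list and temperature values are illustrative assumptions, not Shane's actual data or settings:

import random
from collections import defaultdict, Counter

def train(names, order=3):
    """Count which character tends to follow each length-`order` context."""
    counts = defaultdict(Counter)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(order, len(padded)):
            counts[padded[i - order:i]][padded[i]] += 1
    return counts

def sample(counts, order=3, temperature=1.0, max_len=12):
    """Generate one name; lower temperature hugs the training data more tightly."""
    out, context = "", "^" * order
    while len(out) < max_len:
        dist = counts.get(context)
        if not dist:
            break
        chars = list(dist)
        # Temperature reshapes the next-character distribution: <1.0 favours the
        # most common letter combinations, >1.0 makes rarer ones more likely.
        weights = [c ** (1.0 / temperature) for c in dist.values()]
        nxt = random.choices(chars, weights=weights, k=1)[0]
        if nxt == "$":
            break
        out += nxt
        context = context[1:] + nxt
    return out.capitalize()

# A handful of example names (illustrative only; the real list had 600+).
guinea_pig_names = ["Snickers", "Pumpkin", "Peanut", "Oreo", "Hazel", "Biscuit"]
model = train(guinea_pig_names)
print([sample(model, temperature=0.8) for _ in range(5)])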

Overall, Shane's AI did a damn good job. Some of its cutest names were Splanky, Gooper, and Spockers. There were a few hilarious missteps, too: Butty Brlomy, Boooy, and Bho8otteeddeeceul were the best of the worst.

"I am a big fan of Fufby and Fuzzable and Snifkin, partially because they're so quintessentially guinea pig," Shane said. "The neural network really picked up the spirit of the guinea pig names."

You can, and should, check out all the adoptable guinea pigs via the Portland Guinea Pig Rescue.

Read more from the original source:

This is What Happens When You Teach an AI to Name Guinea Pigs - Gizmodo

Importance of AI in the business quest for data-driven operations – TechTarget

The volume of data generated worldwide is soaring, with research firm IDC predicting that by 2025 the global datasphere will reach 175 zettabytes, up an astounding 430% from 33 zettabytes in 2018.

"There's a huge amount of data that companies have been able to capture, internal and external data, structured and unstructured data. And it has become very important for organizations to use all the data available to make data-driven decisions," said Madhu Bhattacharyya, managing director and global leader of Protiviti's enterprise data and analytics practice.

Any enterprise that wants to make use of its data stores must harness the power of artificial intelligence. The importance of AI in the business quest for data-driven decision-making is twofold: AI technologies are required to digest these massive data sets; and AI needs vast stores of data in order to get better at making accurate predictions. "In that way, the use of AI is going to give an organization a competitive edge," Bhattacharyya said.

From enabling businesses to deliver smoother customer experiences to helping them establish new business lines, AI's role in business is akin to the strategic value of electricity in the early 20th century, when electrification transformed industries like transportation and manufacturing and created new ones, like mass communications.

"AI is strategic because the scale, scope, complexity and the dynamism in business today is so extreme that humans can no longer manage it without artificial intelligence. AI is a competitive necessity that business has to deploy," said Chris Brahm, a partner and director at Bain & Co., and leader of the firm's global advanced analytics practice.

Much of AI's strategic value is based on the technology's ability to quickly identify patterns in data, even subtle or rapidly shifting ones, and then to learn how to adjust processes and procedures to produce the best outcome based on the information it uncovers.

As such, AI is being used to identify and deliver even more efficiencies in the automated business processes that keep organizations running. It's being used to analyze vast volumes of data to create more personalized experiences for customers. And it's sorting large data sets to identify and perform tasks that it is trained to handle -- and then shifting the tasks that need creativity and ingenuity to human workers to complete, thereby boosting organizational productivity.

"AI is very important to the enterprise in two main ways, namely automation and augmentation. Automation allows companies to scale their operation without the need to add more headcounts, while augmentation increases productivity and optimizes internal resources," said Lian Jye Su, a principal analyst at ABI Research.

AI can produce significant productivity gains for organizations by handling mundane, repetitive tasks and performing them at an exponentially higher scale, pace and accuracy than humans can. This leaves employees to focus on more of the business's higher-value functions, thereby layering efficiency gains on top of the productivity boost that the technology delivers.

When described this way, AI seems identical to automation technologies such as robotic process automation (RPA), but there is a significant difference between the two types of technology. With RPA, workers configure the software around identified steps in a targeted business process, and the software bots then perform those tasks exactly as they're programmed to do.

AI, on the other hand, uses data to generate the most efficient process and then, when combined with automation software such as RPA, will perform the process at top efficiency. AI can then continue to refine its approach as it identifies more efficiencies to bring to the process.
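
As a rough illustration of that distinction, the sketch below contrasts a hard-coded, RPA-style rule with a small model that learns the same routing decision from past examples and can be refit as new data arrives. The invoice fields, threshold, and training data are all hypothetical assumptions, not any vendor's product:

from sklearn.linear_model import LogisticRegression

def rpa_route_invoice(invoice):
    """RPA-style bot: replay a fixed, human-configured rule, step by step."""
    if invoice["amount"] > 10_000:
        return "manual_review"
    return "auto_approve"

# Learning component: fit a model on past routing decisions instead of
# hand-writing the rule, and refit as new examples arrive.
history_X = [[500, 0], [12_000, 1], [300, 0], [20_000, 1], [8_000, 0], [15_000, 1]]
history_y = ["auto_approve", "manual_review", "auto_approve",
             "manual_review", "auto_approve", "manual_review"]
model = LogisticRegression().fit(history_X, history_y)

new_invoice = {"amount": 9_500, "flagged_vendor": 1}
print(rpa_route_invoice(new_invoice))                                  # fixed rule
print(model.predict([[new_invoice["amount"], new_invoice["flagged_vendor"]]])[0])  # learned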

If highly efficient automation is one of the biggest values that AI delivers, the other is its capability to provide on-the-job support for human workers.

"AI makes it easier for the human to interact with the information," said Seth Earley, author of The AI-Powered Enterprise and CEO of Earley Information Science.

The ability of AI to analyze data and then draw conclusions from it aids and augments a long list of varied tasks performed by humans. AI can assist doctors in making medical diagnoses. It can take in customer data and other information to suggest to retail associates which sales pitches to make. It can analyze that same data together with the customer's voice to identify the customer's emotional level for call center workers and provide ways to adjust the interaction to reach the optimal outcome.

The importance of AI in business functions like finance and security is growing. AI can sort through reams of financial and industrywide statistics along with economic, consumer and specific customer data to help insurance companies, banks and the like in their underwriting procedures. AI can take automated action against cyber threats by analyzing IT systems, security tools and information about known threats, alert internal cybersecurity teams to new problems, and prioritize the threats that need human attention.

Just as AI can surpass the automation capabilities of RPA, AI also goes beyond the data-driven insights produced with current technologies such as business intelligence tools. While both data analytics technologies and AI analyze data, AI utilizes its intelligence components to draw conclusions, make recommendations and then guide human workers through processes, adjusting its recommendations as a process unfolds and as it takes in new information in real time. That, in turn, allows the AI to continuously learn and refine its conclusions and improve its recommendations over its entire lifecycle.

"What AI is doing is processing information throughout the organization; and it's speeding that flow so we can react more quickly, be more agile and meet needs more effectively," Earley said.

But the efficiency and productivity gains delivered by AI-powered automation and augmentation are only part of the strategic importance of AI in business operations.

More significant, experts said, is the fact that AI gives organizations the ability to compete in a marketplace where customers, employees and partners increasingly expect the speed and personalization that the automation and augmentation deliver.

"AI is strategically important because it's building the capabilities that our customers demand and that our competitors will have," Earley said, saying that AI is the "digital machinery" that delivers the results that all those stakeholders want.

AI's role in using data to automate and enhance human work creates (and will continue to drive) cost-saving opportunities, improved sales and new revenue streams.

"Data is becoming overwhelming," said Karen Panetta, a fellow with the technical professional organization IEEE and Tufts University professor of electrical and computer engineering, "so if you're not going to use these new AI technologies, you'll be left behind in every aspect -- in understanding customers, new design methods, in efficiency and in every other area."

Read this article:

Importance of AI in the business quest for data-driven operations - TechTarget

Staying ahead of the artificial intelligence curve with help from MIT – MIT News

In August, the young artificial intelligence process automation company Intelenz, Inc. announced its first U.S. patent, an AI-enabled software-as-a-service application for automating repetitive activities, improving process execution, and reducing operating costs. For company co-founder Renzo Zagni, the patent is a powerful testament to the value of his MIT educational experience.

Over the course of his two-decade career at Oracle, Zagni worked his way from database administrator to vice president of Enterprise Applications-IT. After spending seven years in his final role, he was ready to take on a new challenge by starting his own company.

From employee to entrepreneur

Zagni launched Intelenz in 2017 with a goal of keeping his company on the cutting edge. Doing so required that he stay up to date on the latest machine learning knowledge and techniques. At first, that meant exploring new concepts on his own. But to get to the next level, he realized he needed a little more formal education. That's when he turned to MIT.

"When I discovered that I could take courses at MIT, I thought, 'What better place to learn about artificial intelligence and machine learning?'" he says. "Access to MIT faculty was something that I simply couldn't pass up."

Zagni enrolled in MIT Professional Education's Professional Certificate Program in Machine Learning and Artificial Intelligence, traveling from California to Cambridge, Massachusetts, to attend accelerated courses on the MIT campus.

As he continued to build his startup, one key to demystifying machine learning came from MIT Professor Regina Barzilay, a Delta Electronics professor in the Department of Electrical Engineering and Computer Science and a member of MIT's Computer Science and Artificial Intelligence Laboratory. "Professor Barzilay used real-life examples in a way that helped us quickly understand very complex concepts behind machine learning and AI," Zagni says. "And her passion and vision to use the power of machine learning to help win the fight against cancer was commendable and inspired us all."

The insights Zagni gained from Barzilay and other machine learning/AI faculty members helped him shape Intelenz's early products and continue to influence his company's product development today, most recently in his patented technology, the "Service Tickets Early Warning System." The technology is an important representation of Intelenz's ability to develop AI models aimed at automating and improving business processes at the enterprise level.

"We had a problem we wanted to solve and knew that artificial intelligence and machine learning could possibly address it. And MIT gave me the tools and the methodologies to translate these needs into a machine learning model that ended up becoming a patent," Zagni says.

Driving machine learning with innovation

As an entrepreneur looking to push the boundaries of information technology, Zagni wasn't content to simply use existing solutions; innovation became a key goal very early in the process.

"For professionals like me who work in information technology, innovation and artificial intelligence go hand-in-hand," Zagni says.

While completing machine learning courses at MIT, Zagni simultaneously enrolled in MIT Professional Education's Professional Certificate Program in Innovation and Technology. Combining his new AI knowledge with the latest approaches in innovation was a game-changer.

"During my first year with MIT, I was putting together the Intelenz team, hiring developers, and completing designs. What I learned in the innovation courses helped us a lot," Zagni says. "For instance, Blake Kotelly's Mastering Innovation and Design Thinking course made a huge difference in how we develop our solutions and engage our customers. And our customers love the design-thinking approach."

Looking forward

While his progress at Intelenz is exciting, Zagni is anything but done. As he continues to develop his organization and its AI-enabled offerings, he's looking ahead to additional opportunities for growth.

"We're already looking for the next technology that is going to allow us to disrupt the market," Zagni says. "We're hearing a lot about quantum computing and other technology innovations. It's very important for us to stay on top of them if we want to remain competitive."

He remains committed to lifelong learning, says he will definitely be looking to future MIT courses, and recommends other professionals in his field do the same.

"Being part of the MIT ecosystem has really put me ahead of the curve by providing access to the latest information, tools, and methodologies," Zagni says. "And on top of that, the faculty are very helpful and truly want to see participants succeed."

Continue reading here:

Staying ahead of the artificial intelligence curve with help from MIT - MIT News

How AI will help you create better ads – VentureBeat

Programmatic advertising companies have mainly focused on who to show ads to and when to show them, but until now they have focused very little on what messages to show. Usually, these decisions are limited to:

Perhaps surprisingly, Facebook and Google AdWords currently provide more opportunities for creative optimization, due to constraints on the creative their native-like formats expect. Title, body, landing page, and sometimes image are the structured fields. By removing arbitrary design creativity, ironically, these formats encourage much more automated experimentation among the individual content elements. Even in these formats, however, it is still uncommon for the content to be individually personalized, unless it is just recommending products based on retargeting.

But what if your marketing platform could predict which messages would have the most impact on each consumer, on an individually personalized basis, and automatically assemble or select those messages? What if such an approach could show lift in results between 2x and 4x versus just using the best single creative? And finally, what if it could tell you when there are lots of consumers for whom the best-fit message is not yet available in your library, so you can prioritize new creative briefs for your design team?

I'm convinced that in the future, the strongest predictive marketing platforms will employ this AI-based approach, known as predictive creative.

As with the native formats described above, predictive creative will provide a more structured understanding of the elements that make up a creative message, including the background, colors, imagery, and call to action. Equally important is a similarly structured breakdown of these elements into the attributes that may independently affect the influence of the ad on each consumer.

For example, does the ad show any people? Men, women, or both? How old are they? Does it show a product? Is it in isolation or in use? Is there a call to action?

Which of the following terms describes the ad or its emotional content and impact: happy, funny, calm, exciting, clever, fancy, adventurous, family, aggressive, value, need, safe, trustworthy, quality?

By understanding this much about the creatives they build, marketers have the chance to learn which characteristics drive better performance. And, when coupled with the data available in predictive marketing platforms, machine learning can predict the likely response of each individual to a well-understood ad even more accurately. This expands what is humanly possible, by combining the creativity of marketers to design effective messages with the power of big data and machine learning to individually deliver those messages to their most receptive audience.
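
As a hedged sketch of how such a model might be wired up, the example below encodes a few of those creative attributes together with a consumer feature, fits a simple classifier on hypothetical past responses, and then scores a small creative library for one individual. The attribute names, data, and model choice are illustrative assumptions, not any platform's actual implementation:

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past interactions: (ad attributes + consumer feature, responded?).
interactions = [
    ({"ad_shows_people": 1, "ad_tone_happy": 1, "ad_has_cta": 1, "user_age_band": "18-24"}, 1),
    ({"ad_shows_people": 0, "ad_tone_value": 1, "ad_has_cta": 1, "user_age_band": "45-54"}, 0),
    ({"ad_shows_people": 1, "ad_tone_adventurous": 1, "ad_has_cta": 0, "user_age_band": "25-34"}, 1),
    ({"ad_shows_people": 0, "ad_tone_trustworthy": 1, "ad_has_cta": 1, "user_age_band": "55-64"}, 0),
]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([features for features, _ in interactions])
y = [label for _, label in interactions]
model = LogisticRegression().fit(X, y)

# Score every creative in the library for one consumer and serve the best fit;
# low scores across the whole library would flag a gap worth a new creative brief.
library = [
    {"ad_shows_people": 1, "ad_tone_happy": 1, "ad_has_cta": 1},
    {"ad_shows_people": 0, "ad_tone_value": 1, "ad_has_cta": 1},
]
consumer = {"user_age_band": "18-24"}
scores = model.predict_proba(vec.transform([{**ad, **consumer} for ad in library]))[:, 1]
best_ad = library[scores.argmax()]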

The most powerful result, however, is that this kind of data can help direct marketers to create new ads with themes and elements that were missing from their campaign before, without trying to design a million different combinations.

The key to this is leveraging the data we observe about how a customer will respond across many different brands. We're betting that the kind of data we are capturing here is abstract enough from the details of any brand campaign that most advertisers will be comfortable opting in to sharing this kind of analytics with each other in order to benefit from the aggregated data about customers.

This approach can work equally well with video or display advertising, on desktop, mobile, or social channels. And it is applicable to both brand and direct response goals, as long as there is some way to measure the impact of the campaign on individual consumers, such as watching a video to completion, expressing brand favorability or awareness in a survey, or interacting with an ad, whether or not it generates a click-through. Finally, it can help with both personalization (showing each ad to the right people who will be influenced by them) and contextualization (showing each ad on the right site or app where it will have an increased effect).

Continue reading here:

How AI will help you create better ads - VentureBeat

Pentagon AI Efforts Disorganized: RAND – Breaking Defense

DoD CIO Dana Deasy (left) and the director of the Joint AI Center, Lt. Gen. Jack Shanahan (right), speak to reporters.

WASHINGTON: A congressionally mandated study warns that the Defense Department's current efforts to harness artificial intelligence are significantly challenged by shortfalls in organization, planning, data, talent, and testing, setting the stage for changes in the next defense policy and spending bills.

The problems RAND identified include a major mismatch between the sweeping responsibilities assigned to the year-old Joint Artificial Intelligence Center and its authority to achieve them, making it exceedingly difficult for the JAIC to succeed. To solve the problem, the RAND report's central recommendation is to strengthen the JAIC, a recommendation Congress is now certain to at least consider next year as it drafts the 2021 defense bill.

A word of caution: it's the Joint AI Center director, Lt. Gen. Jack Shanahan, who hired RAND to do the report in the first place. (Congress required him to submit a report, but it didn't dictate who should write it.) While RAND is highly respected for its independent, in-depth scholarship, it's not known for challenging the fundamental premises of the questions the Defense Department asks.

How much AI spending is there for JAIC to coordinate, anyway? That's actually a tricky question. The long-delayed 2020 appropriations bill last night includes unspecified "significant investments [in] artificial intelligence," but we've not seen a specific figure. Any number would be an estimate anyway. AI spending is scattered across the Defense Department under a host of different terms and is often buried in larger projects.

The RAND report includes an annex digging through the 2020 budget, but it's not available to the public. The only figure the public version gives for AI-specific activity is $15 million, or 0.002 percent of the DoD budget. But that doesn't include any AI work done as part of a larger program, such as a weapons system, cloud computing contract, or business software. "DoD budgets do not account for AI when it is a small part of a larger platform," RAND says, making it hard to track overall spending.

Strengthening the Joint AI Center

"There is evidence to support that DoD has taken the right approach in establishing the JAIC as a centralized focal point for DoD's AI strategy," says the RAND study, released this morning, "[but] DoD failed to provide the JAIC with visibility, authorities, and resource commitments, making it exceedingly difficult for the JAIC to succeed in its assigned mandate."

Now, the RAND report doesn't include one recent reform that postdates its drafting. In October, Deputy Defense Secretary David Norquist officially designated the JAIC director as the senior official with primary responsibilities for the coordination of activities related to the development and demonstration of AI and machine learning, working in tandem with R&D undersecretary Mike Griffin's technical director for AI research and development. It's far from clear what this new role actually involves, but JAIC and the R&D shop are supposed to provide an implementation plan by April 2nd.

Even this doesnt give JAIC any authority to control AI spending across the military. The AI center can only provide guidance to the services, not direction.

One option RAND recommends to shore up the Joint AI Center, part of the Office of the Secretary of Defense, is to give JAIC new legal authorities over budgeting and personnel in the four armed services. But the report admits this would require Congress to pass legislation increasing the power of OSD over the services' acquisition programs, reversing the Hill's recent efforts to decentralize authority back to the service chiefs.

Brig. Gen. Matthew Easley, director of the Army's Artificial Intelligence Task Force

RAND's alternative plan, cut down to fit within the limits of current law, would strengthen the existing AI efforts in each of the services, of which the Army's AI Task Force is the most developed, and bring their chiefs together on a DoD-wide council, chaired but not controlled by the JAIC director.

In either case, RAND recommends JAIC and the services replace their current vague aspirations with clear five-year plans, complete with unambiguous measures of success or failure to judge them against. It also urges Defense Department leadership and the JAIC itself to figure out what the AI Center's mission really is and make that clear to a confused workforce.

In 102 interviews conducted between April and August (59 officials from DoD, nine from other federal agencies, 25 from industry, and nine academics), "we noted a lack of clarity among our interviewees on the JAIC's mandate, roles, and activities[,] how it fits within the broader DoD ecosystem and how it connects to the services and their efforts," RAND said. It points to "a lack of clarity about the raison d'être of the JAIC ... The confusion might not be entirely on the part of the audience. DoD needs to have a clearer view of what it wants the JAIC to be."

Major Recommendations

The RAND report's recommendations go well beyond reorganization. In particular, the report raises major concerns about how the Defense Department handles its data, its human capital, and its test programs to assure AI actually works as advertised. Some key excerpts follow (emphasis ours), and yes, RAND is pedantic enough to consistently use "data" as a plural:

Ultimately, the RAND report concludes that the Defense Department can make major advances in AI, but it has to be realistic about how long that will take. Business-style enterprise applications like finance, personnel, and data management will be feasible much sooner than operational AI capable of handling the chaos and ambiguity of actual combat. As a rule of thumb, RAND says, "investments made starting today can be expected to yield at-scale deployment in the near term for enterprise AI, in the middle term for most mission-support AI, and in the long term for most operational AI."

Shanahan's boss, Pentagon Chief Information Officer Dana Deasy, welcomed RAND's report as a thorough and thoughtful critique to be considered along with recent recommendations from the Defense Innovation Board and the National Security Commission on AI.

Go here to read the rest:

Pentagon AI Efforts Disorganized: RAND - Breaking Defense

How AI is changing the customer experience – MIT Technology Review

AI is rapidly transforming the way that companies interact with their customers. MIT Technology Review Insights' survey of 1,004 business leaders, "The global AI agenda," found that customer service is the most active department for AI deployment today. By 2022, it will remain the leading area of AI use in companies (say 73% of respondents), followed by sales and marketing (59%), a part of the business that just a third of surveyed executives had tapped into as of 2019.

In recent years, companies have invested in customer service AI primarily to improve efficiency, by decreasing call processing and complaint resolution times. Organizations known as leaders in the customer experience field have also looked toward AI to increase intimacy: to bring a deeper level of customer understanding, drive customization, and create personalized journeys.

Genesys, a software company with solutions for contact centers, voice, chat, and messaging, works with thousands of organizations all over the world. The goal across each one of these 70 billion annual interactions, says CEO Tony Bates, is to "delight someone in the moment and create an end-to-end experience that makes all of us as individuals feel unique."

"Experience is the ultimate differentiator," he says, and one that is leveling the playing field between larger, traditional businesses and new, tech-driven market entrants: product, pricing, and branding levers are ineffective without an experience that feels truly personalized. "Every time I interact with a business, I should feel better after that interaction than I felt before."

In sales and marketing processes, part of the personalization involves predictive engagement: knowing when and how to interact with the customer. This depends on who the customer is, what stage of the buying cycle they are at, what they are buying, and their personal preferences for communication. It also requires intelligence in understanding where the customer is getting stuck and helping them navigate those points.

Marketing segmentation models of the past will be subject to increasing crossover, as older generations become more digitally skilled. "The idea that you can create personas, and then use them to target or serve someone, is over in my opinion," says Bates. "The best place to learn about someone is at the business's front door [website or call center] and not at the back door, like a CRM or database."

The survey data shows that for industries with large customer bases such as travel and hospitality, consumer goods and retail, and IT and telecommunications, customer care and personalization of products and services are among the most important AI use cases. In the travel and hospitality sector, nearly two-thirds of respondents cite customer care as the leading application.

The goal of a personalized approach should be to deliver a service that empathizes with the customer. For customer service organizations measured on efficiency metrics, a change in mindset will be required: some customers consider a 30-minute phone conversation a truly great experience. "But on the flip side, I should be able to use AI to offset that with quick transactions or even use conversational AI and bots to work on the efficiency side," says Bates.

With vast transaction data sets available, Genesys is exploring how they could be used to improve experiences in the future. "We do think that there is a need to share information across these large data sets," says Bates. "If we can do this in an anonymized way, in a safe and secure way, we can continue to make much more personalized experiences." This would allow companies to join different parts of a customer journey together to create more interconnected experiences.

This isn't a straightforward transition for most organizations, as the majority of businesses are structured in silos; "they haven't even been sharing the data they do have," he adds. Another requirement is for technology vendors to work more closely together, enabling their enterprise customers to deliver great experiences. To help build this connectivity, Genesys is part of industry alliances like CIM (Cloud Information Model), with tech leaders Amazon Web Services and Salesforce. CIM aims to provide common standards and source code to make it easier for organizations to connect data across multiple cloud platforms and disparate systems, connecting technologies such as point-of-sale systems, digital marketing platforms, contact centers, CRM systems, and more.

Data sharing has the potential to unlock new value for many industries. In the public sector, the concept of open data is well known. Publicly available data sets on transport, jobs and the economy, security, and health, among many others, allow developers to create new tools and services, thus solving community problems. In the private sector there are also emerging examples of data sharing, such as logistics partners sharing data to increase supply chain visibility, telecommunications companies sharing data with banks in cases of suspected fraud, and pharmaceutical companies sharing drug research data that they can each use to train AI algorithms.

In the future, companies might also consider sharing data with organizations in their own or adjacent industries, if it were to lead to supply chain efficiencies, improved product development, or enhanced customer experiences, according to the MIT Technology Review Insights survey. Of the 11 industries covered in the study, respondents from the consumer goods and retail sector proved the most enthusiastic about data sharing, with nearly a quarter describing themselves as very willing to share data, and a further 57% being somewhat willing.

Other industries can learn from financial services, says Bates, where regulators have given consumers greater control over their data to provide portability between banks, fintechs, and other players, in order to access a wider range of services. "I think the next big wave is that notion of a digital profile where you and I can control what we do and don't want to share. I would be willing to share a little bit more if I got a much better experience."

See the rest here:

How AI is changing the customer experience - MIT Technology Review

India will see breakthrough application of AI – Economic Times

India will see breakthrough application of artificial intelligence in various areas including the National Language Translation Mission, said Infosys chairman Nandan Nilekani.

Nilekani said this during a fireside chat with Ajay Sawhney, secretary in the Ministry of Electronics and Information Technology (MeitY) and Debjani Ghosh, president of IT industry body Nasscom.

The interaction was organised by INDIAai, a national AI portal set up by MeitY, National E-Governance Division and Nasscom.

"India is unique in the fact that it has such a large number of languages, all co-mingling, and most Indians speak two to three languages and so on. Creating the world's best language capability, whether it's speech, text to speech, whether it's language to language, I think India is well placed to show the world how to do it," he added.

Sawhney, speaking on the National AI Mission - on which MeitY is working jointly with the NITI Aayog - said that the core research would give not just length but a tremendous amount of depth in coverage in terms of technology, across various areas/sectors of application.

Sawhney also spoke about the creation of a national public digital platform for healthcare, which knits together all healthcare providers on one platform.

Read the original:

India will see breakthrough application of AI - Economic Times

AI is targeting some of the world’s biggest problems: homelessness, terrorism, and extinction – VentureBeat

Making AI models at the University of Southern California (USC) Center for AI in Society does not involve a clean, sorted dataset. Sometimes it means interviewing homeless youth in Los Angeles to map human social networks. Sometimes it involves going to Uganda for better conservation of endangered species.

"With AI, we are able to reach 70 percent of the youth population in the pilot, compared to about 25 percent in the standard techniques. So AI algorithms are able to reach far more youth in terms of spreading HIV information compared to traditional methods," said Milind Tambe, a professor at the USC Viterbi School of Engineering and cofounder of the Center for AI in Society. "If I were doing AI normally I might get data from the outside and I would analyze the data, produce algorithms, and so forth, but I wouldn't go to a homeless shelter."
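
The general class of algorithm behind results like these is influence maximization: pick the small set of peer leaders whose training spreads information furthest through the mapped social network. Below is a minimal Python sketch of that general approach, using a toy friendship graph, a Monte Carlo spread simulation, and a greedy selection rule; none of it reflects the USC team's actual model or data:

import random

# Hypothetical friendship graph: who talks to whom.
network = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C"], "F": ["D"],
}

def simulate_spread(seeds, p=0.3, trials=500):
    """Estimate how many people hear the message if `seeds` are trained first."""
    total = 0
    for _ in range(trials):
        informed, frontier = set(seeds), list(seeds)
        while frontier:
            person = frontier.pop()
            for friend in network[person]:
                if friend not in informed and random.random() < p:
                    informed.add(friend)
                    frontier.append(friend)
        total += len(informed)
    return total / trials

def pick_peer_leaders(k=2):
    """Greedily pick the k people whose training adds the most expected reach."""
    chosen = []
    for _ in range(k):
        best = max((n for n in network if n not in chosen),
                   key=lambda n: simulate_spread(chosen + [n]))
        chosen.append(best)
    return chosen

print(pick_peer_leaders())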

The pilot project will next expand to serve 1,000 youth. Other projects currently being taken on by the Center for AI in Society include gang prevention, wildlife conservation with computer vision, and predictive models to improve cybersecurity, prevent suicide, and help homeless youth find housing.

The center has also developed and deployed algorithms for federal agencies such as the U.S. Coast Guard, the Air Marshals Service, and the Transportation Security Administration (TSA).

Tambe was one of a handful of authors of a forward-looking report that examines how AI will evolve and affect business, government, and society between the present and 2030. Commissioned by Stanford University as part of The AI 100 Project, the study found that AI aimed at solving social problems has traditionally lacked investment because it produces no profitable commercial applications. The report prescribes making AI for low-resource projects a higher priority and offering AI researchers incentives, but Tambe also believes an entirely new discipline may need to be developed.

"[These projects] bring up completely new kinds of AI problems because, working with low-resource communities, data is sparse, as opposed to being plentiful. When you talk about big data, that's not what we're doing here. Whether it's wildlife conservation or working with homeless youth, we're talking incomplete data, and there's no capacity to actually produce that massive clean big data that you can do deep learning on," he said.

"We're trying to develop novel AI science as well as novel social science," co-director Eric Rice told VentureBeat in a phone interview. "We're not just trying to be data scientists who take advantage of publicly available datasets or social scientists that take advantage of out-of-the-box machine learning tools that are pretty much readily available through canned software packages. What we're really trying to build is new science on both sides."

The USC Center for AI in Society is a collaboration between computer science and social science schools at USC, an ambitious initiative created to cross-pollinate ideas between the two disciplines in order to solve some of the world's biggest problems.

Created in 2013, the program focuses on problems found in the 12 Grand Challenges of social work and the United Nations Sustainable Development Goals.

The 12 Grand Challenges of Social Work was created last year by social workers and espouses goals like ensuring healthy development for all youth, eradicating social isolation, stopping family violence, and ending homelessness.

The Sustainable Development Goals were adopted by U.N. member nations in 2015 and focus on implementing measures to address priorities like access to quality education, gender equity, and the end of poverty and hunger by 2030.

"This is the first collaboration, as far as we are aware, between AI and social work in a center. So we're really collaborating across schools in terms of engineering and AI and social work, and it's bringing up completely new sets of challenges to the core in terms of problems that the AI community has tackled," Tambe told VentureBeat in a phone interview. "Spreading HIV information amongst homeless youth or trying to reduce substance abuse or matching homeless youth to homes, these are challenges that generally have not been tackled within the AI community."

The two schools work together because sometimes an AI data scientist may not understand a social issue if they don't see it emerge in a dataset, and social workers may sometimes fail to understand that an algorithm could significantly impact a social issue.

While there was some initial difficulty in understanding the different vocabularies social scientists and data scientists use, the collaboration "leads to completely new kinds of discovery that wouldn't have been possible if either of us were working alone," Tambe said.

"Social work tends to be less precise and engineering is very focused, so there's this dance we're in," Rice said. "We're adding more muddiness to the model and they're insisting that we are more crisp in our argument, so there's a nice generative aspect to that kind of back and forth."

Here is the original post:

AI is targeting some of the world's biggest problems: homelessness, terrorism, and extinction - VentureBeat

Coronavirus: how the pandemic has exposed AI's limitations – The Conversation UK

It should have been artificial intelligence's moment in the sun. With billions of dollars of investment in recent years, AI has been touted as a solution to every conceivable problem. So when the COVID-19 pandemic arrived, a multitude of AI models were immediately put to work.

Some hunted for new compounds that could be used to develop a vaccine, or attempted to improve diagnosis. Some tracked the evolution of the disease, or generated predictions for patient outcomes. Some modelled the number of cases expected given different policy choices, or tracked similarities and differences between regions.

The results, to date, have been largely disappointing. Very few of these projects have had any operational impact, hardly living up to the hype or the billions in investment. At the same time, the pandemic highlighted the fragility of many AI models. From entertainment recommendation systems to fraud detection and inventory management, the crisis has seen AI systems go awry as they struggled to adapt to sudden collective shifts in behaviour.

The unlikely hero emerging from the ashes of this pandemic is instead the crowd. Crowds of scientists around the world sharing data and insights faster than ever before. Crowds of local makers manufacturing PPE for hospitals failed by supply chains. Crowds of ordinary people organising through mutual aid groups to look after each other.

COVID-19 has reminded us of just how quickly humans can adapt existing knowledge, skills and behaviours to entirely new situations, something that highly specialised AI systems just can't do. At least not yet.

We now face the daunting challenge of recovering from the worst economic contraction on record, with society's fault lines and inequalities more visible than ever. At the same time, another crisis, climate change, looms on the horizon.

At Nesta, we believe that the solution to these complex problems is to bring together the distinct capabilities of both crowd intelligence and machine intelligence to create new systems of collective intelligence.

In 2019, we funded 12 experiments to help advance knowledge on how new combinations of machine and crowd intelligence could help solve pressing social issues. We have much to learn from the findings as we begin the task of rebuilding from the devastation of COVID-19.

In one of the experiments, researchers from the Istituto di Scienze e Tecnologie della Cognizione in Rome studied the use of an AI system designed to reduce social biases in collective decision-making. The AI, which held back information from the group members on what others thought early on, encouraged participants to spend more time evaluating the options by themselves.

The system succeeded in reducing the tendency of people to follow the herd by failing to hear diverse or minority views or challenge assumptions, all of which are criticisms that have been levelled at the British government's scientific advisory committees throughout the pandemic.

In another experiment, the AI Lab at Brussels University asked people to delegate decisions to AI agents they could choose to represent them. They found that participants were more likely to choose their agents with long-term collective goals in mind, rather than short-term goals that maximised individual benefit.

Making personal sacrifices for the common good is something that humans usually struggle with, though the British public did surprise scientists with its willingness to adopt new social-distancing behaviours to halt COVID-19. As countries around the world attempt to kickstart their flagging economies, will people be similarly willing to act for the common good and accept the trade-offs needed to cut carbon emissions, too?

COVID-19 may have knocked Brexit off the front pages for the last few months, but the UK's democracy will be tested in the coming months by the need to steer a divided nation through tough choices in the wake of Britain's departure from the EU and an economic recession.

In a third experiment, a technology company called Unanimous AI partnered with Imperial College London to run an experiment on a new way of voting, using AI algorithms inspired by swarms of bees. Their swarming approach allows participants to see consensus emerging during the decision-making process and converge on a decision together in real time, helping people to find collectively acceptable solutions. People were consistently happier with the results generated through this method of voting than those produced by majority vote.
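
As a very rough illustration of the convergence idea, and emphatically not Unanimous AI's actual algorithm, the toy Python sketch below has each simulated participant repeatedly nudge their preference weights toward the group's current leaning until a shared choice emerges; the options and starting preferences are made-up assumptions:

options = ["plan_a", "plan_b", "plan_c"]

# Hypothetical starting preferences (rows = participants, columns = options).
preferences = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
    [0.4, 0.4, 0.2],
]

def swarm_round(prefs, pull=0.3):
    """Move every participant part-way toward the group's average preference."""
    group = [sum(col) / len(prefs) for col in zip(*prefs)]
    return [[(1 - pull) * mine + pull * avg for mine, avg in zip(agent, group)]
            for agent in prefs]

for _ in range(20):  # iterate until individual views settle around a consensus
    preferences = swarm_round(preferences)

group_view = [sum(col) / len(preferences) for col in zip(*preferences)]
decision = options[group_view.index(max(group_view))]
print(decision)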

In each of these experiments, we've glimpsed what could be possible if we get the relationship between AI and crowd intelligence right. We've also seen how widely held assumptions about the negative effects of artificial intelligence have been challenged. When used carefully, perhaps AI could lead to longer-term thinking and help us confront, rather than entrench, social biases.

Alongside our partners, the Omidyar Network, Wellcome, Cloudera Foundation and UNDP, we are investing in growing the field of collective-intelligence design. As efforts to rebuild our societies after coronavirus begin, we're calling on others to join us. We need academic institutions to set up dedicated research programmes, more collaboration between disciplines, and investors to launch large-scale funding opportunities for collective intelligence R&D focused on social impact. Our list of recommendations is the best place to get started.

In the meantime, we'll continue to experiment with novel combinations of crowd and machine intelligence, including launching the next round of our grants programme this autumn. The world is changing fast, and it's time for the direction of AI development to change, too.

Excerpt from:

Coronavirus: how the pandemic has exposed AI's limitations - The Conversation UK