Importance of AI in the business quest for data-driven operations – TechTarget

The volume of data generated worldwide is soaring, with research firm IDC predicting that by 2025 the global datasphere will reach 175 zettabytes, up an astounding 430% from 33 zettabytes in 2018.

"There's a huge amount of data that companies have been able to capture, internal and external data, structured and unstructured data. And it has become very important for organizations to use all the data available to make data-driven decisions," said Madhu Bhattacharyya, managing director and global leader of Protiviti's enterprise data and analytics practice.

Any enterprise that wants to make use of its data stores must harness the power of artificial intelligence. The importance of AI in the business quest for data-driven decision-making is twofold: AI technologies are required to digest these massive data sets; and AI needs vast stores of data in order to get better at making accurate predictions. "In that way, the use of AI is going to give an organization a competitive edge," Bhattacharyya said.

From enabling businesses to deliver smoother customer experiences to helping them establish new business lines, AI's role in business is akin to the strategic value of electricity in the early 20th century, when electrification transformed industries like transportation and manufacturing and created new ones, like mass communications.

"AI is strategic because the scale, scope, complexity and the dynamism in business today is so extreme that humans can no longer manage it without artificial intelligence. AI is a competitive necessity that business has to deploy," said Chris Brahm, a partner and director at Bain & Co., and leader of the firm's global advanced analytics practice.

Much of AI's strategic value stems from the technology's ability to quickly identify patterns in data, even subtle or rapidly shifting ones, and then to learn how to adjust processes and procedures to produce the best outcome based on the information it uncovers.

As such, AI is being used to identify and deliver even more efficiencies in the automated business processes that run companies. It's being used to analyze vast volumes of data to create more personalized experiences for customers. And it's sorting large data sets to identify and perform the tasks it is trained to handle -- and then shifting the tasks that require creativity and ingenuity to human workers, thereby boosting organizational productivity.

"AI is very important to the enterprise in two main ways, namely automation and augmentation. Automation allows companies to scale their operation without the need to add more headcounts, while augmentation increases productivity and optimizes internal resources," said Lian Jye Su, a principal analyst at ABI Research.

AI can produce significant productivity gains for organizations by handling mundane, repetitive tasks and performing them at an exponentially higher scale, pace and accuracy than humans can. This leaves employees to focus on more of the business's higher-value functions, thereby layering efficiency gains on top of the productivity boost that the technology delivers.

Described that way, AI can sound identical to automation technologies such as robotic process automation (RPA). There is, however, a significant difference between the two. With RPA, workers configure the software around the identified steps of a targeted business process, and the software bots then perform those tasks exactly as programmed.

AI, on the other hand, uses data to generate the most efficient process and then, when combined with automation software such as RPA, performs that process at top efficiency. AI can then continue to refine its approach as it identifies further efficiencies to bring to the process.
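The distinction can be sketched in a few lines of code. The following is an illustrative toy, not any vendor's implementation: the RPA bot replays fixed, human-configured steps, while the "AI" side tracks outcomes and keeps revising its choice as new data arrives. All names and numbers here are invented for the example.

```python
from statistics import mean

def rpa_bot(invoice):
    # RPA: fixed, human-configured steps, executed the same way every time.
    steps = ["extract_fields", "validate_totals", "post_to_ledger"]
    return [f"{step}:{invoice}" for step in steps]

class ProcessOptimizer:
    """Learns from observed timings and picks the fastest routing choice."""
    def __init__(self, choices):
        self.timings = {c: [] for c in choices}

    def record(self, choice, seconds):
        self.timings[choice].append(seconds)

    def best_choice(self):
        # Prefer untried choices, else the lowest observed mean time --
        # the choice keeps adapting as more timings are recorded.
        untried = [c for c, t in self.timings.items() if not t]
        if untried:
            return untried[0]
        return min(self.timings, key=lambda c: mean(self.timings[c]))

opt = ProcessOptimizer(["queue_a", "queue_b"])
opt.record("queue_a", 4.0)
opt.record("queue_b", 2.5)
opt.record("queue_a", 5.0)
print(opt.best_choice())  # queue_b -- the historically faster route
```

The bot's behavior never changes without reprogramming; the optimizer's does, which is the refinement the article describes.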

If highly efficient automation is one of the biggest values that AI delivers, the other is its capability to provide on-the-job support for human workers.

"AI makes it easier for the human to interact with the information," said Seth Earley, author of The AI-Powered Enterprise and CEO of Earley Information Science.

The ability of AI to analyze data and then draw conclusions from it aids and augments a long list of varied tasks performed by humans. AI can assist doctors in making medical diagnoses. It can take in customer data and other information to suggest to retail associates which sales pitches to make. It can analyze that same data together with the customer's voice to identify the customer's emotional level for call center workers and provide ways to adjust the interaction to reach the optimal outcome.

The importance of AI in business functions like finance and security is growing. AI can sort through reams of financial and industrywide statistics along with economic, consumer and specific customer data to help insurance companies, banks and the like in their underwriting procedures. AI can take automated action against cyber threats by analyzing IT systems, security tools and information about known threats, alert internal cybersecurity teams to new problems, and prioritize the threats that need human attention.
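The triage pattern described above -- automated action against known threats, human attention for the rest -- can be sketched as follows. This is a hedged illustration only; the signature names, fields, and thresholds are assumptions, not any real security tool's API.

```python
# Known-threat signatures that can be handled by an automated playbook.
KNOWN_SIGNATURES = {"mirai-variant", "phishing-kit-7"}

def triage(alerts):
    """Split alerts into automated responses and a human review queue."""
    auto_handled, escalated = [], []
    for alert in alerts:
        if alert["signature"] in KNOWN_SIGNATURES:
            auto_handled.append(alert["id"])   # take automated action
        else:
            escalated.append(alert)            # needs human attention
    # Prioritize the human queue by severity, highest first.
    escalated.sort(key=lambda a: a["severity"], reverse=True)
    return auto_handled, [a["id"] for a in escalated]

alerts = [
    {"id": "a1", "signature": "mirai-variant", "severity": 3},
    {"id": "a2", "signature": "unknown-beacon", "severity": 9},
    {"id": "a3", "signature": "odd-login", "severity": 5},
]
handled, queue = triage(alerts)
print(handled, queue)  # ['a1'] ['a2', 'a3']
```

A production system would replace the static signature set and severity field with learned models, but the division of labor is the same.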

Just as AI can surpass the automation capabilities of RPA, AI also goes beyond the data-driven insights produced with current technologies such as business intelligence tools. While both data analytics technologies and AI analyze data, AI utilizes its intelligence components to draw conclusions, make recommendations and then guide human workers through processes, adjusting its recommendations as a process unfolds and as it takes in new information in real time. That, in turn, allows the AI to continuously learn and refine its conclusions and improve its recommendations over its entire lifecycle.

"What AI is doing is processing information throughout the organization; and it's speeding that flow so we can react more quickly, be more agile and meet needs more effectively," Earley said.

But the efficiency and productivity gains delivered by AI-powered automation and augmentation are only part of the strategic importance of AI in business operations.

More significant, experts said, is the fact that AI gives organizations the ability to compete in a marketplace where customers, employees and partners increasingly expect the speed and personalization that the automation and augmentation deliver.

"AI is strategically important because it's building the capabilities that our customers demand and that our competitors will have," Earley said, saying that AI is the "digital machinery" that delivers the results that all those stakeholders want.

AI's role in using data to automate and enhance human work creates (and will continue to drive) cost-saving opportunities, improved sales and new revenue streams.

"Data is becoming overwhelming," said Karen Panetta, a fellow with the technical professional organization IEEE and Tufts University professor of electrical and computer engineering, "so if you're not going to use these new AI technologies, you'll be left behind in every aspect -- in understanding customers, new design methods, in efficiency and in every other area."

Staying ahead of the artificial intelligence curve with help from MIT – MIT News

In August, the young artificial intelligence process automation company Intelenz, Inc. announced its first U.S. patent, an AI-enabled software-as-a-service application for automating repetitive activities, improving process execution, and reducing operating costs. For company co-founder Renzo Zagni, the patent is a powerful testament to the value of his MIT educational experience.

Over the course of his two-decade career at Oracle, Zagni worked his way from database administrator to vice president of Enterprise Applications-IT. After spending seven years in his final role, he was ready to take on a new challenge by starting his own company.

From employee to entrepreneur

Zagni launched Intelenz in 2017 with a goal of keeping his company on the cutting edge. Doing so required that he stay up to date on the latest machine learning knowledge and techniques. At first, that meant exploring new concepts on his own. But to get to the next level, he realized he needed a little more formal education. That's when he turned to MIT.

"When I discovered that I could take courses at MIT, I thought, 'What better place to learn about artificial intelligence and machine learning?'" he says. "Access to MIT faculty was something that I simply couldn't pass up."

Zagni enrolled in MIT Professional Education's Professional Certificate Program in Machine Learning and Artificial Intelligence, traveling from California to Cambridge, Massachusetts, to attend accelerated courses on the MIT campus.

As he continued to build his startup, one key to demystifying machine learning came from MIT Professor Regina Barzilay, a Delta Electronics professor in the Department of Electrical Engineering and Computer Science and a member of MIT's Computer Science and Artificial Intelligence Laboratory. "Professor Barzilay used real-life examples in a way that helped us quickly understand very complex concepts behind machine learning and AI," Zagni says. "And her passion and vision to use the power of machine learning to help win the fight against cancer was commendable and inspired us all."

The insights Zagni gained from Barzilay and other machine learning/AI faculty members helped him shape Intelenz's early products and continue to influence his company's product development today -- most recently, in his patented technology, the "Service Tickets Early Warning System." The technology is an important representation of Intelenz's ability to develop AI models aimed at automating and improving business processes at the enterprise level.

"We had a problem we wanted to solve and knew that artificial intelligence and machine learning could possibly address it. And MIT gave me the tools and the methodologies to translate these needs into a machine learning model that ended up becoming a patent," Zagni says.

Driving machine learning with innovation

As an entrepreneur looking to push the boundaries of information technology, Zagni wasn't content to simply use existing solutions; innovation became a key goal very early in the process.

"For professionals like me who work in information technology, innovation and artificial intelligence go hand-in-hand," Zagni says.

While completing machine learning courses at MIT, Zagni simultaneously enrolled in MIT Professional Education's Professional Certificate Program in Innovation and Technology. Combining his new AI knowledge with the latest approaches in innovation was a game-changer.

"During my first year with MIT, I was putting together the Intelenz team, hiring developers, and completing designs. What I learned in the innovation courses helped us a lot," Zagni says. "For instance, Blake Kotelly's Mastering Innovation and Design Thinking course made a huge difference in how we develop our solutions and engage our customers. And our customers love the design-thinking approach."

Looking forward

While his progress at Intelenz is exciting, Zagni is anything but done. As he continues to develop his organization and its AI-enabled offerings, he's looking ahead to additional opportunities for growth.

"We're already looking for the next technology that is going to allow us to disrupt the market," Zagni says. "We're hearing a lot about quantum computing and other technology innovations. It's very important for us to stay on top of them if we want to remain competitive."

He remains committed to lifelong learning, says he will definitely be looking to future MIT courses, and recommends other professionals in his field do the same.

"Being part of the MIT ecosystem has really put me ahead of the curve by providing access to the latest information, tools, and methodologies," Zagni says. "And on top of that, the faculty are very helpful and truly want to see participants succeed."

How AI will help you create better ads – VentureBeat

Programmatic advertising companies have mainly focused on who to show ads to and when to show them, but until now they have focused very little on what messages to show. Usually, these decisions are limited to:

Perhaps surprisingly, Facebook and Google AdWords currently provide more opportunities for creative optimization, because of the constraints their native-like formats place on the creative. Title, body, landing page, and sometimes image are the structured fields. Ironically, by removing arbitrary design creativity, these formats encourage much more automated experimentation among the individual content elements. Even in these formats, however, it is still uncommon for the content to be individually personalized, unless it is simply recommending products based on retargeting.

But what if your marketing platform could predict which messages would have the most impact on each consumer, on an individually personalized basis, and automatically assemble or select those messages? What if such an approach could show lift in results between 2x and 4x versus just using the best single creative? And finally, what if it could tell you when there are lots of consumers for whom the best-fit message is not yet available in your library, so you can prioritize new creative briefs for your design team?

I'm convinced that in the future, the strongest predictive marketing platforms will employ this AI-based approach, known as "predictive creative."

As with the native formats described above, predictive creative will provide a more structured understanding of the elements that make up a creative message, including the background, colors, imagery, and call to action. Equally important is a similarly structured breakdown of these elements into the attributes that may independently affect the ad's influence on each consumer.

For example, does the ad show any people? Men, women, or both? How old are they? Does it show a product? Is it in isolation or in use? Is there a call to action?

Which of the following terms describes the ad or its emotional content and impact: happy, funny, calm, exciting, clever, fancy, adventurous, family, aggressive, value, need, safe, trustworthy, quality?

By understanding this much about the creatives they build, marketers have the chance to learn which characteristics drive better performance. And, when coupled with the data available in predictive marketing platforms, machine learning can predict the likely response of each individual to a well-understood ad even more accurately. This expands what is humanly possible, by combining the creativity of marketers to design effective messages with the power of big data and machine learning to individually deliver those messages to their most receptive audience.
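A minimal sketch of this selection step, under stated assumptions: each ad is described by structured attributes, a weight table stands in for the learned model of per-segment appeal, and the best-scoring ad is served. The segments, weights, and ad names below are all invented for illustration.

```python
# Structured attributes for each creative (cf. the questions above).
ADS = {
    "ad_family":    {"people": "family", "tone": "happy",    "cta": True},
    "ad_adventure": {"people": "none",   "tone": "exciting", "cta": True},
}

# Per-segment affinity for each attribute value. In a real platform
# these weights would be learned from response data, not hand-set.
WEIGHTS = {
    "parents":  {"family": 0.9, "happy": 0.6, "exciting": 0.2, "none": 0.1},
    "students": {"family": 0.1, "happy": 0.3, "exciting": 0.8, "none": 0.5},
}

def predicted_response(segment, ad):
    """Score an ad's attributes against one segment's learned affinities."""
    w = WEIGHTS[segment]
    score = w.get(ad["people"], 0.0) + w.get(ad["tone"], 0.0)
    return score + (0.1 if ad["cta"] else 0.0)  # small call-to-action bonus

def best_ad(segment):
    return max(ADS, key=lambda name: predicted_response(segment, ADS[name]))

print(best_ad("parents"))   # ad_family
print(best_ad("students"))  # ad_adventure
```

The same scores also reveal gaps: a segment whose best available score is low is one for which no well-fitting creative exists yet, which is how such a system could prioritize new creative briefs.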

The most powerful result, however, is that this kind of data can help direct marketers to create new ads with themes and elements that were missing from their campaign before, without trying to design a million different combinations.

The key to this is leveraging the data we observe about how a customer will respond across many different brands. We're betting that the kind of data we are capturing here is abstract enough from the details of any brand campaign that most advertisers will be comfortable opting in to sharing this kind of analytics with each other in order to benefit from the aggregated data about customers.

This approach can work equally well with video or display advertising, on desktop, mobile, or social channels. And it is applicable to both brand and direct response goals, as long as there is some way to measure the impact of the campaign on individual consumers, such as watching a video to completion, expressing brand favorability or awareness in a survey, or interacting with an ad, whether or not it generates a click-through. Finally, it can help with both personalization (showing each ad to the right people who will be influenced by them) and contextualization (showing each ad on the right site or app where it will have an increased effect).

Pentagon AI Efforts Disorganized: RAND Breaking Defense – Defense industry news, analysis and commentary – Breaking Defense

DoD CIO Dana Deasy (left) and the director of the Joint AI Center, Lt. Gen. Jack Shanahan (right), speak to reporters.

WASHINGTON: A congressionally mandated study warns that the Defense Department's current efforts to harness artificial intelligence are significantly challenged by shortfalls in organization, planning, data, talent, and testing, setting the stage for changes in the next defense policy and spending bills.

The problems RAND identified include a major mismatch between the sweeping responsibilities assigned to the year-old Joint Artificial Intelligence Center and its authority to achieve them, making it exceedingly difficult for the JAIC to succeed. To solve the problem, the RAND report's central recommendation is to strengthen the JAIC -- a recommendation Congress is now certain to at least consider next year as it drafts the 2021 defense bill.

A word of caution: it's the Joint AI Center director, Lt. Gen. Jack Shanahan, who hired RAND to do the report in the first place. (Congress required him to submit a report, but it didn't dictate who should write it.) While RAND is highly respected for its independent, in-depth scholarship, it's not known for challenging the fundamental premises of the questions the Defense Department asks.

How much AI spending is there for the JAIC to coordinate, anyway? That's actually a tricky question. The long-delayed 2020 appropriations bill, passed last night, includes unspecified "significant investments [in] artificial intelligence," but we've not seen a specific figure. Any number would be an estimate anyway: AI spending is scattered across the Defense Department under a host of different terms and is often buried in larger projects.

The RAND report includes an annex digging through the 2020 budget, but it's not available to the public. The only figure the public version gives for AI-specific activity is $15 million -- 0.002 percent of the DoD budget. But that doesn't include any AI work done as part of a larger program, such as a weapons system, cloud computing contract, or business software. "DoD budgets do not account for AI when it is a small part of a larger platform," RAND says, making it hard to track overall spending.

Strengthening the Joint AI Center

"There is evidence to support that DoD has taken the right approach in establishing the JAIC as a centralized focal point for DoD's AI strategy," says the RAND study, released this morning, "[but] DoD failed to provide the JAIC with visibility, authorities, and resource commitments, making it exceedingly difficult for the JAIC to succeed in its assigned mandate."

Now, the RAND report doesn't include one recent reform that postdates its drafting. In October, Deputy Defense Secretary David Norquist officially designated the JAIC director as the "senior official with primary responsibilities for the coordination of activities related to the development and demonstration of AI and machine learning," working in tandem with R&D undersecretary Mike Griffin's technical director for AI research and development. It's far from clear what this new role actually involves, but the JAIC and the R&D shop are supposed to provide an implementation plan by April 2nd.

Even this doesn't give the JAIC any authority to control AI spending across the military. The AI center can only provide guidance to the services, not direction.

One option RAND recommends to shore up the Joint AI Center, part of the Office of the Secretary of Defense, is to give the JAIC new legal authorities over budgeting and personnel in the four armed services. But the report admits this would require Congress to pass legislation increasing the power of OSD over the services' acquisition programs, reversing the Hill's recent efforts to decentralize authority back to the service chiefs.

Brig. Gen. Matthew Easley, director of the Army's Artificial Intelligence Task Force

RAND's alternative plan, cut down to fit within the limits of current law, would strengthen the existing AI efforts in each of the services -- of which the Army's AI Task Force is the most developed -- and bring their chiefs together on a DoD-wide council, chaired but not controlled by the JAIC director.

In either case, RAND recommends that the JAIC and the services replace their current vague aspirations with clear five-year plans, complete with unambiguous measures of success or failure to judge them against. It also urges Defense Department leadership and the JAIC itself to figure out what the AI Center's mission really is and make that clear to a confused workforce.

In 102 interviews conducted between April and August -- 59 officials from DoD, nine from other federal agencies, 25 from industry, and nine academics -- "we noted a lack of clarity among our interviewees on the JAIC's mandate, roles, and activities[,] how it fits within the broader DoD ecosystem and how it connects to the services and their efforts," RAND said. The report points to "a lack of clarity about the raison d'être of the JAIC." And the confusion might not be entirely on the part of the audience: DoD needs to have a clearer view of what it wants the JAIC to be.

Major Recommendations

The RAND report's recommendations go well beyond reorganization. In particular, the report raises major concerns about how the Defense Department handles its data, its human capital, and its test programs to assure AI actually works as advertised. Some key excerpts (emphasis ours) -- and yes, RAND is pedantic enough to consistently use "data" as a plural:

Ultimately, the RAND report believes that the Defense Department can make major advances in AI, but it has to be realistic about how long that will take. Business-style enterprise applications like finance, personnel, and data management will be feasible much sooner than operational AI capable of handling the chaos and ambiguity of actual combat. As a rule of thumb, RAND says, "investments made starting today can be expected to yield at-scale deployment in the near term for enterprise AI, in the middle term for most mission-support AI, and in the long term for most operational AI."

Shanahan's boss, Pentagon Chief Information Officer Dana Deasy, welcomed RAND's report as a thorough and thoughtful critique to be considered along with recent recommendations from the Defense Innovation Board and the National Security Commission on AI.

How AI is changing the customer experience – MIT Technology Review

AI is rapidly transforming the way that companies interact with their customers. MIT Technology Review Insights' survey of 1,004 business leaders, "The global AI agenda," found that customer service is the most active department for AI deployment today. By 2022, it will remain the leading area of AI use in companies (say 73% of respondents), followed by sales and marketing (59%), a part of the business that just a third of surveyed executives had tapped into as of 2019.

In recent years, companies have invested in customer service AI primarily to improve efficiency, by decreasing call processing and complaint resolution times. Organizations known as leaders in the customer experience field have also looked toward AI to increase intimacy -- to bring a deeper level of customer understanding, drive customization, and create personalized journeys.

Genesys, a software company with solutions for contact centers, voice, chat, and messaging, works with thousands of organizations all over the world. The goal across each of these 70 billion annual interactions, says CEO Tony Bates, is to "delight someone in the moment" and create an end-to-end experience "that makes all of us as individuals feel unique."

Experience is the ultimate differentiator, he says, and one that is leveling the playing field between larger, traditional businesses and new, tech-driven market entrants: product, pricing, and branding levers are ineffective without an experience that feels truly personalized. "Every time I interact with a business, I should feel better after that interaction than I felt before."

In sales and marketing processes, part of the personalization involves predictive engagementknowing when and how to interact with the customer. This depends on who the customer is, what stage of the buying cycle they are at, what they are buying, and their personal preferences for communication. It also requires intelligence in understanding where the customer is getting stuck and helping them navigate those points.

Marketing segmentation models of the past will be subject to increasing crossover as older generations become more digitally skilled. "The idea that you can create personas, and then use them to target or serve someone, is over in my opinion," says Bates. "The best place to learn about someone is at the business's front door [website or call center] and not at the back door, like a CRM or database."

The survey data shows that for industries with large customer bases such as travel and hospitality, consumer goods and retail, and IT and telecommunications, customer care and personalization of products and services are among the most important AI use cases. In the travel and hospitality sector, nearly two-thirds of respondents cite customer care as the leading application.

The goal of a personalized approach should be to deliver a service that empathizes with the customer. For customer service organizations measured on efficiency metrics, a change in mindset will be required -- some customers consider a 30-minute phone conversation a truly great experience. "But on the flip side, I should be able to use AI to offset that with quick transactions or even use conversational AI and bots to work on the efficiency side," says Bates.

With vast transaction data sets available, Genesys is exploring how they could be used to improve experiences in the future. "We do think that there is a need to share information across these large data sets," says Bates. "If we can do this in an anonymized way, in a safe and secure way, we can continue to make much more personalized experiences." This would allow companies to join different parts of a customer journey together to create more interconnected experiences.

This isn't a straightforward transition for most organizations, as the majority of businesses are structured in silos -- "they haven't even been sharing the data they do have," he adds. Another requirement is for technology vendors to work more closely together, enabling their enterprise customers to deliver great experiences. To help build this connectivity, Genesys is part of industry alliances like CIM (Cloud Information Model), with tech leaders Amazon Web Services and Salesforce. CIM aims to provide common standards and source code to make it easier for organizations to connect data across multiple cloud platforms and disparate systems, connecting technologies such as point-of-sale systems, digital marketing platforms, contact centers, CRM systems, and more.

Data sharing has the potential to unlock new value for many industries. In the public sector, the concept of open data is well known. Publicly available data sets on transport, jobs and the economy, security, and health, among many others, allow developers to create new tools and services, thus solving community problems. In the private sector there are also emerging examples of data sharing, such as logistics partners sharing data to increase supply chain visibility, telecommunications companies sharing data with banks in cases of suspected fraud, and pharmaceutical companies sharing drug research data that they can each use to train AI algorithms.

In the future, companies might also consider sharing data with organizations in their own or adjacent industries, if it were to lead to supply chain efficiencies, improved product development, or enhanced customer experiences, according to the MIT Technology Review Insights survey. Of the 11 industries covered in the study, respondents from the consumer goods and retail sector proved the most enthusiastic about data sharing, with nearly a quarter describing themselves as very willing to share data, and a further 57% being somewhat willing.

Other industries can learn from financial services, says Bates, where regulators have given consumers greater control over their data to provide portability between banks, fintechs, and other players, in order to access a wider range of services. "I think the next big wave is that notion of a digital profile where you and I can control what we do and don't want to share. I would be willing to share a little bit more if I got a much better experience."

India will see breakthrough application of AI – Economic Times

India will see breakthrough application of artificial intelligence in various areas including the National Language Translation Mission, said Infosys chairman Nandan Nilekani.

Nilekani said this during a fireside chat with Ajay Sawhney, secretary in the Ministry of Electronics and Information Technology (MeitY) and Debjani Ghosh, president of IT industry body Nasscom.

The interaction was organised by INDIAai, a national AI portal set up by MeitY, National E-Governance Division and Nasscom.

"India is unique in the fact that it has such a large number of languages, all co-mingling, and most Indians speak two to three languages and so on. Creating the world's best language capability, whether it's speech, text to speech, whether it's language to language -- I think India is well placed to show the world how to do it," he added.

Sawhney, speaking on the National AI Mission - on which MeitY is working jointly with the NITI Aayog - said that the core research would give not just length but a tremendous amount of depth in coverage in terms of technology, across various areas/sectors of application.

Sawhney also spoke about the creation of a national public digital platform for healthcare, which knits together all healthcare providers on one platform.

AI is targeting some of the world’s biggest problems: homelessness, terrorism, and extinction – VentureBeat

Making AI models at the University of Southern California (USC) Center for AI in Society does not involve a clean, sorted dataset. Sometimes it means interviewing homeless youth in Los Angeles to map human social networks. Sometimes it involves going to Uganda for better conservation of endangered species.

"With AI, we are able to reach 70 percent of the youth population in the pilot, compared to about 25 percent in the standard techniques. So AI algorithms are able to reach far more youth in terms of spreading HIV information compared to traditional methods," said Milind Tambe, a professor at the USC Viterbi School of Engineering and cofounder of the Center for AI in Society. "If I were doing AI normally, I might get data from the outside and I would analyze the data, produce algorithms, and so forth, but I wouldn't go to a homeless shelter."

The pilot project will next be expanding to serve 1,000 youth. Other projects currently being taken on by the Center for AI in Society include gang prevention, wildlife conservation with computer vision, and predictive models to improve cybersecurity, prevent suicide, and help homeless youth find housing.

The center has also developed and deployed algorithms for federal agencies such as the U.S. Coast Guard, Air Marshals Service, and Transportation and Security Administration (TSA).

Tambe was one of a handful of authors of a forward-looking report that examines how AI will evolve and affect business, government, and society between the present and 2030. Commissioned by Stanford University as part of The AI 100 Project, the study found that AI aimed at solving social problems has traditionally lacked investment because it produces no profitable commercial applications. The report prescribes making AI for low-resource projects a higher priority and offering AI researchers incentives, but Tambe also believes an entirely new discipline may need to be developed.

"[These projects] bring up completely new kinds of AI problems because, working with low-resource communities, data is sparse, as opposed to being plentiful. When you talk about big data, that's not what we're doing here. Whether it's wildlife conservation or working with homeless youth, we're talking incomplete data, and there's no capacity to actually produce that massive clean big data that you can do deep learning on," he said.

"We're trying to develop novel AI science as well as novel social science," co-director Eric Rice told VentureBeat in a phone interview. "We're not just trying to be data scientists who take advantage of publicly available datasets, or social scientists that take advantage of out-of-the-box machine learning tools that are pretty much readily available through canned software packages. What we're really trying to build is new science on both sides."

The USC Center for AI in Society is a collaboration between computer science and social science schools at USC, an ambitious initiative created to cross-pollinate ideas between the two disciplines in order to solve some of the world's biggest problems.

Created in 2013, the program focuses on problems found in the 12 Grand Challenges of social work and the United Nations Sustainable Development Goals.

The 12 Grand Challenges of Social Work was created last year by social workers and espouses goals like ensuring healthy development for all youth, eradicating social isolation, stopping family violence, and ending homelessness.

The Sustainable Development Goals were adopted by U.N. member nations in 2015 and focus on implementing measures to address priorities like access to quality education, gender equity, and the end of poverty and hunger by 2030.

"This is the first collaboration, as far as we are aware, between AI and social work in a center. So we're really collaborating across schools in terms of engineering and AI and social work, and it's bringing up completely new sets of challenges to the core in terms of problems that the AI community has tackled," Tambe told VentureBeat in a phone interview. "Spreading HIV information amongst homeless youth, or trying to reduce substance abuse, or matching homeless youth to homes, these are challenges that generally have not been tackled within the AI community."

The two schools work together because sometimes an AI data scientist may not understand a social issue if they don't see it emerge in a dataset, and social workers may sometimes fail to understand that an algorithm could significantly impact a social issue.

While there was some initial difficulty in understanding the different vocabularies social scientists and data scientists use, the collaboration leads to "completely new kinds of discovery that wouldn't have been possible if either of us were working alone," Tambe said.

"Social work tends to be less precise and engineering is very focused, so there's this dance we're in," Rice said. "We're adding more muddiness to the model and they're insisting that we are more crisp in our argument, so there's a nice generative aspect to that kind of back and forth."

Here is the original post:

AI is targeting some of the world's biggest problems: homelessness, terrorism, and extinction - VentureBeat

Coronavirus: how the pandemic has exposed AI's limitations – The Conversation UK

It should have been artificial intelligence's moment in the sun. With billions of dollars of investment in recent years, AI has been touted as a solution to every conceivable problem. So when the COVID-19 pandemic arrived, a multitude of AI models were immediately put to work.

Some hunted for new compounds that could be used to develop a vaccine, or attempted to improve diagnosis. Some tracked the evolution of the disease, or generated predictions for patient outcomes. Some modelled the number of cases expected given different policy choices, or tracked similarities and differences between regions.

The results, to date, have been largely disappointing. Very few of these projects have had any operational impact, hardly living up to the hype or the billions in investment. At the same time, the pandemic highlighted the fragility of many AI models. From entertainment recommendation systems to fraud detection and inventory management, the crisis has seen AI systems go awry as they struggled to adapt to sudden collective shifts in behaviour.

The unlikely hero emerging from the ashes of this pandemic is instead the crowd. Crowds of scientists around the world sharing data and insights faster than ever before. Crowds of local makers manufacturing PPE for hospitals failed by supply chains. Crowds of ordinary people organising through mutual aid groups to look after each other.

COVID-19 has reminded us of just how quickly humans can adapt existing knowledge, skills and behaviours to entirely new situations, something that highly specialised AI systems just can't do. At least, not yet.

We now face the daunting challenge of recovering from the worst economic contraction on record, with society's fault lines and inequalities more visible than ever. At the same time, another crisis, climate change, looms on the horizon.

At Nesta, we believe that the solution to these complex problems is to bring together the distinct capabilities of both crowd intelligence and machine intelligence to create new systems of collective intelligence.

In 2019, we funded 12 experiments to help advance knowledge on how new combinations of machine and crowd intelligence could help solve pressing social issues. We have much to learn from the findings as we begin the task of rebuilding from the devastation of COVID-19.

In one of the experiments, researchers from the Istituto di Scienze e Tecnologie della Cognizione in Rome studied the use of an AI system designed to reduce social biases in collective decision-making. The AI, which held back information from the group members on what others thought early on, encouraged participants to spend more time evaluating the options by themselves.

The system succeeded in reducing the tendency of people to follow the herd, fail to hear diverse or minority views, or leave assumptions unchallenged, all of which are criticisms that have been levelled at the British government's scientific advisory committees throughout the pandemic.

In another experiment, the AI Lab at Brussels University asked people to delegate decisions to AI agents they could choose to represent them. They found that participants were more likely to choose their agents with long-term collective goals in mind, rather than short-term goals that maximised individual benefit.

Making personal sacrifices for the common good is something that humans usually struggle with, though the British public did surprise scientists with its willingness to adopt new social-distancing behaviours to halt COVID-19. As countries around the world attempt to kickstart their flagging economies, will people be similarly willing to act for the common good and accept the trade-offs needed to cut carbon emissions, too?

COVID-19 may have knocked Brexit off the front pages for the last few months, but the UK's democracy will be tested in the coming months by the need to steer a divided nation through tough choices in the wake of Britain's departure from the EU and an economic recession.

In a third experiment, a technology company called Unanimous AI partnered with Imperial College London to run an experiment on a new way of voting, using AI algorithms inspired by swarms of bees. Their swarming approach allows participants to see consensus emerging during the decision-making process and converge on a decision together in real time, helping people find collectively acceptable solutions. People were consistently happier with the results generated through this method of voting than those produced by majority vote.
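Unanimous AI has not published its algorithm in this article, but the basic dynamic it describes, participants repeatedly nudging their stance toward the emerging group position until the swarm settles, can be simulated in a few lines. The update rule, numbers, and parameters below are my own simplification, not the company's method:

```python
# Toy swarm-voting dynamic: each round, every participant sees the group's
# current centre of gravity and moves a fraction of the way toward it.
def swarm_converge(positions, pull=0.3, rounds=50):
    for _ in range(rounds):
        mean = sum(positions) / len(positions)
        positions = [p + pull * (mean - p) for p in positions]
    return positions

# Four participants' initial support for some option, on a 0-1 scale.
votes = [0.0, 0.2, 0.9, 1.0]
final = swarm_converge(votes)
spread = max(final) - min(final)   # near zero: the group has converged
```

Unlike a one-shot majority vote, every round preserves the group's average stance while shrinking disagreement, so the outcome is one everyone has visibly moved toward rather than a winner-take-all split.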

In each of these experiments, we've glimpsed what could be possible if we get the relationship between AI and crowd intelligence right. We've also seen how widely held assumptions about the negative effects of artificial intelligence have been challenged. When used carefully, perhaps AI could lead to longer-term thinking and help us confront, rather than entrench, social biases.

Alongside our partners, the Omidyar Network, Wellcome, Cloudera Foundation and UNDP, we are investing in growing the field of collective-intelligence design. As efforts to rebuild our societies after coronavirus begin, were calling on others to join us. We need academic institutions to set up dedicated research programmes, more collaboration between disciplines, and investors to launch large-scale funding opportunities for collective intelligence R&D focused on social impact. Our list of recommendations is the best place to get started.

In the meantime, we'll continue to experiment with novel combinations of crowd and machine intelligence, including launching the next round of our grants programme this autumn. The world is changing fast, and it's time for the direction of AI development to change, too.

Excerpt from:

Coronavirus: how the pandemic has exposed AI's limitations - The Conversation UK

Hour One wants synthetic AI characters to be your digital avatars – VentureBeat

If you ever wondered how we'll populate the metaverse, look no further than Hour One, an Israeli startup that is making replicas of people with AI avatars. These avatars can be a near-perfect visual likeness of you and speak with words fed to them by marketers who want to sell you something. An avatar can speak on your behalf in a digital broadcast while you're at home watching TV.

Such creations feel like a necessary prerequisite of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. And the trick: you'll never know if you're talking to a real person or one of Hour One's synthetic people.

"There is definitely interest in the metaverse, and we are doing experiments in the gaming space and with photorealism," Hour One business strategy lead Natalie Monbiot said in an interview with VentureBeat. "The thing that has fired up the team is this vision of a world which is increasingly virtual and a belief that we will live increasingly virtually."

She added, "We already have different versions of ourselves that appear in social media and different social channels. We represent ourselves already in this kind of digital realm. And we believe that our virtual selves will become even more independent. And we can put them to work for us. We can benefit from this as a human race. And you know that old saying, that we can't be in two places at once? Well, we believe that that will no longer be true."

Hour One is one more example of the fledgling market for virtual beings. Startups focused on virtual beings have raised more than $320 million to date, according to Edward Saatchi of Fable Studios, speaking at July's Virtual Beings Summit.

But we're a little ahead of ourselves. Metaverse plays are becoming increasingly common as we all realize that there has to be something better than Zoom calls to engage in a digital way. So the Tel Aviv, Israel-based company said it raised $5 million in seed funding this week from Galaxy Interactive via its Galaxy EOS VC Fund, as well as Block.one, Remagine Ventures, Kindred Ventures, and Amaranthine. It will use that money to scale its AI-driven cloud platform and create thousands of new digital characters.

You've heard of stock photos. Hour One is talking about something similar: stock humans. They can be used to speak any kind of script in a marketing video or give a highly customized message to someone. The goal is to create characters who cross the uncanny valley.

"I think that we've crossed the uncanny valley because we have our likeness test, and our videos are actually live and in market and generating results for customers," Monbiot said. "I think that's something that's really distinctive about us: even though we're such a young company, we've had very positive commercial traction already."

Above: Who's real and who's not?

Image Credit: Hour One

"We create synthetic characters based on real people," Monbiot said. "We do so for commercials. We take real people, and we have this really simple process for converting real people into synthetic characters that resemble them exactly. And once we have the synthetic characters, we can program them to generate all kinds of new content at enormous speed and scale."

The competition in this space will be tough. GamesBeat will be holding our own conference, tentatively scheduled for January 26 to January 27, 2021, on topics including the metaverse, and we expect it to be full of interesting companies.

A Samsung spinoff, Neon, caught a lot of attention for creating human AI avatars at CES 2020 in January, and then it promptly caught a lot of bad press for avatars that didn't look as real as expected. But Hour One also started coming out of stealth at the same time, with a plan to expand business-to-business human communication. The company showcased its "real or synthetic" likeness test at CES 2020, challenging people to distinguish between real and synthetic characters generated by its AI.

Hour One is using deep learning and generative adversarial networks (GANs) to make its video characters. The company says it can do this in a highly scalable and cost-effective way. They're supposed to look good, and the image on top of this story looks realistic.

But the cost of missing the mark is high. Hour One will have to beat Neon in the race across the uncanny valley. And Genies is coming from another direction, with cartoon-based avatars that represent digital versions of celebrities.

Above: Hour One's real Natalie Monbiot

Image Credit: Hour One

Hour One is working with companies in the ecommerce, education, automotive, communication, and enterprise sectors, with expanded industry applications expected throughout 2020. The company has about 100 avatars today.

The pitch is that the lower cost per character use means that companies will be able to engage more with their customers on every level, from digital receptionists to friendly salespeople.

"These customers can create thousands of videos simply by submitting text to these characters," Monbiot said. "It appears as though real people are actually saying those words, but we're using AI to make it happen. We're improving communication. We're obviously living in an ever more virtual existence. And we're enabling businesses of all kinds to engage in a more human way."

And if your avatar is speaking on your behalf somewhere and it's generating value, you'll get paid for it, Monbiot said, even if you're not there. "If your avatar speaks, you can get paid for that," Monbiot said. "So we're at the beginning of a new future. And for us, that's a future in which everybody will have a synthetic character. We will have virtual versions of ourselves."

Sam Englebardt, managing director of Galaxy Interactive (and a speaker on the subject of the metaverse at our GamesBeat Summit event), calls the approach an ethical one. "Hour One is a business-to-business provider of the best synthetic video tech I've seen to date," Englebardt said in an email to GamesBeat.

Oren Aharon and Lior Hakim created Hour One in 2019 with a mission of driving the economy of the digital workforce, powered by synthetic characters of real-life people. They can use blockchain technology to verify the identity of a digital character and who owns it. If the characters are altered or used for deepfakes, Hour One will be able to mark them as altered and notify people of what has happened. The team has eight people.

Read the original here:

Hour One wants synthetic AI characters to be your digital avatars - VentureBeat

Google’s new PAIR project wants to rethink how we use AI – CNET

Google's AI program, AlphaGo, went up against, and defeated, Chinese Go champion Ke Jie (on the left) at the Future of Go Summit in May in China. The match took place a year after AlphaGo bested Lee Sedol, the world's number two Go player.

AlphaGo may have defeated humans at board games, but its creators really just want us to be buddies.

In a new project named the People + AI Research Initiative (PAIR), Google's researchers are looking at the relationship between humans and artificial intelligence in the hopes of making the latter more useful to the former, the tech giant announced on its blog on Monday.

The company says it'll rethink AI on three levels: How we can use it as a tool in everyday life, how professionals in all fields can use it to make their jobs easier and how practical AI development can be taught to engineers.

Google isn't the only one making big moves to help develop the nascent field. On Monday, the Ethics and Governance of Artificial Intelligence Fund, helmed by Harvard University's Berkman Klein Center for Internet & Society and the MIT Media Lab, pledged $7.6 million to support the creation of AI that serves the public interest. Plus, the tech giant last year partnered with Amazon, Facebook, IBM and Microsoft to create a new not-for-profit called the Partnership on Artificial Intelligence to Benefit People and Society.

Google says that, as part of PAIR, it will introduce new open source tools and educational material, as well as publish research to help push AI along.


Original post:

Google's new PAIR project wants to rethink how we use AI - CNET

Microsoft Says AI-Powered Windows Updates Have Reduced Crashes – ExtremeTech


Microsoft has invested heavily in AI and machine learning, but you wouldn't know it from how little attention it gets compared with Google. Microsoft is using its machine learning technology to address something all long-term Windows users have experienced: faulty updates. Microsoft says that AI can help identify systems that will play nicely with updates, allowing the company to roll new versions out more quickly with fewer crashes.

It seems like we can't get a single Windows update without hearing some stories of how it completely broke one type of system or another. You have to feel for Microsoft a little: the Windows ecosystem is maddeningly complex, with uncountable hardware variants. Microsoft started using AI to evaluate computers with Windows 10, version 1803 (the April 2018 Update). It measured six PC health stats, assessed update outcomes, and loaded all the data into a machine learning algorithm. This tells Microsoft which computers are least likely to encounter problems with future updates.

By starting with the computers with the best update compatibility, Microsoft can push new features to most users in short order. With most OS rollouts, things move very slowly at first while companies remain vigilant for problems. PCs determined to have likely issues by the AI will get pushed down the update queue while Microsoft zeros in on the bugs.
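The pipeline the article describes (measure health metrics, record past update outcomes, train a model, then roll out to the predicted-safest machines first) can be sketched in a few lines. This is only an illustration: the feature names, toy telemetry, and plain-Python logistic regression below are stand-ins of my own, not Microsoft's actual model or metrics:

```python
import math

def train_logistic(rows, labels, lr=0.5, epochs=2000):
    """Plain-Python logistic regression via stochastic gradient descent.
    rows are feature tuples; labels are 1 (update succeeded) or 0 (failed)."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted success probability
            g = p - y                         # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def success_probability(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy telemetry per machine: (cpu_load_norm, disk_free_norm, crash_rate_norm)
history = [(0.9, 0.8, 0.1), (0.2, 0.9, 0.0), (0.8, 0.1, 0.9), (0.1, 0.2, 0.8)]
outcomes = [1, 1, 0, 0]   # did the last update go cleanly?

w, b = train_logistic(history, outcomes)
fleet = {"pc-a": (0.85, 0.7, 0.05), "pc-b": (0.3, 0.1, 0.95)}
# Roll the update out to the machines most likely to update cleanly first.
rollout_order = sorted(fleet, key=lambda pc: -success_probability(w, b, fleet[pc]))
```

Machines the model scores poorly would sit further down the queue while engineers investigate, which is exactly the staged-rollout behaviour the article attributes to Microsoft.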

The ML models seem effective, even if Microsoft didnt bother to label the Y-axis.

The first AI-powered deployment was a success, with adoption rates higher than all previous Windows 10 updates. Microsoft expanded its original six PC metrics to a whopping 35 as of the Windows 10, version 1903 rollout (May 2019). The company claims this makes update targeting even more accurate. This does not guarantee perfect updates, though. Microsoft's blog post glosses over the 1809 update from late 2018. That rollout used AI technology, but you might recall the widespread file deletion bug that caused Microsoft to pause the release. AI might help determine compatibility, but it can't account for unknown bugs like that.

Still, Microsoft is happy with the results from its machine learning deployments. According to the new blog post, systems chosen for updates by the algorithm have fewer than half as many system uninstalls, half as many kernel-mode crashes, and one-fifth as many post-update driver conflicts. Hopefully, you can look forward to fewer Windows update issues going forward, and you'll have AI to thank.

Read the original here:

Microsoft Says AI-Powered Windows Updates Have Reduced Crashes - ExtremeTech

How AI is helping top tech talent connect with the best opportunities – Fast Company

Despite what's happened to the world economy over the past year, and the continued uncertainty of what lies ahead, when it comes to hiring top talent it remains a job candidate's market. The current focus of the conversation is the impact of artificial intelligence on hiring practices. However, there are some key considerations that demonstrate the immense amount of choice talent will always have.

Right now, two seemingly contradictory things are happening in business.

Artificial intelligence is growing tremendously. According to Statista, AI is growing approximately 54 percent annually and will be one of the next great technological shifts, like the advent of the computer age or the smartphone revolution.

Humans are not becoming less important. Organizations are becoming more sophisticated about measuring the value, impact, and importance of people. In turn, talent, meaning human resources and the management of workforces, is receiving more attention. PwC reports that what used to be a slow-moving corporate technology space is now a $148 billion market of HR cloud solutions to address the needs of the future of work.

Knowledge workers may not be aware of the impact AI is quickly making, and will continue to make, in the way large enterprises manage their workforces.

AI is helping companies:

Find the best fit for their skills within their chosen careers. For too long, talent has been managed poorly, with companies often relying on inaccurate job descriptions and uninformed interview processes, and leaning too heavily on who wrote the best CV. AI holds the promise that people can reach their potential, which benefits their employers as well.

End the emphasis on who you know. Connections and networks have ruled the employment world for decades. AI has the power to change that and home in on peoples capabilities as the basis of hiring and promotion decisions.

Allow people to apply their capabilities to new industries. For so long, people have been laid off through no fault of their own. They worked in declining industries and have struggled to translate their capabilities to new roles in new industries. AI changes this. Think about a restaurant employee who has worked for 20 years at one place, only to lose the job. They may feel like all they know is food and hospitality. But AI, having analyzed so many careers, understands how that person's teamwork, inventory, supply chain, budgeting, and other skills can be applied to a different industry. It can match them to a new job based on capabilities and potential.

AI is making an impact in all of the ways companies manage work, from hiring and succession planning to retention and learning. It is replacing Boolean keyword searches with neural networks that are providing opportunities employees never had before.

AI requires massive amounts of data. In some cases, it has been used to examine more than a billion careers and more than a million skills. By using neural networks, AI can crunch that anonymized data and learn about people and their career potential like never before. The AI can see that if a person worked at a company during a certain set of years, they are very likely to have a certain set of skills, which that person easily could have left off their CV, thereby missing out on a chance of being selected for a job.

Take two potential job applicants, for example. One worked as a project manager for Google for five years. Another was at Uber during that time. If it has enough data, the AI knows a lot about each of their work histories, and how they're different, even if they had the same title. It has seen so many people with their capabilities come and go from these companies that, even without a complete CV, it is able to infer skills about people based on their titles and the time and place they worked.

The AI can also see adjacent skills and other relationships between skills. If a person knows one computer language, the AI knows it can pick up a similar one. Only advanced AI can do this. Only now do we have the technology to truly see someones potential and capability. This offers the promise of a great career to everyone based on what they can do, not who they know.
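As a toy illustration of that inference (entirely invented data and a deliberately simple frequency model, not Eightfold's neural networks), one can credit a profile with the skills that most observed peers holding the same title at the same company had, even when the CV omits them:

```python
from collections import Counter

# Invented "careers" observations: what skills did people with a given
# company + title combination actually turn out to have?
careers = [
    {"company": "Google", "title": "project manager",
     "skills": {"roadmapping", "sql", "stakeholder management"}},
    {"company": "Google", "title": "project manager",
     "skills": {"roadmapping", "sql", "okrs"}},
    {"company": "Uber", "title": "project manager",
     "skills": {"roadmapping", "ops analytics", "dispatch systems"}},
]

def likely_skills(company, title, threshold=0.6):
    """Skills held by at least `threshold` of observed peers with the same
    company and title -- credited even if an applicant's CV omits them."""
    peers = [c for c in careers if c["company"] == company and c["title"] == title]
    counts = Counter(s for c in peers for s in c["skills"])
    return {s for s, n in counts.items() if n / len(peers) >= threshold}

# The two applicants from the article: same title, different employers,
# and therefore different inferred skill sets.
google_pm = likely_skills("Google", "project manager")
uber_pm = likely_skills("Uber", "project manager")
```

A real system would work with embeddings over millions of careers rather than raw counts, but the qualitative behaviour is the same: the company and title carry information the CV never stated.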

There are a lot of hot technologies a knowledge worker or an engineer can work on right now. The use of AI to improve people's careers and to shape the future of talent is here to stay. Providing opportunities to people based on their potential will improve large businesses and every organization going forward.


Vinodh Kumar Ravindranath is the head of artificial intelligence at Eightfold AI.

Originally posted here:

How AI is helping top tech talent connect with the best opportunities - Fast Company

AI is now being used to shortlist job applicants in the UK – let's hope it's not racist – The Next Web

It's oft-repeated that artificial intelligence will be a danger to our jobs. But perhaps in a not-so-surprising twist, AI is also increasingly being used by companies to hire candidates.

According to a report by The Telegraph, AI-based video interviewing software, such as that developed by HireVue, is being leveraged by UK companies for the first time to shortlist the best job applicants.

"Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop," the report said.

HireVue, a Utah-based pre-employment assessment AI platform founded in 2004, employs machine learning to evaluate candidate performance in videos by training an AI system centered around 25,000 usable data points. The company's software is used by over 700 companies worldwide, including Intel, Honeywell, Singapore Airlines, and Oracle.

"There are lots of subtle cues we subconsciously make sense of, think facial expressions or intonation, but these are missed when we zone out," the company notes on its website.

The videos record an applicants responses to preset interview questions, which are then analyzed by the software for intonation, body language, and other parameters, looking for matches against traits of previous successful candidates.
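That matching step can be illustrated with a deliberately simple sketch: reduce each interview to a numeric feature vector and score applicants by similarity to the average of past successful candidates. The features, numbers, and centroid-plus-cosine approach below are my own stand-ins; HireVue's actual models are proprietary and certainly more complex:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Invented per-interview features: (pitch variance, speaking rate, eye-contact ratio)
successful_hires = [[0.6, 0.7, 0.8], [0.5, 0.8, 0.9]]
target = centroid(successful_hires)

applicants = {"a": [0.55, 0.75, 0.85], "b": [0.1, 0.2, 0.1]}
# Shortlist order: most similar to past successful candidates first.
ranked = sorted(applicants, key=lambda k: -cosine(applicants[k], target))
```

The sketch also makes the bias risk discussed below concrete: whatever traits the past "successful" cohort shared, relevant or not, become the yardstick for everyone who follows.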

Its worth noting that Unilever experimented with HireVue in its recruitment efforts as early as 2017 in the US.

From recommending you what to binge watch over the weekend to booking the cheapest flight for your next vacation, AI and machine learning have quickly emerged as two of the most disruptive forces ever to hit the economy.

The technology is now doing more than ever, for both good and bad. It's being deployed in health care; it's helping artists synthesize death metal music. On the other hand, it's enabling high-tech surveillance and even judging your creditworthiness.

Algorithms are also scrutinizing your resume, transforming both job seeking and the workplace, and revamping the very means by which companies look for candidates, get the most out of employees, and retain top talent.

But just as algorithms steadily infiltrate different aspects of our day-to-day lives and make decisions on our behalf, they have also come progressively under scrutiny for being as biased as the humans they sometimes replace.

By letting a computer program make hiring decisions for a company, the prevailing notion is that the process can be made more efficient both by selecting the most qualified people from a deluge of applications and side-stepping human bias to identify top talent from a diverse pool of candidates.

HireVue, however, claims it has removed data points that led to bias in its AI models.

Yet, as is widely established, AIs are only as good as the data they're trained on. Bad data containing implicit racial, gender, or ideological biases can creep into these systems, resulting in a phenomenon called disparate impact, wherein some candidates may be unfairly rejected or excluded altogether because they don't fit a certain definition of fairness.
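Disparate impact is measurable. One common screening heuristic, the US EEOC's "four-fifths rule", flags a selection process when any group's selection rate falls below 80% of the highest group's rate. A minimal check, with invented numbers:

```python
def selection_rates(decisions):
    """decisions maps group -> (number selected, number who applied)."""
    return {g: sel / applied for g, (sel, applied) in decisions.items()}

def four_fifths_violations(decisions, ratio=0.8):
    """Groups whose selection rate is below `ratio` times the best group's."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < ratio * best]

outcomes = {"group_a": (50, 100), "group_b": (18, 100)}
flagged = four_fifths_violations(outcomes)  # flags group_b: 0.18 < 0.8 * 0.50
```

Passing such a check does not prove a model is fair (the rule is a coarse threshold, and fairness has many competing definitions), but failing it is a strong signal that an audit is needed.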

Regulating the use of AI-based tools, then, necessitates algorithmic transparency, bias testing, and assessment of the risks associated with automated discrimination.

But most importantly, it calls for collaboration between engineers, domain experts, and social scientists. This is the key to understanding the trade-offs between different notions of fairness and to defining which biases are desirable or unacceptable.


Continued here:

AI is now being used to shortlist job applicants in the UK – let's hope it's not racist - The Next Web

4 Things to Consider Before You Start Using AI in Personnel Decisions – Harvard Business Review

Which candidate should we hire? Who should be promoted? How should we choose which people get which shifts? In the hope of making better and fairer decisions about personnel matters such as these, companies have increasingly adopted AI tools only to discover that they may have biases as well. How can we decide whether to keep human managers or go with AI? This article offers four considerations.

The initial promise of artificial intelligence as a broad-based tool for solving business problems has given way to something much more limited but still quite useful: algorithms from data science that make predictions better than we have been able to do so far.

In contrast to standard statistical models that focus on one or two factors already known to be associated with an outcome like job performance, machine-learning algorithms are agnostic about which variables have worked before or why they work. The more the merrier: the algorithm throws them all together and produces one model to predict some outcome, such as who will be a good hire, giving each applicant a single, easy-to-interpret score as to how likely it is that they will perform well in a job.
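As a concrete stand-in for such a model (a deliberately tiny k-nearest-neighbours scorer on invented data, not any vendor's actual algorithm), the single score can be read as the fraction of the most similar past hires who performed well, with no opinion about which features "should" matter:

```python
def knn_score(candidate, past, k=3):
    """Fraction of the k most similar past hires who performed well.
    past is a list of (feature_vector, performed_well) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(past, key=lambda rec: dist(rec[0], candidate))[:k]
    return sum(label for _, label in nearest) / k

# Invented features per past hire: (years experience, referral flag, test score)
history = [
    ([5, 1, 0.9], 1), ([4, 1, 0.8], 1), ([1, 0, 0.2], 0),
    ([2, 0, 0.3], 0), ([5, 0, 0.7], 1),
]
strong = knn_score([4, 1, 0.85], history)   # resembles past good performers
weak = knn_score([1, 0, 0.25], history)     # resembles past poor performers
```

The convenience of one number per applicant is also the catch the rest of the article explores: the score inherits whatever biases shaped the historical labels.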

No doubt because the promise of these algorithms was so great, the recognition of their limitations has also gotten a lot of attention, especially the fact that if the initial data used to build the model is biased, then the algorithm generated from that data will perpetuate that bias. The best-known examples come from organizations that discriminated against women in the past: their job performance data is biased too, which means algorithms based on that data will also be biased.

So how should employers proceed as they contemplate adopting AI to make personnel decisions? Here are four considerations:

1. The algorithm may be less biased than the existing practices that generate the data in the first place. Let's not romanticize how poor human judgment is and how disorganized most of our people management practices are now. When we delegate hiring to individual supervisors, for example, it is quite likely that they may each have lots of biases in favor of and against candidates based on attributes that have nothing to do with good performance: Supervisor A may favor candidates who graduated from a particular college because she went there, while Supervisor B may do the reverse because he had a bad experience with some of its graduates. At least algorithms treat everyone with the same attributes equally, albeit not necessarily fairly.

2. We may not have good measures of all the outcomes we would like to predict, and we may not know how to weight the various factors in making final decisions. For example, what makes a good employee? They have to accomplish their tasks well; they should also get along with colleagues, fit in with the culture, stay with us rather than quit, and so forth. Focusing on just the one aspect we can measure will lead to a hiring algorithm that selects on that aspect alone, even when it does not relate closely to the others: consider a salesperson who is great with customers but miserable with co-workers.

Here again, it isn't clear that what we are doing now is any better: An individual supervisor making a promotion decision may in theory be able to consider all those criteria, but each assessment is loaded with bias, and the way the criteria are weighted is arbitrary. We know from rigorous research that the more hiring managers use their own judgment in these matters, the worse their decisions are.

3. The data that AI uses may raise moral issues. Algorithms that predict turnover, for example, now often rely on data from social media sites, such as Facebook postings. We may decide that it is an invasion of privacy to gather such data about our employees, but not using it comes at the price of models that will predict less well.

It may also be the case that an algorithm does a good job overall in predicting something for the average employee but does a poor job for some subset of employees. It might not be surprising, for example, to find that the hiring models that pick new salespeople do not work well at picking engineers. Simply having separate models for each would seem to be the solution. But what if the different groups are men and women or whites and African Americans, as appears to be the case? In those cases, legal constraints prevent us from using different practices and different hiring models for different demographic groups.

4. It is often hard, if not impossible, to explain and justify the criteria behind algorithmic decisions. In most workplaces now, we at least have some accepted criteria for making employment decisions: He got the opportunity because he has been here longer; she was off this weekend because she had that shift last weekend; this is the way we have treated people before. If I don't get the promotion or the shift I want, I can complain to the person who made the decision. He or she has a chance to explain the criterion and may even help me out next time around if the decision did not seem perfectly fair.

When we use algorithms to drive those decisions, we lose the ability to explain to employees how those decisions were made. The algorithm simply pulls together all the available information to construct extremely complicated models that predict past outcomes. It would be highly unlikely for those outcomes to correspond to any principle that we could observe or explain, other than to say, "The overall model says this will work best." The supervisor can't help explain or address fairness concerns.

Especially where such models do not perform much better than what we are already doing, it is worth asking whether the irritation they will cause employees is worth the benefit. The advantage, say, of just letting the most senior employee get first choice in picking his or her schedule is that this criterion is easily understood, it corresponds with at least some accepted notions of fairness, it is simple to apply, and it may have some longer-term benefits, such as increasing the rewards for sticking around. There may be some point where algorithms will be able to factor in issues like this, but we are nowhere close to that now.

Algorithmic models are arguably no worse than what we are doing now. But their fairness problems are easier to spot because they happen at scale. The way to solve them is to get more and better measures: data that is not biased. Doing that would help even if we were not using machine-learning algorithms to make personnel decisions.

More here:

4 Things to Consider Before You Start Using AI in Personnel Decisions - Harvard Business Review

Microsoft Upgrades Azure AI to Analyze Health Records and Streamline Voice App Creation – Voicebot.ai

on July 13, 2020 at 8:00 am

Microsoft's artificial intelligence services are now able to mine electronic medical records for new insights and to simplify building or improving voice apps after a spate of updates to the Azure AI platform. Azure Cognitive Services provides enterprise-level AI services to companies that want to apply artificial intelligence to their work.

The COVID-19 health crisis has accelerated the use of AI as a doctor's assistant in record-keeping. Azure connects doctors' notes and conversations with patients to electronic medical records, both through the Project EmpowerMD Intelligent Scribe Service and as a host platform for Nuance's virtual assistant for doctors, after the two companies reached an agreement last fall. Now, Azure can help medical professionals glean new conclusions from that data using Text Analytics for health. Microsoft took the existing Text Analytics feature and trained it on medical data such as clinical notes and protocols, teaching it to find and surface insights from the huge amounts of medical data doctors normally have to pore through manually to spot patterns. Though the feature is still in preview, Microsoft has worked with research groups to create a search engine specifically about COVID-19 using both Text Analytics and Cognitive Search, which should help those hunting for treatments for the virus. The updated Text Analytics feature can not only analyze facts but also apply emotional tags to topics in any context, whether healthcare, sales, or another industry.

"As the world adjusts to new ways of working and staying connected, we remain committed to providing Azure AI solutions to help organizations invent with purpose," Azure AI corporate vice president Eric Boyd wrote in the announcement. "Building on our vision to empower all developers to use AI to achieve more, today we're excited to announce expanded capabilities within Azure Cognitive Services."

Microsoft is also opening up the Form Recognizer feature it showcased a little over a year ago to all Azure users. Form Recognizer is designed to use AI to grasp what a form full of data in tables and non-standard formats means, and to pull out that information for easier analysis. While likely applicable to some of the forms used in healthcare, Microsoft specifically cited financial organizations such as Capgemini Group's Sogeti and Wilson Allen as finding value in the feature for processing loan applications and other fiduciary paperwork.

Azure didn't neglect the voice facet of its AI in the update, either. Most notably, it made Custom Commands universally available to developers. Custom Commands simplifies connecting voice apps to devices that can be controlled within straightforward parameters, such as light levels or the temperature on a thermostat. The AI comes with a wide range of commands it understands in its templates and the ability to switch among different topics and types of requests automatically.

"People and organizations continue to look for ways to enrich customer experiences while balancing the transition to digital-led, touch-free operations," Boyd wrote. "Advancements in voice technology are empowering developers to create more seamless, natural, voice-enabled experiences for customers to interact with brands. [Custom Commands] brings together Speech to Text for speech recognition, Language Understanding for capturing spoken entities, and voice response with Text to Speech, to accelerate the addition of voice capabilities to your apps with a low-code authoring experience."

Those capabilities include 15 new voices built with Azure's Neural Text to Speech tech. The voices are designed to sound natural, using real people's voices to teach the AI to sound human. They include a mix of new languages and dialects as well as new voices for languages the AI already supports, among them two kinds of Arabic, Catalan, Cantonese, and Taiwanese Mandarin. It's the same technology the BBC used to build its new Beeb voice assistant, and it points to the global enterprises Microsoft hopes will use Azure's technology.

Microsoft Adds New Speech Styles and Lyrical Emotion to Azure AI Toolkit

Microsoft Will Bring Nuance Clinical Voice Tech to Azure

Microsoft Adds New Language Options to Power Virtual Agents Platform

Eric Hal Schwartz is a Staff Writer and Podcast Producer for Voicebot.AI. Eric has been a professional writer and editor for more than a dozen years, specializing in the stories of how science and technology intersect with business and society. Eric is based in New York City.

Go here to see the original:

Microsoft Upgrades Azure AI to Analyze Health Records and Streamline Voice App Creation - Voicebot.ai

What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data is used to train an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term "artificial intelligence" is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label "augmented intelligence," which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:

1. Reactive machines, such as Deep Blue, the IBM chess program that beat Garry Kasparov; these systems analyze the situation in front of them but have no memory and cannot use past experience to inform future decisions.

2. Limited memory, systems that can use past experience to inform future decisions; some of the decision-making functions in self-driving cars are designed this way.

3. Theory of mind, AI that would understand that others have their own beliefs, desires and intentions that affect the decisions they make; this kind of AI does not yet exist.

4. Self-awareness, AI that has a sense of self and consciousness; this kind of AI also does not yet exist.

AI is incorporated into a variety of different types of technology. Here are seven examples.

Artificial intelligence has made its way into a number of areas. Here are six examples.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the means to create so-called deepfakes: convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

See the original post here:

What is AI (artificial intelligence)? - Definition from ...

Could a new academy solve the AI talent problem? – Defense Systems

Defense technology experts think adding a military academy could be the solution to the U.S. government's tech talent gap.

"The canonical view is that the government cannot hire these people because they will get paid more in private industry," said Eric Schmidt, former Google chief and current chair of the Defense Department's Innovation Advisory Board, during a July 29 Brookings Institution virtual event.

"My experience is that people are patriotic and that you have a large number of people -- and this I think is missed in the dialogue -- a very large number of people who want to serve the country that they love. And the reason that they're not doing it is there's no program that makes sense to them."

Schmidt's comments come as the National Security Commission on Artificial Intelligence, which he chairs, issued its second quarterly report with recommendations to Congress on how the U.S. government can invest in and implement AI technology.

One key recommendation: A national digital service academy, to act like the civilian equivalent of a military service academy to train technical talent. That institution would be paired with an effort to establish a national reserve digital corps to serve on a rotational basis.

Robert Work, the former deputy secretary of defense who is now NSCAI's vice chair, said the academy would bring in people who want to serve in government and would graduate students into full-time federal employment at the GS-7 to GS-11 pay grades. Members of the digital corps would serve five years, at 38 days a year, helping government agencies figure out how best to implement AI.

For the military, the commission wants to focus on creating a clear way to test existing service members' skills and better gauge the abilities of incoming recruits and personnel.

"We think we have a lot of talent inside the military that we just aren't aware of," Work said.

To remedy that, Work said the commission recommends a grading system, via a programming proficiency test, to identify government and military workers who have software development experience. The recommendations also include adding a computational thinking component to the armed services' vocational aptitude battery to better identify incoming talent.

"I suspect that if we can convince the Congress to make this real and the president signs off, hopefully then not only will we be successful but we'll discover that we need 10 times more. The people are there and the talent is available," Schmidt said.

Photo credit: Eric Schmidt at a March 2020 meeting of the Defense Innovation Board in Austin, Texas; DOD photo by EJ Hersom.

This article first appeared on FCW, a Defense Systems partner site.

About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at [emailprotected], or follow her on Twitter @lalaurenista.

Click here for previous articles by Williams.

View original post here:

Could a new academy solve the AI talent problem? - Defense Systems

Cognitive AI and the Power of Intelligent Data Digitalization – Analytics Insight

In the quest to decode what keeps the world moving, enterprises across the world have been baffled. The answer is not precious metals or even cryptocurrency; it is data. The adage that data is the new oil holds true: soon, every company in the world will either buy or sell data, and the value of this corporate asset will gain prominence with each passing day.

Data fuels the digital transformation that is driving mammoth disruption across all industries. It is the key differentiator, arriving at massive speed and characterised by volume, variety, velocity and veracity in a live environment.

The question remains: How do enterprises gain the most from this valuable resource? The answer is data digitalization. As the name suggests, data digitization is the process by which physical or manual data files such as text, audio, images and video are converted into digital form. The perks of digitized data are plenty.

Data digitalization starts with identifying data needs based on client requirements, drawing on system analysis, requirement specifications and system design.

After an enterprise assesses its data requirements, the next step is to develop a technology roadmap and send it for approval and testing. Once the roadmap is approved, the data source chart is developed, which may include printed material to be converted into a digital format.

Old images are scanned, while faded images are recovered using advanced digital correction software. Sound and video data are retrieved through data-capturing software and ultimately converted into digital format.

Printed documents are checked for physical accuracy. The process involves Optical Character Recognition (OCR) software scanning, with the output checked manually by proof-readers and subsequently converted into PDF, MS Word, ASCII and HTML formats.
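The scan-proof-export workflow described above can be sketched as a pipeline skeleton. This is purely illustrative: `ocr_scan` and `export` here are placeholders standing in for real OCR software and format converters, and all the function names are hypothetical.

```python
# Skeleton of the digitization pipeline: scan -> proofread -> export.
def ocr_scan(page_image: bytes) -> str:
    """Stand-in for OCR software; a real system would recognize text in an image."""
    return page_image.decode("utf-8")  # pretend the "image" already holds text

def proofread(text: str) -> str:
    """Stand-in for the manual proofreading pass over OCR output."""
    return text.strip()

def export(text: str, fmt: str) -> str:
    """Convert proofed text into a target format (e.g. ASCII or HTML)."""
    if fmt == "html":
        return "<p>" + text + "</p>"
    if fmt == "ascii":
        return text.encode("ascii", "ignore").decode("ascii")
    raise ValueError("unsupported format: " + fmt)

def digitize(page_image: bytes, formats=("ascii", "html")):
    """Run one page through the whole pipeline, producing each output format."""
    text = proofread(ocr_scan(page_image))
    return {fmt: export(text, fmt) for fmt in formats}

result = digitize(b"  Invoice #42: total due 100 USD  ")
print(result["html"])
```

In a production system each stage would be swapped for real tooling (an OCR engine, a human review queue, document converters), but the staged structure stays the same.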

The takeaways from Data Digitalization

In data digitization, physical documents are uploaded and scanned to a virtual digital medium, then digitized to a high-quality format and structured according to the customer's needs.

Enterprises can also opt to transfer these documents to an electronic archive that complies with stringent security requirements and provides an option to manage data around the clock, from any computer anywhere in the world, using web applications.

At its crux, by leveraging the power of cognitive computing algorithms, enterprises can synthesize raw data from various information sources and weigh multiple options to arrive at conclusive answers. To achieve this, cognitive systems encapsulate self-learning models using data mining, pattern recognition and natural language processing (NLP) algorithms.

To use data digitalization systems, enterprises require vast amounts of structured and unstructured data to feed machine learning algorithms. Over time, these cognitive systems refine the way they identify patterns and process data, becoming self-sufficient enough to anticipate new problems and model possible alternative solutions.

While AI relies on algorithms for problem-solving and pattern identification in hidden data, cognitive computing systems have the loftier goal of creating models that mimic the brain's reasoning process to solve the modern concerns of data digitalization and data adaptability.

The increased resilience towards data and digitalization will change the world of data forever. Is your organisation ready with its own digitalized data pipelines?

About the Author

Kamalika Some is an NCFM level 1 certified professional with previous professional stints at Axis Bank and ICICI Bank. An MBA (Finance) and PGP Analytics by Education, Kamalika is passionate to write about Analytics driving technological change.

See the rest here:

Cognitive AI and the Power of Intelligent Data Digitalization - Analytics Insight

How The Federal Government's AI Center Of Excellence Is Impacting Government-Wide Adoption Of AI – Forbes

In February 2019, the President signed Executive Order 13859 on the American AI Initiative, which set out the United States' national strategy on artificial intelligence. Even prior to this, however, government agencies had been heavily invested in AI and machine learning, applying them to a wide range of applications. While those earlier AI implementations might have happened at the discretion of individual agencies, the executive order put more of an emphasis on implementing AI widely across the federal government.

In the context of this wider push for AI, many federal agencies are accelerating their adoption of AI but are struggling with the best way to put those AI efforts into practice. AI and machine learning can bring about transformational change, but many federal government decision-makers lack the knowledge, skill sets, and best practices needed to move forward. To meet this need, the General Services Administration's (GSA) Federal Acquisition Service (FAS) Technology Transformation Services (TTS) and the GSA AI Portfolio and Community of Practice created GSA's Artificial Intelligence (AI) Center of Excellence (CoE) to support the adoption of AI through direct partnerships, enterprise-level transformation, and discovery work.

The AI Center of Excellence aims to improve and enhance AI implementations in the government and help agencies on their journey toward AI adoption. The relatively small team at the GSA AI CoE is helping bring about some very impressive changes within the federal government. In this article, Neil Chaudhry, Director of AI Implementations at the AI Center of Excellence within the General Services Administration (GSA), and Krista Kinnard, Director of the Artificial Intelligence Center of Excellence at Technology Transformation Services (TTS) at GSA, share more about what the CoE is all about, some of the most interesting use cases they have seen so far in the government's adoption of AI, why trustworthy and transparent AI is important to gaining citizen trust in AI systems, and what they believe the future holds for AI.

What is the GSA AI Center of Excellence?

Krista Kinnard: GSA's Artificial Intelligence (AI) Center of Excellence (CoE) accelerates the adoption of AI to discover insights at machine speed within the federal government. Housed within GSA's Federal Acquisition Service (FAS) Technology Transformation Services (TTS), and coupled with the GSA AI Portfolio and Community of Practice, this collaboration can engage with agencies at every stage, from information-sharing to execution. As part of the FAS TTS dedication to advancing innovation and IT transformation across the government, the AI CoE supports the adoption of AI through direct partnerships, enterprise-level transformation, and discovery work. Working in a top-down approach, the CoE engages at the executive level of the organization to drive success across the enterprise while also supporting discovery and implementation of AI in a consultative approach. This also means building strong partnerships with industry. The private sector is quickly producing new and innovative AI-enabled technologies that can help solve government challenges. By partnering with industry, we are able to bring the latest innovations to government, helping build a more technologically enabled, future-proof way to meet government missions.

How did GSA's AI Center of Excellence get started?

Krista Kinnard: GSA's CoE program was conceived during conversations with the White House and innovative industry players about government service delivery and the contrast between the ease and convenience of interacting with private companies and the sometimes challenging interactions with the government. The focus is on a higher level of change and a redesign of government services, based on access to the best ideas from industry and the most up-to-date technology advances in government.

It is a signature initiative of the Administration, designed by the White House's Office of American Innovation and implemented at GSA in 2017. The CoE approach was established to scale and accelerate IT transformation at federal agencies. It leverages a mix of government talent and private-sector innovation in partnership while centralizing best practices and expertise in the CoE.

The program's goal is to facilitate repeatable and sustainable transformation, scaling and accelerating it by continually building on lessons learned from current and prior agency engagements. Since inception, FAS TTS has formed six CoEs: Artificial Intelligence, Cloud Adoption, Contact Center, Customer Experience, Data & Analytics, and Infrastructure Optimization. These six capability areas are typically the key focus areas an organization needs when driving IT modernization and undergoing a digital transformation. The AI CoE was specifically designed in support of the Executive Order on Maintaining American Leadership in Artificial Intelligence.

How is the AI Center of Excellence being applied to improve or otherwise enhance what's happening with AI in the government?

Krista Kinnard: Because AI has become such a technology of interest, the CoE is focused on partnering with federal agencies to identify common challenges both in mission delivery and in mission support that can be enhanced using this technology. We are not interested in developing AI solutions for the sake of technology. We are interested in helping government agencies understand their mission delivery and support challenges so that we can work together to create a clear path to a meaningful AI solution that provides true value to the organization and the people it serves.

Why is it important for the Federal Government to adopt AI?

Neil Chaudhry: Data on USAspending.gov in 2019 showed that the federal government spends over $1.1 trillion on citizen services per year. The American public, conditioned by the private sector, expects better engagement with government agencies. Using AI at scale can help modernize the delivery of services while improving the effectiveness and efficiency of government services.

AI can help in many ways, such as proactively identifying trends in service or projected surges in service requirements. AI is excellent at pattern recognition and can help federal programs identify anomalous activity or suspected fraud much faster than humans can. AI can speed service delivery by automatically resolving routine claims, freeing up federal employees to focus on more complex problems that require a human touch.
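The anomaly-flagging idea can be sketched in a few lines. This is a deliberately simple statistical baseline (z-score outlier detection on invented claim amounts), not the machinery any agency actually uses; real fraud detection systems layer far more sophisticated models on the same principle.

```python
import statistics

# Hypothetical daily claim amounts for one benefits program.
claims = [102, 98, 110, 95, 105, 99, 103, 500, 101, 97, 104, 480]

mean = statistics.fmean(claims)
stdev = statistics.stdev(claims)

def is_anomalous(amount, threshold=2.0):
    """Flag claims more than `threshold` standard deviations from the mean."""
    return abs(amount - mean) / stdev > threshold

flagged = [c for c in claims if is_anomalous(c)]
print(flagged)  # the two outsized claims stand out for human review
```

The point mirrors the text: the machine screens every claim instantly and routes only the unusual ones to a federal employee for the judgment call.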

Where do you see federal agencies today in their AI adoption?

Neil Chaudhry: It really varies. Overall, we see federal agencies as cautiously optimistic in their adoption of AI. Every large federal agency is executing on some combination of proofs of concept, pilot projects, and technology demonstration projects related to AI technologies, while agencies with mature data science practices are further along in their AI exploration, for example, implementing robotic process automation, chatbots, fraud detection tools, and automated translation services. The common thread is that agency leaders understand that AI provides a competitive advantage for delivering citizen services in a cost-effective and impactful manner, and they are actively supporting AI efforts in their agencies.

How is the Federal government adopting AI compared to private industry?

Neil Chaudhry: Within the AI CoE, we have been able to develop a very broad and very deep perspective on government-wide efforts related to AI adoption because we work with many federal agencies at different stages of it. Right now, most federal agencies are looking to institutionalize AI as an enabling work stream in a sustainable way; in this sense, they are very similar to the private sector in terms of AI adoption.

However, the crucial distinction between AI adoption in the private sector and public sector is that the federal government is heavily invested in learning resources like Communities of Practice that focus on sharing use cases, lessons learned, and best practices.

How do you see process automation fitting into the overall AI landscape?

Neil Chaudhry: Process automation is a critical component of applied AI. It is one of the best examples of augmented intelligence in this space right now. Process automation is critical because it is the key to upskilling knowledge workers in the federal workforce. It can take the drudgery out of routine work and free up time for these practitioners to do what they do best: come up with innovative solutions that solve ordinary problems in extraordinary ways. It can also reduce the amount of rework on service requests and claims applications due to human error, by virtue of built-in error checking that gets smarter as more requests are routed through the AI application.

What are some of the most interesting use cases you've seen so far with the government's adoption of AI?

Krista Kinnard: There are a number of impactful use cases. Broadly, we see a lot that focus on four outcomes: increased speed and efficiency, cost avoidance and cost savings, improved response time, and increased quality and compliance. We see these in a number of applications that enable agencies to provide direct service to the American public, in the form of intelligent chatbots and innovative call centers for customer support. We are also starting to see AI making progress in fraud detection and prevention to ensure the best use and allocation of government funds. One of the biggest areas where we've started to see advancement is data management. Agencies are using intelligent systems to automate both the collection and aggregation of government data, as well as to provide deeper understanding and more targeted analysis.

We have seen that the potential for natural language processing (NLP) in government is huge. NLP enables us to read and understand free-form text, like that of a form used to apply for benefits or a document recording government decisions. So much government data exists in forms with open text fields and in documents like memos and policy papers. NLP can really help us understand the relationships between these data and provide deeper insight for government decision-making.
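The kind of free-form extraction described above can be sketched at its simplest with pattern matching. This is a crude stand-in for a real NLP pipeline, and the form text, field names, and patterns are all hypothetical; production systems would use trained entity-recognition models rather than hand-written regular expressions.

```python
import re

# A hypothetical free-text field from a benefits application form.
form_text = (
    "Applicant Jane Doe applied on 2020-07-01 for unemployment benefits "
    "after her employer closed. Case number 12-345 was assigned."
)

# Crude pattern-based extraction standing in for a full NLP pipeline.
date = re.search(r"\d{4}-\d{2}-\d{2}", form_text)
case = re.search(r"[Cc]ase number\s+([\d-]+)", form_text)
benefit = re.search(r"for\s+([a-z ]+?)\s+benefits", form_text)

# Turn one blob of narrative text into structured, analyzable fields.
print(date.group(0), case.group(1), benefit.group(1))
```

Even this toy version shows the payoff: open text fields become structured data that can be aggregated and analyzed across thousands of submissions.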

What do you see as critical needs for education and training around AI?

Neil Chaudhry: At its core, AI is operational research and applied statistical analysis on steroids. The AI workforce of the future needs a fundamental understanding of statistical concepts, probability, decision science, optimization techniques, queuing theory, and the various problem-solving methodologies used in the business community. In addition, that workforce needs periodic training in ethics, critical thinking, collaboration, and working in diverse teams, to name a few, to effectively understand things like global data sets generated by people with different norms and values.

The critical needs for education and training for AI revolve around soft skills, such as flexibility, empathy, and the ability to give and receive feedback, along with critical thinking, reasoning, and decision-making. Any seasoned AI practitioner has experienced instances in AI research where we end up correlating shark bites with ice cream sales. So the ability to seek out subject matter experts, convince an organization to share proprietary datasets, and communicate actionable insights are all critical needs for education and training around AI.

What is the CoE doing to help educate and inform government employees when it comes to AI?

Krista Kinnard: Education and training are a priority for many government agencies. Employees are passionate about serving the mission of their agency and delivering quality service to the American people. At the CoE we aim to empower them through the use of AI as a tool. As such, a critical component of the CoE model is to learn by doing; it's experiential learning with real-world application, with a little coaching as they gain their AI footing. As our technical experts partner with agencies, we engage with their workforce in every step of the process so that when the CoE completes an engagement, the agency has a team of people who know the solution we delivered, can take ownership of its success, and repeat the process for future innovation.

Beyond partnering, the CoE reaches out more broadly to share experiences through the governmentwide AI Community of Practice. The AI Community of Practice supports AI education by creating a space to share lessons learned, best practices, and advice. We regularly host events and webinars and are forming working groups to focus on specific topics within AI. The challenge is that people learn in different ways. Classes and certifications can certainly provide a foundation, but learning to apply those skills in a specific context can be difficult. If organizations can create a culture of experimentation where people can learn by doing in a safe and controlled environment, government will be able to build skills around AI adoption. For that, we have established a page on OPM's Open Opportunities platform. Here, programs and offices across government can post micro-project opportunities or learning assignments under the Data Savvy Workforce tag. Again, this isn't just for data practitioners. Employees on program teams, acquisition teams, and HR teams can learn how AI could enhance their processes.

How can the federal government ensure its AI systems are built with citizen trust in mind?

Krista Kinnard: This point is critically important to the AI CoE. We have deep respect for the people and communities that government agencies serve and the data housed in government systems that helps agencies serve those people and communities. Part of the CoE engagement model is to embed a clear and transparent human-centered approach into all of our AI engagements.

Another critical element of developing trustworthy AI is ensuring all stakeholders have a clear understanding of the problem we are trying to solve and the desired outcome. It sounds simple, but in order to effectively monitor and evaluate the impact of any AI system we build, we have to first truly understand the problem and how we are trying to solve it.

We also emphasize creating infrastructure to support regular and iterative evaluation of data, models, and outcomes to assess the impact of the AI system, both intended and unintended.

Creating trustworthy AI is not just about the data and technology itself. We engage early with the acquisition team to ensure that they are making a smart buy. We engage with the Chief Information Security Officer and the security team early and often since approval and security checks can be a hurdle for implementation. We engage Privacy Officers to ensure AI solutions are in compliance with organizational privacy policies. By bringing in these key stakeholders early in the AI development process, we help embed responsible AI into these solutions from the onset.

What advice do you have for government agencies and public sector organizations looking to adopt AI?

Krista Kinnard: I would offer two pieces of advice. First, start small. Choose a challenge that can be easily scoped and understood with known data. This will help prove the value of the technology. Once the organization becomes more comfortable with all that is involved in building an AI solution on a smaller scale, it can move toward bigger and more complex projects. The second piece of advice I would offer is to know what AI can do, and what it cannot. This is a powerful technology that is already producing meaningful and valuable results, but it is not magic. Understanding the limitations of AI will help in selecting realistic and attainable AI projects.

Neil Chaudhry: AI is meant to augment the humans in the workforce, not replace them with synthetic media and autonomous robots or chatbots.

I always discuss what a successful AI implementation means to the partner and their frontline staff during our initial meetings, because every successful AI implementation that I have seen or been part of follows a hierarchy of people over process, process over technology, and technology as the tool used by people to improve organizational processes. As part of my discussions, I always advise the partner agency to think of how the frontline staff will use AI. My experience has shown that if the frontline staff cannot leverage AI in a meaningful way, then the AI implementation is not sustainable or actionable.

If a partner is looking to replace people, then their AI adoption strategy will not be sustainable. In addition, if a partner is looking to circumvent or bypass an established law, regulation, policy, or procedure then their AI adoption will also be unsuccessful because it will amplify the biases inherent in the new processes.

The advice I always give my partners who are looking to implement AI is to define a set of sustainable use cases for AI and measure the impact of those use cases against the existing tech debt within their organization. It may be that the agency is ready to implement AI now but waiting a year may allow the agency to implement AI in a cost-effective manner.

Read this article:

How The Federal Government's AI Center Of Excellence Is Impacting Government-Wide Adoption Of AI - Forbes

Microsoft Uses AI to Make Our Eyes Look at the Webcam – PCMag

It doesn't matter where your webcam is positioned; it's always going to be offset from the person you're talking to. The end result is that we hardly ever look people in the eye when talking to them over video chat. Microsoft has created an intelligent solution, though.

As spotted by Liliputing, the latest Windows 10 Insider Preview Build (20175) announcement contains details of a new feature Microsoft is calling "Eye Contact." It uses artificial intelligence to "adjust your gaze on video calls so you appear to be looking directly in the camera." So there's no need to remember to look at the webcam instead of the person on your screen, which no one ever does as it's simply not natural.

The one drawback of Eye Contact, at least for now, is that it only works on the Surface Pro X. That's because the Pro X contains Microsoft's SQ1 ARM processor, which it developed in partnership with Qualcomm and which includes the "artificial intelligence capabilities" required for the gaze adjustment to work, according to Microsoft. If you do own a Pro X and have access to this latest Windows Preview Build, the feature can be turned on via the Surface app. After that, it should work with any video app using the webcam.

Considering Intel's x86 processors are capable of handling AI-intensive video games, it seems likely Microsoft will eventually expand the Eye Contact feature to other models of its Surface range and hopefully to Windows 10 in general. After that, we can all look at the people we are talking to on video chat and know that they are also seeing us looking directly at them, albeit thanks to AI.

Read this article:

Microsoft Uses AI to Make Our Eyes Look at the Webcam - PCMag