Hour One wants synthetic AI characters to be your digital avatars – VentureBeat

If you ever wondered how we'll populate the metaverse, look no further than Hour One, an Israeli startup that is making replicas of people with AI avatars. These avatars can be a near-perfect visual likeness of you and speak with words fed to them by marketers who want to sell you something. An avatar can speak on your behalf in a digital broadcast when you're at home watching TV.

Such creations feel like a necessary prerequisite of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. And the trick: You'll never know if you're talking to a real person or one of Hour One's synthetic people.

"There is definitely interest in the metaverse, and we are doing experiments in the gaming space and with photorealism," Hour One business strategy lead Natalie Monbiot said in an interview with VentureBeat. "The thing that has fired up the team is this vision of a world which is increasingly virtual and a belief that we will live increasingly virtually."

She added, "We already have different versions of ourselves that appear in social media and different social channels. We represent ourselves already in this kind of digital realm. And we believe that our virtual selves will become even more independent. And we can put them to work for us. We can benefit from this as a human race. And you know that old saying, that we can't be in two places at once? Well, we believe that will no longer be true."

Hour One is one more example of the fledgling market for virtual beings. Startups focused on virtual beings have raised more than $320 million to date, according to Edward Saatchi of Fable Studios, speaking at July's Virtual Beings Summit.

But we're a little ahead of ourselves. Metaverse plays are becoming increasingly common as we all realize that there has to be something better than Zoom calls to engage in a digital way. So the Tel Aviv, Israel-based company said it raised $5 million in seed funding this week from Galaxy Interactive via its Galaxy EOS VC Fund, as well as Block.one, Remagine Ventures, Kindred Ventures, and Amaranthine. It will use that money to scale its AI-driven cloud platform and create thousands of new digital characters.

You've heard of stock photos. Hour One is talking about something similar to stock humans. They can be used to speak any kind of script in a marketing video or give a highly customized message to someone. The goal is to create characters who cross the uncanny valley.

"I think that we've crossed the uncanny valley because we have our likeness test, and our videos are actually live and in market and generating results for customers," Monbiot said. "I think that's something that's really distinctive about us. Even though we're such a young company, we've had very positive commercial traction already."

Above: Who's real and who's not? (Image Credit: Hour One)

"We create synthetic characters based on real people," Monbiot said. "We do so for commercials. We take real people, and we have this really simple process for converting real people into synthetic characters that resemble them exactly. And once we have the synthetic characters, we can program them to generate all kinds of new content at enormous speed and scale."

The competition in this space will be tough. GamesBeat will be holding our own conference, tentatively scheduled for January 26 to January 27, 2021, on topics including the metaverse, and we expect it to be full of interesting companies.

A Samsung spinoff, Neon, caught a lot of attention for creating human AI avatars at CES 2020 in January, and then it promptly caught a lot of bad press for avatars that didn't look as real as expected. But Hour One also started coming out of stealth at the same time with a plan to expand business-to-business human communication. The company showcased its real-or-synthetic likeness test at CES 2020, challenging people to distinguish between real and synthetic characters generated by its AI.

Hour One is using deep learning and generative adversarial neural networks to make its video characters. The company says it can do this in a highly scalable and cost-effective way. They're supposed to look good, and the image on top of this story looks realistic.
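Hour One has not published its architecture, but the adversarial training idea it references is standard. As a rough sketch only (the framework choice, dimensions, and layer sizes below are all assumptions, and a production video model would be far larger), here is a minimal GAN training step in PyTorch:

```python
import torch
import torch.nn as nn

# Illustrative dimensions only; Hour One's actual models are not public.
latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(                 # noise vector -> fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(             # image -> probability it is real
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):               # real_images: (batch, img_dim)
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)
    noise = torch.randn(batch, latent_dim)

    # Discriminator: label real images 1 and generated images 0.
    d_loss = bce(discriminator(real_images), real) + \
             bce(discriminator(generator(noise).detach()), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its output "real".
    g_loss = bce(discriminator(generator(noise)), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

for _ in range(3):                         # toy loop; real training runs far longer
    train_step(torch.rand(32, img_dim) * 2 - 1)
```

The two networks improve together: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones, which is what makes the approach attractive for photoreal characters.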

But the cost of missing the mark is high. Hour One will have to beat Neon in the race across the uncanny valley. And Genies is coming from another direction, with cartoon-based avatars that represent digital versions of celebrities.

Above: Hour One's real Natalie Monbiot (Image Credit: Hour One)

Hour One is working with companies in the ecommerce, education, automotive, communication, and enterprise sectors, with expanded industry applications expected throughout 2020. The company has about 100 avatars today.

The pitch is that a lower cost per character means companies will be able to engage more with their customers on every level, from digital receptionists to friendly salespeople.

"These customers can create thousands of videos simply by submitting text to these characters," Monbiot said. "It appears as though real people are actually saying those words, but we're using AI to make it happen. We're improving communication. We're obviously living in an ever-more virtual existence. And we're enabling businesses of all kinds to engage in a more human way."

"And if your avatar is speaking on your behalf somewhere and it's generating value, you'll get paid for it, even if you're not there," Monbiot said. "We have a very bright view of the future. If your avatar speaks, you can get paid for that. So we're at the beginning of a new future. And for us, that's a future in which everybody will have a synthetic character. We will have virtual versions of ourselves."

Sam Englebardt, managing director of Galaxy Interactive (and a speaker on the subject of the metaverse at our GamesBeat Summit event), calls the approach an ethical one. "Hour One is a business-to-business provider of the best synthetic video tech I've seen to date," Englebardt said in an email to GamesBeat.

Oren Aharon and Lior Hakim created Hour One in 2019 with a mission of driving the economy of the digital workforce powered by synthetic characters of real-life people. They can use blockchain technology to verify the identity of a digital character and who owns it. If characters are altered or used for deepfakes, Hour One will be able to mark them as altered and notify people of what has happened. The team has eight people.


Google’s new PAIR project wants to rethink how we use AI – CNET

Google's AI program, AlphaGo, went up against -- and defeated -- Chinese Go champion Ke Jie (on the left) at the Future of Go Summit in May in China. The match took place a year after AlphaGo bested Lee Sedol, the world's number two Go player.

AlphaGo may have defeated humans at board games, but its creators really just want us to be buddies.

In a new project named the People + AI Research Initiative (PAIR), Google's researchers are looking at the relationship between humans and artificial intelligence in the hopes of making the latter more useful to the former, the tech giant announced on its blog on Monday.

The company says it'll rethink AI on three levels: How we can use it as a tool in everyday life, how professionals in all fields can use it to make their jobs easier and how practical AI development can be taught to engineers.

Google isn't the only one making big moves to help develop the nascent field. On Monday, the Ethics and Governance of Artificial Intelligence Fund, helmed by Harvard University's Berkman Klein Center for Internet & Society and the MIT Media Lab, pledged $7.6 million to support the creation of AI that serves public interest. Plus, the tech giant last year partnered with Amazon, Facebook, IBM and Microsoft to create a new not-for-profit called the Partnership on Artificial Intelligence to Benefit People and Society.

Google says, as part of PAIR, it will introduce new open-sourced tools and educational material as well as publish research to help push AI along.



Microsoft Says AI-Powered Windows Updates Have Reduced Crashes – ExtremeTech


Microsoft has invested heavily in AI and machine learning, but you wouldn't know it from how little attention it gets compared with Google. Microsoft is using its machine learning technology to address something all long-term Windows users have experienced: faulty updates. Microsoft says that AI can help identify systems that will play nicely with updates, allowing the company to roll new versions out more quickly with fewer crashes.

It seems like we can't get a single Windows update without hearing some stories of how it completely broke one type of system or another. You have to feel for Microsoft a little: the Windows ecosystem is maddeningly complex, with uncountable hardware variants. Microsoft started using AI to evaluate computers with Windows 10, version 1803 (the April 2018 Update). It measured six PC health stats, assessed update outcomes, and loaded all the data into a machine learning algorithm. This tells Microsoft which computers are least likely to encounter problems with future updates.

By starting with the computers with the best update compatibility, Microsoft can push new features to most users in short order. With most OS rollouts, things move very slowly at first while companies remain vigilant for problems. PCs that the AI determines are likely to have issues get pushed down the update queue while Microsoft zeros in on the bugs.
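Microsoft has not released the model or features themselves, so the following is only a hedged sketch of the general pattern: train a classifier on historical update outcomes, then rank the fleet by predicted risk so the safest machines update first. The feature values, classifier choice, and threshold below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical stand-ins for the six "PC health" signals from the 1803
# rollout; the real features, labels, and model are not public.
rng = np.random.default_rng(0)
X_history = rng.normal(size=(10_000, 6))    # e.g. crash history, driver age, ...
y_history = (X_history[:, 0] + 0.5 * X_history[:, 1]
             + rng.normal(size=10_000) > 1.5).astype(int)  # 1 = bad update outcome

model = GradientBoostingClassifier().fit(X_history, y_history)

# Offer the update to the machines with the lowest predicted risk first;
# higher-risk PCs drop down the queue while the bugs get investigated.
fleet = rng.normal(size=(1_000, 6))
risk = model.predict_proba(fleet)[:, 1]
rollout_order = np.argsort(risk)            # safest machines first
```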

The ML models seem effective, even if Microsoft didn't bother to label the Y-axis.

The first AI-powered deployment was a success, with adoption rates higher than all previous Windows 10 updates. Microsoft expanded its original six PC metrics to a whopping 35 as of the Windows 10, version 1903 rollout (May 2019). The company claims this makes update targeting even more accurate. This does not guarantee perfect updates, though. Microsoft's blog post glosses over the 1809 update from late 2018. That rollout used AI technology, but you might recall the widespread file deletion bug that caused Microsoft to pause the release. AI might help determine compatibility, but it can't account for unknown bugs like that.

Still, Microsoft is happy with the results from its machine learning deployments. According to the new blog post, systems chosen for updates by the algorithm have fewer than half as many system uninstalls, half as many kernel-mode crashes, and one-fifth as many post-update driver conflicts. Hopefully, you can look forward to fewer Windows update issues going forward, and you'll have AI to thank.



How AI is helping top tech talent connect with the best opportunities – Fast Company

Despite what's happened to the world economy over the past year, and the continued uncertainty of what lies ahead, when it comes to hiring top talent it remains a job candidate's market. The current focus of the conversation is the impact of artificial intelligence on hiring practices. However, there are some key considerations that demonstrate the immense number of choices talent will always have.

Right now, two seemingly contradictory things are happening in business.

Artificial intelligence is growing tremendously. According to Statista, AI is growing approximately 54 percent annually and will be one of the next great technological shifts, like the advent of the computer age or the smartphone revolution.

Humans are not becoming less important. Organizations are becoming more sophisticated about measuring the value, impact, and importance of people. In turn, talent (or human resources, the management of workforces) is receiving more attention. PwC reports that what used to be a slow-moving corporate technology space is now a $148 billion market of HR cloud solutions to address the needs for the future of work.

Knowledge workers may not be aware of the impact AI is quickly making (and will continue to make) in the way large enterprises manage their workforces.

AI is helping people:

Find the best fit for their skills within their chosen careers. For too long, talent has been managed poorly, often relying on inaccurate job descriptions and uninformed interview processes, and leaning too heavily on who writes the best CV. AI holds the promise that people can reach their potential, which benefits their employers as well.

End the emphasis on who you know. Connections and networks have ruled the employment world for decades. AI has the power to change that and home in on people's capabilities as the basis of hiring and promotion decisions.

Allow people to apply their capabilities to new industries. For so long, people have been laid off through no fault of their own. They worked in declining industries and have struggled to translate their capabilities to new roles in new industries. AI changes this. Think about a restaurant employee who has worked for 20 years at one place, only to lose the job. They may feel like all they know is food and hospitality. AI, having analyzed so many careers, understands how that person's teamwork, inventory, supply chain, budgeting, and other skills can be applied to a different industry. It can match them to a new job based on capabilities and potential.

AI is making an impact in all of the ways companies manage work, from hiring and succession planning to retention and learning. It is replacing Boolean keywords with neural networks that provide opportunities employees never had before.

AI requires massive amounts of data. In some cases, it has been used to examine more than a billion careers and more than a million skills. By using neural networks, AI can crunch that anonymized data and learn about people and their career potential like never before. The AI can see that if a person worked at a company during a certain set of years, they are very likely to have a certain set of skills, which that person easily could have left off their CV, thereby missing out on a chance of being selected for a job.

Take two potential job applicants, for example. One worked as a project manager for Google for five years. Another was at Uber during that time. If it has enough data, the AI knows a lot about each of their work histories, and how they're different, even if they had the same title. It has seen so many people with their capabilities come and go from these companies that, even without a complete CV, it is able to infer skills about people based on their titles and the time and place they worked.

The AI can also see adjacent skills and other relationships between skills. If a person knows one computer language, the AI knows they can pick up a similar one. Only advanced AI can do this. Only now do we have the technology to truly see someone's potential and capability. This offers the promise of a great career to everyone based on what they can do, not who they know.
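Eightfold has not published its models, and a real system would use neural embeddings over millions of profiles rather than simple counting. Still, a toy co-occurrence version conveys the inference idea described above (every record, skill, and threshold here is made up):

```python
from collections import Counter

# Toy anonymized career records: (title, company, skills listed on a CV).
careers = [
    ("project manager", "google", {"okrs", "sql", "roadmapping"}),
    ("project manager", "google", {"okrs", "roadmapping", "python"}),
    ("project manager", "google", {"okrs", "roadmapping"}),
    ("project manager", "uber",   {"dispatch ops", "sql"}),
]

def infer_skills(title, company, min_share=0.6):
    """Skills held by most people with this title at this company,
    attributed even when an individual CV omits them."""
    matches = [skills for t, c, skills in careers if (t, c) == (title, company)]
    counts = Counter(s for skills in matches for s in skills)
    return {s for s, n in counts.items() if n / len(matches) >= min_share}

print(infer_skills("project manager", "google"))
# {'okrs', 'roadmapping'}: inferred from title, company, and era,
# and distinct from what the same title at Uber would imply
```

Skill adjacency works on the same data: skills that frequently co-occur across careers (say, two similar programming languages) can be treated as near neighbors, so a match is possible even when the exact skill is missing from a CV.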

There are a lot of hot technologies a knowledge worker or an engineer can work on right now. The use of AI to improve people's careers and to shape the future of talent is here to stay. Providing opportunities to people based on their potential will improve large businesses and every organization going forward.


Vinodh Kumar Ravindranath is the head of artificial intelligence at Eightfold AI.


AI is now being used to shortlist job applicants in the UK – let's hope it's not racist – The Next Web

It's oft-repeated that artificial intelligence will be a danger to our jobs. But perhaps in a not-so-surprising twist, AI is also being increasingly used by companies to hire candidates.

According to a report by The Telegraph, AI-based video interviewing software, such as that developed by HireVue, is being leveraged by UK companies for the first time to shortlist the best job applicants.

"Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop," the report said.

HireVue, a Utah-based pre-employment assessment AI platform founded in 2004, employs machine learning to evaluate candidate performance in videos by training an AI system centered around 25,000 usable data points. The company's software is used by over 700 companies worldwide, including Intel, Honeywell, Singapore Airlines, and Oracle.

"There are lots of subtle cues we subconsciously make sense of – think facial expressions or intonation – but these are missed when we zone out," the company notes on its website.

The videos record an applicant's responses to preset interview questions, which are then analyzed by the software for intonation, body language, and other parameters, looking for matches against traits of previous successful candidates.

It's worth noting that Unilever experimented with HireVue in its recruitment efforts as early as 2017 in the US.

From recommending what to binge watch over the weekend to booking the cheapest flight for your next vacation, AI and machine learning have quickly emerged as two of the most disruptive forces ever to hit the economy.

The technology is now doing more than ever, for both good and bad. It's being deployed in health care; it's helping artists synthesize death metal music. On the other hand, it's enabling high-tech surveillance and even judging your creditworthiness.

They're also scrutinizing your resume, transforming both job seeking and the workplace, and revamping the very way companies look for candidates, get the most out of employees, and retain top talent.

But just as algorithms steadily infiltrate different aspects of your day-to-day life and make decisions on your behalf, they have also come progressively under scrutiny for being as biased as the humans they sometimes replace.

The prevailing notion is that by letting a computer program make hiring decisions for a company, the process can be made more efficient, both by selecting the most qualified people from a deluge of applications and by side-stepping human bias to identify top talent from a diverse pool of candidates.

HireVue, however, claims it has removed data points that led to bias in its AI models.

Yet as is widely established, AIs are only as good as the data they're trained on. Bad data that contain implicit racial, gender, or ideological biases can creep into the systems, resulting in a phenomenon called disparate impact, wherein some candidates may be unfairly rejected or excluded altogether because they don't fit a certain definition of fairness.
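One concrete way disparate impact gets quantified is the "four-fifths rule" used in US employment guidance: compare each group's selection rate against the most-selected group's rate, and treat anything below 0.8 as a red flag. A minimal sketch with invented numbers:

```python
def disparate_impact(selected, group):
    """Ratio of each group's selection rate to the highest group's rate.
    Under the common four-fifths rule, values below 0.8 are a red flag."""
    rates = {}
    for g in set(group):
        picks = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(picks) / len(picks)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# 1 = advanced by the screening model; toy data for two groups.
selected = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
group    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(disparate_impact(selected, group))
# group b's rate is a third of group a's: well below 0.8, a red flag
```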

Regulating the use of AI-based tools, then, necessitates algorithmic transparency, bias testing, and assessment of the risks associated with automated discrimination.

But most importantly, it calls for collaboration between engineers, domain experts, and social scientists. This is the key to understanding the trade-offs between different notions of fairness and to helping us define which biases are acceptable and which are not.



4 Things to Consider Before You Start Using AI in Personnel Decisions – Harvard Business Review

Which candidate should we hire? Who should be promoted? How should we choose which people get which shifts? In the hope of making better and fairer decisions about personnel matters such as these, companies have increasingly adopted AI tools only to discover that they may have biases as well. How can we decide whether to keep human managers or go with AI? This article offers four considerations.

The initial promise of artificial intelligence as a broad-based tool for solving business problems has given way to something much more limited but still quite useful: algorithms from data science that make predictions better than we have been able to do so far.

In contrast to standard statistical models that focus on one or two factors already known to be associated with an outcome like job performance, machine-learning algorithms are agnostic about which variables have worked before or why they work. The more the merrier: It throws them all together and produces one model to predict some outcome like who will be a good hire, giving each applicant a single, easy-to-interpret score as to how likely it is that they will perform well in a job.
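As a hedged illustration of that "throw them all together" pattern (synthetic data and an off-the-shelf model; commercial vendors' systems are proprietary and far more elaborate), note how the whole output reduces to one probability per applicant:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 500 past employees with 20 assorted features (test scores, tenure,
# interview ratings, ...); the algorithm is agnostic about which matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = (X @ rng.normal(size=20) + rng.normal(size=500) > 0).astype(int)  # 1 = performed well

scorer = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

applicant = rng.normal(size=(1, 20))
print(f"predicted chance of performing well: {scorer.predict_proba(applicant)[0, 1]:.0%}")
```

The caveat discussed next applies directly here: if the historical labels encode biased judgments, the single tidy score reproduces them at scale.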

No doubt because the promise of these algorithms was so great, the recognition of their limitations has also gotten a lot of attention, especially the fact that if the initial data used to build the model is biased, then the algorithm generated from that data will perpetuate that bias. The best-known examples have been in organizations that discriminated against women in the past: there, job performance data is also biased, which means algorithms based on that data will be biased as well.

So how should employers proceed as they contemplate adopting AI to make personnel decisions? Here are four considerations:

1. The algorithm may be less biased than the existing practices that generate the data in the first place. Let's not romanticize how poor human judgment is and how disorganized most of our people management practices are now. When we delegate hiring to individual supervisors, for example, it is quite likely that they may each have lots of biases in favor of and against candidates based on attributes that have nothing to do with good performance: Supervisor A may favor candidates who graduated from a particular college because she went there, while Supervisor B may do the reverse because he had a bad experience with some of its graduates. At least algorithms treat everyone with the same attributes equally, albeit not necessarily fairly.

2. We may not have good measures of all of the outcomes we would like to predict, and we may not know how to weight the various factors in making final decisions. For example, what makes for a good employee? They have to accomplish their tasks well, they also should get along with colleagues well, fit in with the culture, stay with us and not quit, and so forth. Focusing on just one aspect where we have measures will lead to a hiring algorithm that selects on that one aspect, often when it does not relate closely to other aspects, such as a salesperson who is great with customers but miserable with co-workers.

Here again, it isn't clear that what we are doing now is any better: An individual supervisor making a promotion decision may be able in theory to consider all those criteria, but each assessment is loaded with bias, and the way they are weighted is arbitrary. We know from rigorous research that the more hiring managers use their own judgment in these matters, the worse their decisions are.

3. The data that AI uses may raise moral issues. Algorithms that predict turnover, for example, now often rely on data from social media sites, such as Facebook postings. We may decide that it is an invasion of privacy to gather such data about our employees, but not using it comes at the price of models that will predict less well.

It may also be the case that an algorithm does a good job overall in predicting something for the average employee but does a poor job for some subset of employees. It might not be surprising, for example, to find that the hiring models that pick new salespeople do not work well at picking engineers. Simply having separate models for each would seem to be the solution. But what if the different groups are men and women or whites and African Americans, as appears to be the case? In those cases, legal constraints prevent us from using different practices and different hiring models for different demographic groups.

4. It is often hard, if not impossible, to explain and justify the criteria behind algorithmic decisions. In most workplaces now, we at least have some accepted criteria for making employment decisions: He got the opportunity because he has been here longer; she was off this weekend because she had that shift last weekend; this is the way we have treated people before. If I don't get the promotion or the shift I want, I can complain to the person who made the decision. He or she has a chance to explain the criterion and may even help me out next time around if the decision did not seem perfectly fair.

When we use algorithms to drive those decisions, we lose the ability to explain to employees how those decisions were made. The algorithm simply pulls together all the available information to construct extremely complicated models that predict past outcomes. It would be highly unlikely if those outcomes corresponded to any principle that we could observe or explain other than to say, "The overall model says this will work best." The supervisor can't help explain or address fairness concerns.

Especially where such models do not perform much better than what we are already doing, it is worth asking whether the irritation they will cause employees is worth the benefit. The advantage, say, of just letting the most senior employee get first choice in picking his or her schedule is that this criterion is easily understood, it corresponds with at least some accepted notions of fairness, it is simple to apply, and it may have some longer-term benefits, such as increasing the rewards for sticking around. There may be some point where algorithms will be able to factor in issues like this, but we are nowhere close to that now.

Algorithmic models are arguably no worse than what we are doing now. But their fairness problems are easier to spot because they happen at scale. The way to solve them is to get more and better measures: data that is not biased. Doing that would help even if we were not using machine-learning algorithms to make personnel decisions.


Microsoft Upgrades Azure AI to Analyze Health Records and Streamline Voice App Creation – Voicebot.ai

on July 13, 2020 at 8:00 am

Microsoft's artificial intelligence services are now able to mine electronic medical records for new insight and simplify building or improving voice apps after a spate of updates to the Azure AI platform. Azure Cognitive Services provides enterprise-level AI services to companies that want to apply artificial intelligence to their work.

The COVID-19 health crisis has accelerated the use of AI as a doctor's assistant in record-keeping. Azure connects doctors' notes and conversations with patients to electronic medical records, both through the Project EmpowerMD Intelligent Scribe Service and as a host platform for Nuance's virtual assistant for doctors after the two companies reached an agreement last fall. Now, Azure can help medical professionals glean new conclusions from that data using Text Analytics for health. Microsoft took the existing Text Analytics feature and trained it on medical data like clinical notes and protocols, teaching it to find and share insights from the huge amounts of medical data doctors normally have to pore through manually to find patterns. Though still in preview, Microsoft worked with research groups to create a search engine specifically about COVID-19 using both Text Analytics and Cognitive Search that should help those hunting for treatments for the virus. The updated Text Analytics feature is able to not only analyze facts, but also apply emotional tags to topics in any context, whether healthcare, sales, or another industry.
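For developers, Text Analytics for health is surfaced through the Azure SDKs. A sketch with the Python client (based on the azure-ai-textanalytics 5.1 package; the endpoint, key, and note text are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

notes = ["Patient reports chest pain; prescribed 81 mg aspirin daily."]
poller = client.begin_analyze_healthcare_entities(notes)

for doc in poller.result():
    for entity in doc.entities:
        # e.g. "chest pain" (SymptomOrSign), "81 mg" (Dosage), "aspirin" (MedicationName)
        print(entity.text, entity.category, round(entity.confidence_score, 2))
```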

"As the world adjusts to new ways of working and staying connected, we remain committed to providing Azure AI solutions to help organizations invent with purpose," Azure AI corporate vice president Eric Boyd wrote in the announcement. "Building on our vision to empower all developers to use AI to achieve more, today we're excited to announce expanded capabilities within Azure Cognitive Services."

Microsoft is also opening up to all Azure users the Form Recognizer feature it showcased a little over a year ago. Form Recognizer is designed to use AI to grasp what a form full of data in tables and non-standard formats means, and to pull out that information for easier analysis. While likely applicable to some of the forms used in healthcare, Microsoft specifically cited financial organizations such as Capgemini Group's Sogeti and Wilson Allen as finding value in the feature for processing loan applications and other fiduciary paperwork.

Azure didn't neglect the voice facet of its AI in the update either. Most notably, it made Custom Commands universally available to developers. Custom Commands simplifies connecting voice apps to devices that can be controlled within straightforward parameters like light levels or the temperature on a thermostat. The AI comes with a wide range of commands it understands in its templates and the ability to switch among different topics and types of requests automatically.

"People and organizations continue to look for ways to enrich customer experiences while balancing the transition to digital-led, touch-free operations," Boyd wrote. "Advancements in voice technology are empowering developers to create more seamless, natural, voice-enabled experiences for customers to interact with brands. [Custom Commands] brings together Speech to Text for speech recognition, Language Understanding for capturing spoken entities, and voice response with Text to Speech, to accelerate the addition of voice capabilities to your apps with a low-code authoring experience."

Those capabilities include 15 new voices built with Azure's Neural Text to Speech tech. The voices are designed to sound natural, using real people's voices to teach the AI to sound like a human. The voices include a mix of new languages and dialects, as well as new voices for languages already used by the AI. The new voices include two kinds of Arabic, Catalan, Cantonese, and Taiwanese Mandarin, among others. It's the same technology used by the BBC to build its new Beeb voice assistant, but it points to the global enterprises Microsoft hopes will use Azure's technology.
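Boyd's building blocks map onto the Azure Speech SDK. A minimal round trip with the azure-cognitiveservices-speech Python package is sketched below; the key, region, and thermostat phrasing are placeholders, and the Custom Commands intent mapping itself is authored in the Speech Studio portal rather than in code:

```python
import azure.cognitiveservices.speech as speechsdk

config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Speech to Text: capture one utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=config)
heard = recognizer.recognize_once().text   # e.g. "Set the thermostat to 72."

# Text to Speech: reply with one of the neural voices.
config.speech_synthesis_voice_name = "en-US-AriaNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=config)
synthesizer.speak_text_async("Setting the thermostat to 72 degrees.").get()
```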


Eric Hal Schwartz is a Staff Writer and Podcast Producer for Voicebot.AI. Eric has been a professional writer and editor for more than a dozen years, specializing in the stories of how science and technology intersect with business and society. Eric is based in New York City.


What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are: reactive machines (systems such as Deep Blue that respond to the current situation with no memory of the past), limited memory (systems that can use recent observations to inform decisions, as self-driving cars do), theory of mind (systems that would understand others' beliefs and intentions), and self-awareness (machines with a sense of self, which do not yet exist).

AI is incorporated into a variety of different types of technology. Here are seven examples.

Artificial intelligence has made its way into a number of areas. Here are six examples.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.


Could a new academy solve the AI talent problem? – Defense Systems


Defense technology experts think adding a military academy could be the solution to the U.S. government's tech talent gap.

"The canonical view is that the government cannot hire these people because they will get paid more in private industry," said Eric Schmidt, former Google chief and current chair of the Defense Department's Innovation Advisory Board, during a July 29 Brookings Institution virtual event.

"My experience is that people are patriotic and that you have a large number of people -- and this I think is missed in the dialogue -- a very large number of people who want to serve the country that they love. And the reason that they're not doing it is there's no program that makes sense to them."

Schmidt's comments come as the National Security Commission on Artificial Intelligence, which he chairs, issued its second quarterly report with recommendations to Congress on how the U.S. government can invest in and implement AI technology.

One key recommendation: A national digital service academy, to act like the civilian equivalent of a military service academy to train technical talent. That institution would be paired with an effort to establish a national reserve digital corps to serve on a rotational basis.

Robert Work, former deputy secretary of defense who is now NSCAI's vice chair, said the academy would bring in people who want to serve in government and would graduate students to serve as full-time federal employees at the GS-7 to GS-11 pay grades. Members of the digital corps would serve five years, at 38 days a year, helping government agencies figure out how best to implement AI.

For the military, the commission wants to focus on creating a clear way to test existing service members' skills and better gauge the abilities of incoming recruits and personnel.

"We think we have a lot of talent inside the military that we just aren't aware of," Work said.

To remedy that, Work said the commission recommends grading, via a programming proficiency test, to identify government and military workers who have software development experience. The recommendations also include adding a computational thinking component to the armed services' vocational aptitude battery to better identify incoming talent.

"I suspect that if we can convince the Congress to make this real and the president signs off hopefully then not only will we be successful but we'll discover that we need 10 times more. The people are there and the talent is available," Schmidt said.

Photo credit: Eric Schmidt at a March 2020 meeting of the Defense Innovation Board in Austin, Texas; DOD photo by EJ Hersom.

This article first appeared on FCW, a Defense Systems partner site.

About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. Follow her on Twitter @lalaurenista.



Cognitive AI and the Power of Intelligent Data Digitalization – Analytics Insight

In a quest to decode what keeps the world moving, enterprises across the world are baffled. It is not precious metals or even cryptocurrency: it is data. The adage that data is the new oil holds true, and soon every company in the world will either buy or sell data, with the value of this corporate asset gaining prominence with each passing day.

Data fuels digital transformation that drives a mammoth disruption across all industries. It is the key differentiator, coming at a massive speed characterised by volume, variety, velocity and veracity in a very live environment.

The question remains unanswered: how do enterprises gain the most from this valuable resource? The answer is through data digitalization. As the name suggests, data digitization is the process by which physical or manual data files like text, audio, images, and video are converted into digital forms. The perks of digitized data are plenty.

The initial step of data digitalization starts with the identification of data needs based on client requirements. This is based on system analysis, requirement specifications and system design.

After an enterprise assesses its data requirements, the next step is to develop a technology roadmap and send it for approval and testing. After the technology roadmap is approved, the data source chart is developed, which may include a printed format converted into a digital format.

The old images are scanned, while the faded images are recovered using advanced digital correction software. The sound and video data are retrieved through the data capturing software which is ultimately converted in the digital format.

The printed documents are checked for physical accuracy. The process involves Optical Character Recognition (OCR) software scanning, where the output is checked manually by proof-readers and subsequently converted into PDF, MS-Word, ASCII, and HTML formats.
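As a small illustration of the OCR step, here is a sketch using the open-source Tesseract engine via pytesseract (file names are placeholders; commercial digitization shops typically pair proprietary OCR with the manual proofreading described above):

```python
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine installed locally

# One OCR pass over a scanned page; the raw text then goes to proofreaders
# before conversion to PDF, MS-Word, ASCII, or HTML formats.
page = Image.open("scanned_page.png")
text = pytesseract.image_to_string(page)

with open("scanned_page.txt", "w", encoding="utf-8") as out:
    out.write(text)
```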

The takeaways from Data Digitalization

In data digitization, the physical documents are uploaded online and scanned to a virtual digital medium, which is then digitized to a high-quality format and structured according to the customer's needs.

Enterprises can also opt for cases in which these documents are transferred to an electronic archive which complies with the stringent security requirements and provides an option to manage data round the clock from any computer anywhere in the world using web applications.

In essence, by leveraging the power of cognitive computing algorithms, enterprises can synthesize raw data from various information sources and weigh multiple options to arrive at conclusive answers. To achieve this, cognitive systems encapsulate self-learning models using data mining, pattern recognition, and natural language processing (NLP) algorithms.

To use data digitalization systems, enterprises require vast amounts of structured and unstructured data fed to machine learning algorithms. Over time, these cognitive systems refine the way they identify patterns and process data, becoming self-sufficient enough to anticipate new problems and model possible alternative solutions.

While AI relies on algorithms for problem-solving and pattern identification in hidden data, cognitive computing systems have the loftier goal of creating models that mimic the brain's reasoning process to solve the modern concerns of data digitalization and data adaptability.

The increased resilience towards data and digitalization will change the world of data forever. Is your organisation ready with its own digitalized data pipelines?


About the Author

Kamalika Some is an NCFM Level 1 certified professional with previous professional stints at Axis Bank and ICICI Bank. An MBA (Finance) and PGP Analytics by education, Kamalika is passionate about writing on analytics and the technological change it drives.


How The Federal Governments AI Center Of Excellence Is Impacting Government-Wide Adoption Of AI – Forbes

In February 2019, the President signed Executive Order 13859 on the American AI Initiative, which set out the United States' national strategy on artificial intelligence. Even prior to this, however, government agencies had been heavily invested in AI and machine learning, applying it to a wide range of applications. While those previous AI implementations might have happened at the discretion of various agencies, this executive order put more of an emphasis on implementing AI more widely within the federal government.

In the context of this wider push for AI, many federal agencies are accelerating their adoption of AI but are struggling with the best way to put those AI efforts into practice. AI and machine learning can bring about transformational change, but many federal government decision-makers struggle with the knowledge, skill sets, and best practices needed to move forward. To meet this need, the General Services Administration's (GSA) Federal Acquisition Service (FAS) Technology Transformation Services (TTS) and the GSA AI Portfolio and Community of Practice created GSA's Artificial Intelligence (AI) Center of Excellence (CoE) to support the adoption of AI through direct partnerships, enterprise-level transformation, and discovery work.


The AI Center of Excellence aims to improve and enhance AI implementations in the government and help agencies on their journey with AI adoption. The relatively small team at the GSA AI CoE is helping to bring about some very impressive changes within the federal government. In this article, Neil Chaudhry, Director, AI Implementations at the AI Center of Excellence within the General Services Administration (GSA), and Krista Kinnard, Director, Artificial Intelligence Center of Excellence at Technology Transformation Services (TTS) at GSA, share more about what the CoE is all about, some of the most interesting use cases they have seen so far with the government's adoption of AI, why trustworthy and transparent AI is important to gain citizen trust in AI systems, and what they believe the future holds for AI.

What is the GSA AI Center of Excellence?

Krista Kinnard: GSA's Artificial Intelligence (AI) Center of Excellence (CoE) accelerates adoption of AI to discover insights at machine speed within the federal government. Housed within GSA's Federal Acquisition Service (FAS) Technology Transformation Services (TTS), and coupled with the GSA AI Portfolio and Community of Practice, this collaboration can engage with agencies from information-sharing to execution. As part of the FAS TTS dedication to advancing innovation and IT transformation across the government, the AI CoE supports the adoption of AI through direct partnerships, enterprise-level transformation, and discovery work. Working in a top-down approach, the CoE engages at the executive level of the organization to drive success across the enterprise while also supporting discovery and implementation of AI in a consultative type of approach. This also means building strong partnerships with industry. The private sector is quickly producing new and innovative AI-enabled technologies that can help solve government challenges. By partnering with industry, we are able to bring the latest innovations to government, helping build a more technologically enabled, future-proof method to meet government missions.

How did GSA's AI Center of Excellence get started?

Krista Kinnard: GSA's CoE program was conceived during conversations with the White House and innovative industry players, looking at government service delivery and the contrasts between the ease and convenience of interacting with private companies and the sometimes challenging interactions with the government. The focus was on a higher level of change and a redesigning of government services, based on access to the best ideas from industry and the most updated technology advances in government.

It is a signature initiative of the Administration, designed by the White House's Office of American Innovation and implemented at GSA in 2017. The CoE approach was established to scale and accelerate IT transformation at federal agencies. The approach leverages a mix of government talent and private sector innovation in partnership while centralizing best practices and expertise into the CoE.

The program's goal is to facilitate repeatable and sustainable transformation, scaling and accelerating transformation by continually building on lessons learned from current and prior agencies. Since inception, FAS TTS has formed six CoEs: Artificial Intelligence, Cloud Adoption, Contact Center, Customer Experience, Data & Analytics, and Infrastructure Optimization. These six capability areas are typically the key focus areas needed by an organization when driving IT modernization and undergoing a digital transformation. The AI CoE was specifically designed in support of the Executive Order on Maintaining American Leadership in Artificial Intelligence.

How is the AI Center of Excellence being applied to improve or otherwise enhance whats happening with AI in the government?

Krista Kinnard: Because AI has become such a technology of interest, the CoE is focused on partnering with federal agencies to identify common challenges both in mission delivery and in mission support that can be enhanced using this technology. We are not interested in developing AI solutions for the sake of technology. We are interested in helping government agencies understand their mission delivery and support challenges so that we can work together to create a clear path to a meaningful AI solution that provides true value to the organization and the people it serves.

Why is it important for the Federal Government to adopt AI?

Neil Chaudhry: Data on USAspending.gov in 2019 showed that the federal government spends over $1.1 trillion on citizen services per year. The American public, conditioned by the private sector, expect better engagement with government agencies. Using AI at scale can help modernize the delivery of services while improving the effectiveness and efficiency of government services.

AI can help in many ways, such as proactively identifying trends in service or projected surges in service requirements. AI is excellent at pattern recognition and can help federal programs identify anomalous activity or suspected fraud much faster than humans can. AI can speed service delivery by automatically resolving routine claims, thus freeing up federal employees to focus on more complex problems that require a human touch.

Where do you see federal agencies today in their AI adoption?

Neil Chaudhry: It really varies. Overall, we see federal agencies as cautiously optimistic in the adoption of AI. Every large federal agency is executing on one or a combination of proofs of concept, pilot projects, or technology demonstration projects related to AI technologies while other federal agencies with mature data science practices are further along in their AI exploration, for example, implementing Robotic Process Automation, Chatbots, fraud detection tools, and automated translation services. The common thread here is that all agency leaders understand that using AI provides a competitive advantage in the marketplace for delivery of citizen services in a cost effective and impactful manner and are actively supporting AI efforts in their agencies.

How is the Federal government adopting AI compared to private industry?

Neil Chaudhry: Within the AI CoE, we have been able to develop a very broad and very deep perspective on government-wide efforts related to AI adoption because we work with many federal agencies at different stages of AI adoption. Right now, most federal agencies are looking to institutionalize AI as an enabling work stream in a sustainable way; in this sense, they are very similar to the private sector in terms of AI adoption.

However, the crucial distinction between AI adoption in the private sector and public sector is that the federal government is heavily invested in learning resources like Communities of Practice that focus on sharing use cases, lessons learned, and best practices.

How do you see process automation fitting into the overall AI landscape?

Neil Chaudhry: Process automation is a critical component of applied AI. It is one of the best examples of augmented intelligence in this space right now. Process automation is critical because it is the key to upskilling knowledge workers in the federal workforce. It can take the drudgery out of routine work and free up time for these practitioners to do what they do best: come up with innovative solutions to solve ordinary problems in extraordinary ways. It can also reduce the amount of rework on service requests and claims applications due to human error, by virtue of built-in error checking that gets smarter as more requests are routed through the AI application.

What are some of the most interesting use cases you've seen so far with the government's adoption of AI?

Krista Kinnard: There are a number of impactful use cases. Broadly, we see a lot that focus on four outcomes: increased speed and efficiency, cost avoidance and cost saving, improved response time, and increased quality and compliance. We see these in a number of applications that enable agencies to provide direct service to the American public in the form of intelligent chatbots and innovative call centers for customer support. We also are starting to see AI making progress in the fraud detection and prevention space to ensure the best use and allocation of government funds. One of the biggest areas where we've started to see advancement is in data management. Agencies are using intelligent systems to automate both collection and aggregation of government data, as well as provide deeper understanding and more targeted analysis.

We have seen that the potential for use of natural language processing (NLP) is huge in government. NLP enables us to read and understand free-form text, like that of a form used to apply for benefits, or document government decisions. So much of government data exists in government forms with open text fields and government documents, like memos and policy documents. NLP can really help to understand the relationships between these data and provide deeper insight for government decision making.
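As a hedged illustration of what machine-reading a free-text field can look like, here is an off-the-shelf named-entity pass with spaCy; the form sentence is invented, and agency systems would use models tuned to their own documents:

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

form_text = ("Applicant Maria Gonzalez requests benefits effective "
             "March 1, 2021, following employment at Acme Corp in Ohio.")

# Pull out the people, dates, organizations, and places buried in the text.
for ent in nlp(form_text).ents:
    print(ent.text, ent.label_)
# Roughly: Maria Gonzalez PERSON / March 1, 2021 DATE / Acme Corp ORG / Ohio GPE
```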

What do you see as critical needs for education and training around AI?

Neil Chaudhry: At its core, AI is operations research and applied statistical analysis on steroids. The AI workforce of the future needs a fundamental understanding of statistical concepts, probability concepts, decision science concepts, optimization techniques, queuing theories, and the various problem-solving methodologies used in the business community. In addition, the AI workforce of the future needs periodic training around ethics, critical thinking, collaboration, and working in diverse teams, to name a few, to effectively understand things like global data sets that are generated by people with different norms and values.

The critical needs for education and training for AI revolve around soft skills, such as flexibility, empathy, and the ability to give and receive feedback, along with critical thinking, reasoning, and decision-making. Any seasoned AI practitioner has experienced instances in AI research where we end up correlating shark bites with ice cream sales. So the ability to seek out subject matter experts, convince an organization to share proprietary datasets, and communicate actionable insights are all critical needs for education and training around AI.

What is the CoE doing to help educate and inform government employees when it comes to AI?

Krista Kinnard: Education and training is a priority for many government agencies. Employees are passionate about serving the mission of their agency and delivering quality service to the American people. At the CoE, we aim to empower them through the use of AI as a tool. As such, a critical component of the CoE model is to learn by doing; it's experiential learning with real-world application, with a little coaching as they gain their AI footing. As our technical experts partner with agencies, we engage with their workforce in every step of the process so that when the CoE completes an engagement, the agency has a team of people who know the solution we delivered, can take ownership of its success, and can repeat the process for future innovation.

Beyond partnering, the CoE reaches out more broadly to share experiences through the government-wide AI Community of Practice. The AI Community of Practice supports AI education by creating a space to share lessons learned, best practices, and advice. We regularly host events and webinars and are forming working groups to focus on specific topics within AI. The challenge is that people learn in different ways. Classes and certifications can certainly provide a foundation, but learning to apply those skills in a specific context can be difficult. If organizations can create a culture of experimentation where people can learn by doing in a safe and controlled environment, government will be able to build skills around AI adoption. For that, we have established a page on OPM's Open Opportunities platform. Here, programs and offices across government can post micro-project opportunities or learning assignments under the Data Savvy Workforce tag. Again, this isn't just for data practitioners. Employees on program teams, acquisition teams, and HR teams can learn how AI could enhance their processes.

How can the Federal Government ensure their AI systems are built with citizen trust in mind?

Krista Kinnard: This point is critically important to the AI CoE. We have deep respect for the people and communities that government agencies serve and the data housed in government systems that helps agencies serve those people and communities. Part of the CoE engagement model is to embed a clear and transparent human-centered approach into all of our AI engagements.

Another critical element of developing trustworthy AI is ensuring all stakeholders have a clear understanding of the problem we are trying to solve and the desired outcome. It sounds simple, but in order to effectively monitor and evaluate the impact of any AI system we build, we have to first truly understand the problem and how we are trying to solve it.

We also emphasize creating infrastructure to support regular, iterative evaluation of data, models, and outcomes to assess the impact of the AI system, both intended and unintended.

Creating trustworthy AI is not just about the data and technology itself. We engage early with the acquisition team to ensure that they are making a smart buy. We engage with the Chief Information Security Officer and the security team early and often since approval and security checks can be a hurdle for implementation. We engage Privacy Officers to ensure AI solutions are in compliance with organizational privacy policies. By bringing in these key stakeholders early in the AI development process, we help embed responsible AI into these solutions from the onset.

What advice do you have for government agencies and public sector organizations looking to adopt AI?

Krista Kinnard: I would offer two pieces of advice. First, start small. Choose a challenge that can be easily scoped and understood, with known data. This will help prove the value of the technology. Once the organization becomes more comfortable with all that is involved in building AI on a smaller scale, it can move toward bigger and more complex projects. The second piece of advice I would offer is to know what AI can do, and what it cannot. This is a powerful technology that is already producing meaningful and valuable results, but it is not magic. Understanding the limitations of AI will help in selecting realistic and attainable AI projects.

Neil Chaudhry: AI is meant to augment the humans in the workforce, not replace them with synthetic media and autonomous robots or chatbots.

I always discuss what a successful AI implementation means to the partner and their frontline staff during our initial meetings, because every successful AI implementation that I have seen or been part of follows a hierarchy: people over process, process over technology, and technology as the tool used by people to improve organizational processes. As part of those discussions, I always advise the partner agency to think about how frontline staff will use AI. My experience has shown that if frontline staff cannot leverage AI in a meaningful way, then the AI implementation is neither sustainable nor actionable.

If a partner is looking to replace people, then their AI adoption strategy will not be sustainable. In addition, if a partner is looking to circumvent or bypass an established law, regulation, policy, or procedure, then their AI adoption will also be unsuccessful, because it will amplify the biases inherent in the new processes.

The advice I always give partners who are looking to implement AI is to define a set of sustainable use cases for AI and measure the impact of those use cases against the existing tech debt within their organization. It may be that the agency is ready to implement AI now, but waiting a year might allow it to implement AI in a more cost-effective manner.

Read this article:

How The Federal Government's AI Center Of Excellence Is Impacting Government-Wide Adoption Of AI - Forbes

Microsoft Uses AI to Make Our Eyes Look at the Webcam – PCMag

It doesn't matter where your webcam is positioned; it's always going to be offset from the person you're talking to. The end result is that we hardly ever look people in the eye when talking to them over video chat. Microsoft has created an intelligent solution, though.

As spotted by Liliputing, the latest Windows 10 Insider Preview Build (20175) announcement contains details of a new feature Microsoft is calling "Eye Contact." It uses artificial intelligence to "adjust your gaze on video calls so you appear to be looking directly in the camera." So there's no need to remember to look at the webcam instead of the person on your screen, which no one ever does as it's simply not natural.

The one drawback of Eye Contact, at least for now, is that it only works on the Surface Pro X. That's because the Pro X contains Microsoft's SQ1 ARM processor, which the company developed in partnership with Qualcomm and which includes the "artificial intelligence capabilities" required for the gaze adjustment to work, according to Microsoft. If you do own a Pro X and have access to this latest Windows preview build, the feature can be turned on via the Surface app. After that, it should work with any video app that uses the webcam.

Considering Intel's x86 processors are capable of handling AI-intensive video games, it seems likely Microsoft will eventually expand the Eye Contact feature to other models of its Surface range and hopefully to Windows 10 in general. After that, we can all look at the people we are talking to on video chat and know that they are also seeing us looking directly at them, albeit thanks to AI.

Read this article:

Microsoft Uses AI to Make Our Eyes Look at the Webcam - PCMag

World’s First AI Therapist, SARAH, is Born. – PRNewswire

NEW YORK, July 14, 2020 /PRNewswire/ -- In times of uncertainty, fear, anxiety, and stress are natural responses to real or perceived threats. Faced with the new realities of today's environment, an idea was born: SARAH the AI Therapist, which users can engage 24/7 to seek comfort and advice, and which aims to inspire confidence by positively influencing their lives, particularly when they are suffering from psychological and mental health issues. Unlike simple chatbots, SARAH's artificial intelligence allows it to become personal with its users, improving its responsiveness as time progresses. Unique features include the ability to recall previous conversations, detect emotions by sound vibration, and calm users down by providing solutions. By listening and understanding, SARAH can provide counseling therapy and optimize users' schedules through a reminder and tracking feature.

SARAH can help individuals, couples, and families within multiple fields of therapy, from depression to PTSD and from couples counseling to children's psychology. Moreover, SARAH can advise users on ways they can improve their relationship with themselves, as well as offer pointers on how they can finally rid themselves of negative habits and replace them with positive ones.

Mental health is a serious issue, and with ever-increasing pressure on our daily lives, symptoms and signs of depression, anxiety, and fear are not going away. Since the beginning of the COVID-19 pandemic and the isolation efforts that started in March, people everywhere have been reporting increasing pressure on their mental health. Whether they are now out of employment or fear losing their job, people are struggling to piece together money to pay bills. The continuous onslaught of worry and insecurity is leading to greater psychological issues than ever before.

There is so much SARAH can contribute, and founder Alireza Dehghan is very excited about this. "This is an opportunity for all of us to make a difference by joining a cause and contributing back to society."

SARAH has started a crowdfunding campaign on Indiegogo, and funds will be used to advance Dehghan's dream of providing affordable on-demand therapy to all those in need. "In an ideal world, everyone should have easy, affordable access to mental health care professionals and be able to properly treat their maladies," states Dehghan.

Subscriptions are available at a 65% discount, and sponsors can help by donating a subscription to a mentally ill person or to families struggling with the repercussions of COVID-19.

Media Contact

Alireza Dehghan
Phone: +1 (917)-936-2030
Email: [emailprotected]

Related Images

sarah.png: SARAH, the world's first AI therapist

Related Links

Indiegogo campaign

Instagram

SOURCE SARAH AI

Originally posted here:

World's First AI Therapist, SARAH, is Born. - PRNewswire

Amazon Is Humiliating Google & Apple In The AI Wars – Forbes


Amazon's strategy to make Alexa available absolutely everywhere, on every device ever created, will give it the advantage it needs in the upcoming AI wars. The news that Amazon is making its AI technology available in the UK to third-party developers (it ...

The rest is here:

Amazon Is Humiliating Google & Apple In The AI Wars - Forbes

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat

Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed its unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even if you have poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how the AI minimized distracting background noise during a call.

We've all been there. How many times have you asked someone to mute themselves or to relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips (as you can see in the video above), and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.

The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. Additionally, how Microsoft built the feature is also instructive to other companies tapping machine learning.

Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.
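
That traditional approach can be sketched in a few lines: estimate the noise spectrum from a presumed speech pause, then subtract it from every frame. The sketch below is illustrative only; the frame size, hop, and the assumption that the first half-second is a pause are choices made for the example, not details of how Teams implements it.

```python
# Illustrative spectral subtraction, the classic stationary-noise method:
# estimate a noise floor from a presumed speech pause, subtract it from
# the magnitude spectrum of every frame, and resynthesize via overlap-add.
import numpy as np

def spectral_subtract(signal, sr, pause_seconds=0.5, frame=512, hop=256):
    window = np.hanning(frame)

    # Estimate the average noise magnitude spectrum from the leading pause.
    pause = signal[: int(sr * pause_seconds)]
    noise_frames = [
        np.abs(np.fft.rfft(window * pause[i : i + frame]))
        for i in range(0, len(pause) - frame, hop)
    ]
    noise_floor = np.mean(noise_frames, axis=0)

    out = np.zeros_like(signal)
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(window * signal[i : i + frame])
        mag = np.maximum(np.abs(spec) - noise_floor, 0.0)  # subtract floor
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
        out[i : i + frame] += window * clean  # overlap-add resynthesis
    return out
```

This fails in exactly the way the article describes: when the noise changes over time (a dog barking, a door shutting), the fixed noise floor no longer matches, which is the gap machine learning fills.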

Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."

In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing.

Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of a speech signal, unwanted noise appears both in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.

To get his points across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.

"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."

For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding a large enough data set (in this case, hundreds of hours of data), Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."

Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.

We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.

They used audiobooks for representing male and female voices, since speech characteristics do differ between male and female voices. They used YouTube data sets with labeled data that specify that a recording includes, say, typing and music. Aichner's team then combined the speech data and noise data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
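
A synthesizer script of the sort described might look like the sketch below. The function name and the assumption of float waveforms at a shared sample rate are illustrative, not Microsoft's actual implementation.

```python
# Sketch of mixing clean speech with a noise clip at a target
# signal-to-noise ratio, producing (noisy, clean) training pairs.
# Inputs are assumed to be float waveforms at the same sample rate.
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Return clean + noise, with noise scaled to hit the target SNR."""
    # Tile or trim the noise clip to match the speech length.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]

    speech_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # Choose scale so 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Each training example pairs the mixture (model input) with the original
# clean speech (ground truth), e.g.:
# noisy = mix_at_snr(clean, noise, snr_db=5.0)
```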

But audiobooks are drastically different from conference calls. Would that not affect the model, and thus the noise suppression?

"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."

Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"

So the team couldn't use Microsoft Teams calls. Even if it could (say, if some Microsoft employees opted in to have their meetings recorded), someone would still have to mark down when exactly distracting noises occurred.

"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then we see, if we use a certain training set, how well does that do on the test set? So ideally yes, I would love to have a training set which is all Teams recordings and has all types of noises people are listening to. It's just that I can't easily get the same number, the same volume of data that I can by grabbing some other open source data set."

I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?

"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially in the future see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."

The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning. That required a lot of compute. Aichner's team was, of course, relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.

"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10- or 20-millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."

For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.
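
The arithmetic behind that constraint is simple: at a 16 kHz sample rate, a 20-millisecond frame is 320 samples, and the model must finish each frame in well under 20 milliseconds or the audio falls behind. The timing harness below is a rough sketch with a placeholder model; the sample rate and frame size are illustrative assumptions.

```python
# Rough check of whether per-frame processing fits a real-time budget.
# process_frame is a stand-in for the actual noise-suppression model.
import time
import numpy as np

SAMPLE_RATE = 16_000
FRAME_MS = 20
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000  # 320 samples per frame

def process_frame(frame):
    # Placeholder for the model's per-frame inference.
    return frame * 0.9

frame = np.zeros(FRAME_SAMPLES, dtype=np.float32)
runs = 500
start = time.perf_counter()
for _ in range(runs):
    process_frame(frame)
avg_ms = (time.perf_counter() - start) / runs * 1000
print(f"avg per-frame: {avg_ms:.3f} ms (budget: {FRAME_MS} ms)")
```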

There's another reason why the machine learning model should live on the edge rather than in the cloud. Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.

"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.

I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."

It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.

"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise tests we are not doing as good as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."

The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."

"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future, that we could do even more, have different models."

All the above requires one final component: talent.

"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."

The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learners, could they pull this off?

"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."

Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is, like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."

Read the rest here:

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls - VentureBeat

Smart Grid Security Will Get Boost from AI and 5G – IoT World Today

The energy grid is poised for major change through such technologies as AI and 5G. But with advancements come new cybersecurity challenges.

Key takeaways from this article:

For the energy industry, securing the grid is mission-critical.

Increasingly, too, securing devices that lie beyond the centralized grid (at the edge, so to speak) is also critical, as well as a moving target. Zero-trust cybersecurity, 5G connectivity, and machine learning, though, may ultimately help this smart grid, as the connected energy grid is known, become more resilient in the face of attacks.

While the shift toward sustainable energy could help secure a better future for the planet and reduce its carbon footprint, the smart grid (fueled by connected things, microgrids, and so on) creates two-way, risky data flows that add complexity to an already antiquated energy grid.

"Smart grid technologies can balance peak demand, flatten the load curve and make energy generation sources more efficient," said Brian Crow, Sensus vice president of analytic solutions, in a recent article on the role of IoT in utilities.

Malicious attackers can exploit these two-way flows.

"These devices at the edge have the potential to impact grid reliability," said Christine Hertzog, principal technical leader at the Electric Power Research Institute. Malicious actors can target the grid and have the ability to change the load in dramatic ways, she said, and "you could then see some issues with grid reliability."

Distributed Energy, Smart Grids Accelerate

New energy sources and distribution methods, including solar panels, generators, and microgrids, show promise in curbing climate change and helping consumers take greater control of energy consumption during peak usage times.

Smart grid technologies decentralize energy delivery, enabling people to quickly connect to and disconnect from the larger grid and to generate and deliver electricity locally. Unlike today's massive, centralized grid, an attack on or disruption of a microgrid, for example, doesn't affect the entire system. That's important for areas like California, where wildfires can prompt spontaneous grid shutdowns.

But smart grids also create erratic demand on the larger grid and present two-way traffic to that grid, posing security risks. And these risks are amplified by aging energy grid infrastructure.

Decentralized energy production, components of which are used in smart grids, is growing. The International Energy Agency expects renewable energy capacity to increase by 50% through 2024, with solar photovoltaics and onshore wind making up the lion's share of that increase.

"The whole world is moving more toward that paradigm of relying on on-site power, whether it's solar, a backup generator or another device," said Peter Asmus, principal research analyst at Navigant Research.

"The world is shifting from large centralized resources to look more like telecom," Asmus said. He noted that while some deployments have slowed down because of coronavirus, he anticipates a greater acceleration of decentralized energy sources over the next couple of years.

Grid Edge Brings Complexity to Already Antiquated Energy Grid

The traditional energy grid itself lags these modern developments. According to the U.S. Energy Department, 70% of the grid's transmission lines and power transformers are more than 25 years old, and the average power plant is more than 30 years old. Parts of the U.S. grid network are more than a century old.

Technologies such as Internet of Things (IoT) devices, edge computing architecture, and machine learning will modernize the grid. Examples include IoT-enabled backup generators that provide additional power to a home, electric vehicle charging stations, and connected thermostats. These kinds of technologies are rapidly becoming extensions of the traditional grid.

According to the Internet of Things in Energy Market report, the global market for IoT in energy is expected to grow from $20.2 billion in 2020 to $35.2 billion by 2025, with a compound annual growth rate of 11.8% during the forecast period.
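
As a quick sanity check, those figures are internally consistent: compounding $20.2 billion at 11.8% per year over the five-year forecast period gives

$$20.2 \times (1.118)^{5} \approx 20.2 \times 1.747 \approx 35.3,$$

in line with the projected $35.2 billion.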

Just as connected devices are part of this equation, edge architecture is as well.

Edge computing architecture brings compute and data closer to the devices and users that need them, improving response times and reducing bandwidth needs. Myriad devices have emerged that reside at the edge rather than in the cloud, which would otherwise require a round trip from device to cloud and back, increasing bandwidth requirements, slowing responses, and potentially posing security risks.

"This is what we would call grid edge, and it's a paradigm shift," Hertzog said. "We used to consider cybersecurity like the fort concept: You have a perimeter. But when you're talking about the edge of the grid and cloud-based apps, you're blowing up that concept."

Grid edge architecture adds risk and complexity to the grid. Edge devices may not be patched and updated frequently, may have less rigorous authentication protocols applied, may share a network with other key IT systems and become a target for infiltration, or may house poorly written code that is easy to penetrate and is thus a target for malicious attackers.

These kinds of security risks have been amplified as utilities turn to IoT for better grid management and as consumers take advantage of devices at the edge, such as connected energy meters and home-charging stations for electric cars. As a result, security breaches can now be bidirectional, enabling grids to be penetrated not only via their own networks but also via consumer devices connected to the grid.

To combat security issues, businesses are implementing private networks for IoT. In a recent Omdia survey on IoT adoption, 97% of respondents said that they had considered or are using private networks for IoT deployments to bolster security.

AI, Zero-Trust Cybersecurity

A potential antidote to these risks is the emergence of machine learning and AI-enabled tools to aid IT pros. Machine learning tools can identify threats among the vast number of alerts that IT pros may receive. AI-enabled cybersecurity tools are becoming key to edge security because humans simply can't keep up with all the information.

"On a massive scale, that data starts to go beyond what the human brain capacity is able to do," Hertzog said. "We're getting so much additional information through new tools and capability, but the ability to assimilate and make sense of that is going to be a big challenge."

Companies such as National Grid Partners have enlisted AI for cybersecurity monitoring and anticipate using automation for other tasks, such as predictive maintenance and customer service.

Hertzog said that AI is critical for validating identity at the edge, which requires a zero-trust cybersecurity strategy. The underlying principle of zero trust is to never trust and always verify.

Hertzog noted that this approach to cybersecurity requires intelligence at the edge to achieve that identity authentication. "We need distributed intelligence to deliver that zero trust down to the granular level," she said. "AI would be involved in looking at all the activity and seeing if there were any anomalies."

"We can take this data and inform our decision-making," Hertzog said. She emphasized that true AI for this use case may be in the distance, but automated monitoring is already in place.
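
As a deliberately simplified illustration of that automated monitoring, an edge device could flag telemetry readings that deviate sharply from recent history. Everything in this sketch, from the window size to the threshold, is a hypothetical choice for illustration, not a production zero-trust control.

```python
# Toy rolling anomaly detector for edge telemetry (e.g., load readings):
# flag any value far outside the recent mean, measured in standard deviations.
from collections import deque
import math

class RollingAnomalyDetector:
    def __init__(self, window=96, threshold=4.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if the new reading looks anomalous."""
        if len(self.readings) >= 8:  # need minimal history first
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9  # guard against zero variance
            if abs(value - mean) / std > self.threshold:
                return True  # flag, and don't absorb it into the baseline
        self.readings.append(value)
        return False

detector = RollingAnomalyDetector()
for load_kw in [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 95.0]:
    if detector.observe(load_kw):
        print(f"anomalous load reading: {load_kw} kW")
```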

Hertzog also noted, however, that automation in decision-making can only take place if the data derived is accurate, clean and ready to use.

"Garbage in, garbage out," Hertzog said. She noted that poor data quality is a compelling reason for utilities to put the work into data management for cybersecurity. Studies indicate that about 80% of the time spent on a project involving AI goes to getting the data into the right format, just getting it ready to be used for AI.

AI will also require greater speed and network slicing, which allows networks to be partitioned to provide different levels of access to the grid, enabling granular security policy setting. Such fine-grained policies are needed to protect these distributed networks.

Hertzog and others have noted that corollary technologies such as 5G connectivity, the new wireless standard, could bolster zero-trust security by providing the network bandwidth to enable the speed and data intensiveness needed for intelligent activity at the edge.

"5G is a game changer," Hertzog said. "It will enable this concept of slicing networks and the ability to define security policies more granularly. That has some ramifications for zero trust."

At the same time, Hertzog said, while 5G will bolster smart grid security, the infrastructure required isn't coming tomorrow. "It will take a decade to roll out," she said.

Follow this link:

Smart Grid Security Will Get Boost from AI and 5G - IoT World Today

Facebook AI chief: We can give machines common sense – ZDNet

Charlie Osborne | ZDNet

Neural networking could pave the way for AI systems to be given a capability which we have, until now, considered a human trait: the possession of common sense.

While some of us may have less of this than others, "common sense" -- albeit a vague concept -- is the general ability to make fair and good decisions in a complex environment, drawing on our own experience and an understanding of the world rather than relying on structured information -- something artificial intelligence has trouble with.

This kind of intuition is a human concept, but according to Facebook AI research group director Yann LeCun, leaps forward in neural networking and machine vision could one day lead to software with common sense.

Speaking to MIT's Technology Review, LeCun said there is still "progress to be made" when it comes to neural networking, which is required for machine vision.

Neural networks are artificial systems that mimic the structure of the human brain, and by combining them with more advanced machine vision -- ways to pull data from imagery for use in tasks and decision-making -- LeCun says common sense will be the result.

For example, if you have a dominant object in an image, and enough data in object categories, machines can recognize specific objects like dogs, plants, or cars. However, some AI systems can now also recognize more abstract groupings, such as weddings, sunsets, and landscapes.
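
That dominant-object capability is what off-the-shelf classifiers deliver today. As a hedged sketch of the idea, a pretrained network can label the main object in a photo; the model choice and file name below are illustrative assumptions.

```python
# Sketch: classify the dominant object in an image with a pretrained
# network. Assumes torchvision is installed and "dog.jpg" exists.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("dog.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)
print(f"predicted ImageNet class index: {probs.argmax(dim=1).item()}")
```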

LeCun says that just five years ago, this wasn't possible, but as machines are granted vision, machine expertise is growing.

AI is still limited to the specific areas that humans train it in. You could show an AI system an image of a dog at a wedding, but unless the AI has seen one before and understands the context of the image, the response is likely to be what the executive calls "garbage." As such, these systems lack common sense.

Facebook wants to change this. LeCun says that while you can interact with an intelligent system through language to recognize objects, "language is a very low-bandwidth channel" -- and humans have a wealth of background knowledge which helps them interpret language, something machines do not currently have the capability to draw on in real-time to make contextual connections in a way which mimics common sense.

One way to solve this problem could be through visual learning and media such as streamed images and video.

"If you tell a machine "This is a smartphone," "This is a steamroller," "There are certain things you can move by pushing and others you cannot," perhaps the machine will learn basic knowledge about how the world works," LeCun told the publication. "Kind of like how babies learn."

"One of the things we really want to do is get machines to acquire the very large number of facts that represent the constraints of the real world just by observing it through video or other channels," the executive added. "That's what would allow them to acquire common sense, in the end."

By giving intelligent machines the power to observe the world, contextual gaps will be filled, and AI may make a serious leap beyond programmed algorithms and set answers. One area Facebook wants to explore, for example, is the idea of AI systems being able to predict future events after being shown only a few frames of video.

"If we can train a system to do this we think we'll have developed techniques at the root of an unsupervised learning system," LeCun says. "That is where, in my opinion, a lot of interesting things are likely to happen. The applications for this are not necessarily in vision -- it's a big part of our effort in making progress in AI."

Follow this link:

Facebook AI chief: We can give machines common sense - ZDNet

5 findings that could spur imaging AI researchers to ‘avoid hype, diminish waste and protect patients’ – Health Imaging

5. Descriptive phrases suggesting at least comparable (or better) diagnostic performance of an algorithm relative to a clinician were found in most abstracts, despite the studies having overt limitations in design, reporting, transparency and risk of bias. Qualifying statements about the need for further prospective testing were rarely offered in study abstracts and weren't mentioned at all in some 23 studies that claimed superior performance to a clinician, the authors report. "Accepting that abstracts are usually word limited, even in the discussion sections of the main text, nearly two thirds of studies failed to make an explicit recommendation for further prospective studies or trials," the authors write. "Although it is clearly beyond the power of authors to control how the media and public interpret their findings, judicious and responsible use of language in studies and press releases that factor in the strength and quality of the evidence can help."

Expounding on the latter point in their concluding section, Nagendran et al. reiterate that using overpromising language in studies involving AI-human comparisons "might inadvertently mislead the media and the public, and potentially lead to the provision of inappropriate care that does not align with patients' best interests."

The development of a higher quality and more transparently reported evidence base moving forward, they add, will help to avoid hype, diminish research waste and protect patients.

The study is available in full for free.

See the rest here:

5 findings that could spur imaging AI researchers to 'avoid hype, diminish waste and protect patients' - Health Imaging

Artificial Intelligence (AI) Software Market Size By Product Analysis, Application, End-Users, Regional Outlook, Competitive Strategies And Forecast…

New Jersey, United States - The latest update of the Artificial Intelligence (AI) Software Market Analysis report has been published, with extensive market research, growth analysis, and a forecast through 2026. The report is highly predictive, as it holds the overall market analysis of the top companies in the Artificial Intelligence (AI) Software industry. With market research classified by growing region, the report provides leading players' portfolios along with their sales, growth, market share, and so on.

According to the research report, the Artificial Intelligence (AI) Software market is predicted to accrue significant remuneration by the end of the forecast period. The report includes parameters of the Artificial Intelligence (AI) Software market dynamics, incorporating the varied driving forces affecting the commercialization graph of this business vertical and the risks prevailing in the sphere. It also speaks to growth opportunities in the Artificial Intelligence (AI) Software market.

The Artificial Intelligence (AI) Software Market Report covers manufacturers' data, including shipments, price, revenue, gross profit, interview records, business distribution, etc.; these data help consumers understand the competitors better. The report also covers all regions and countries of the world, showing regional development status, including Artificial Intelligence (AI) Software market size, volume and value, as well as price data.

Artificial Intelligence (AI) Software Market competition by top Manufacturers:

Artificial Intelligence (AI) Software Market Classification by Types:

Artificial Intelligence (AI) Software Market Size by End-user Application:

Listing a few pointers from the report:

The objective of the Artificial Intelligence (AI) Software Market Report:

Cataloging the competitive terrain of the Artificial Intelligence (AI) Software market:

Unveiling the geographical penetration of the Artificial Intelligence (AI) Software market:

The report on the Artificial Intelligence (AI) Software market is an in-depth analysis of a business vertical projected to record a commendable annual growth rate over the estimated time period. It also comprises a precise evaluation of the dynamics related to this marketplace. The purpose of the report is to provide important information related to industry deliverables such as market size, valuation forecast, sales volume, and so on.

Major highlights from the table of contents are listed below for a quick look into the Artificial Intelligence (AI) Software Market report.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080

See the original post:

Artificial Intelligence (AI) Software Market Size By Product Analysis, Application, End-Users, Regional Outlook, Competitive Strategies And Forecast...

The weird, frightening future of AI may not be what you think – CNET

A photo of the cover of Janelle Shane's book on AI: You Look Like A Thing And I Love You.

There are five principles of AI weirdness, according to author and researcher Janelle Shane. One of them is: "The danger of AI is not that it's too smart but it's not smart enough." But another is: "AI does not really understand the problem you want it to solve."

The world feels like absolute chaos right now. And often, AI feels like part of the problem. Algorithms determining how messages are delivered, and to which people. Facial recognition software that can be biased. The rise of deepfakes.

What we watch on streaming services is peppered with suggestions from AI. My photos are tuned and curated by AI. And every day, I'm writing emails that prompt me with suggestions to autofinish my sentences, as if my thoughts are being led by AI, too.

Janelle Shane is an AI researcher and long-time writer of the popular AI Weirdness blog. Her experiments are, well, weird. You've probably seen them: lists of Harry Potter-themed desserts. Escape room name generators. AI-generated cat portraits.

I bought her book, You Look Like a Thing and I Love You, because even though I cover tech, after all these years I still don't feel like I understand AI. If you feel that way, Shane's book is a primer and a guide full of real observations from her experiments training neural nets. There are fun cartoons, too. It's a way to grasp the madness of our AI world, and to realize that the weirdness has rules. Grasping its underpinnings and its mistakes feels essential now more than ever.

I'd been thinking of connecting with Shane before everything happened to disrupt 2020, but we spoke over Zoom a few weeks ago to discuss weird AI and her book, and some thoughts of where AI could lead things next. I'm particularly interested in the idea of AI as a collaborative tool, for better and for worse.

Our conversation, recorded before George Floyd's death and the protests that have followed, is embedded above. And if you're looking for a great starter book to reflect on AI in a time that's already impossibly strange, to find another way to reflect on 2020, Shane's work could be a place to start.

Read more: CNET Book Club interviews great tech and sci-fi authors

Here is the original post:

The weird, frightening future of AI may not be what you think - CNET