The latest challenge to Google’s AI dominance comes from an unlikely place — Firefox – CNBC

Mozilla, the company behind the Firefox internet browser, has begun testing a feature that lets you enter a search query using your voice instead of typing it in. The move could help Mozilla's efforts to make Firefox more competitive with Google Chrome.

If you're using Firefox in English on Mac, Windows or Linux, you can turn on the experimental "Voice Fill" feature and then use it on Google, Yahoo and DuckDuckGo. Support for other websites will come later.

Alphabet's Google offers speech recognition on its search engine when accessed through Chrome on desktop -- it became available in 2013 -- and Yahoo, Microsoft's Bing and Google all let you run search queries with your voice on mobile devices. But searching with your voice on Google while using Firefox on the desktop, for example, has historically been impossible. Now Mozilla wants to make its desktop browser more competitive.

The Voice Fill feature comes a few weeks after Mozilla announced the Common Voice Project, which allows people to "donate" recordings of themselves saying various things in order to build up "an open-source voice recognition engine" that anyone will be able to use. Mozilla will use recordings from Voice Fill and the Common Voice Project to make the speech recognition more accurate, speech engineer Andre Natal told CNBC in an interview.

Mozilla's latest efforts follow Facebook's push into speech recognition. And speech technology has become hotter thanks to the rise of "smart" speakers like the Amazon Echo, the Google Home, and the Apple HomePod. Harman Kardon is now building a speaker that will let people interact with Microsoft's Cortana assistant.

But these big technology companies have collected considerable amounts of proprietary voice data. So while they zig, Mozilla will zag. Mozilla will release to the public its voice snippets from the Common Voice Project later this year. The speech recognition models will be free for others to use as well, and eventually there will be a service for developers to weave into their own apps, Natal said.

"There's no option for both users and developers to use -- something that is both concerned about your privacy and also affordable," Natal said.

That said, Mozilla is following along with the rest of the tech crowd in the sense that the underlying system -- a fork of the Kaldi open-source software -- employs artificial neural networks, a decades-old but currently trendy architecture for training machines to do things like recognize the words that people say.

Mozilla initially explored incorporating speech recognition into the assistant for its Firefox OS for phones, but in 2016 it shifted the OS focus to connected devices, and earlier this year Mozilla closed up the connected devices group altogether.

Today Mozilla has five people working on speech research and a total of about 30 people working on speech technology overall, Natal said. Eventually the team wants to make the technology work in languages other than English.

Mozilla introduced the browser that became Firefox back in 2002. Over the years the nonprofit Mozilla Foundation has received financial support from Google and Yahoo. Mozilla CEO Chris Beard is currently focused on trying to get people to care about the company again, as CNET's Stephen Shankland reported this week. Recent moves include the launch of the Firefox Focus mobile browser and the acquisition of read-it-later app Pocket.

But while Firefox has roughly 300 million monthly active users, Chrome has more than 1 billion.


Recent AI Developments Offer a Glimpse of the Future of Drug Discovery – Tech Times

The science and practice of medicine have been around for much of recorded human history. Even today, doctors still swear an oath that dates back to ancient Greece, containing many of the ethical obligations we still expect our physicians to adhere to. It is one of the most necessary and universal fields of human study.

Despite the importance of medicine, though, true breakthroughs don't come easily. In fact, most medical professionals will only see a few within their lifetime. Developments such as the first medical X-ray, penicillin, and stem cell therapy -- true game changers that advance the cause of medical care -- don't happen often.

That's especially true when it comes to the development of medications. It takes a great deal of research and testing to find compounds that have medicinal benefits. Armies of scientists armed with microplate readers to measure absorbance, centrifuges for sample separation, and hematology analyzers to test compound efficacy make up just the beginnings of the long and labor-intensive process. It's why regulators tend to approve around 22 new drugs per year for public use, leaving many afflicted patients waiting for cures that may come too late.

Now, however, some recent advances in AI technology are promising to speed that process up. It could be the beginnings of a new medical technology breakthrough on the same order of magnitude as the ones mentioned earlier. Here's what's going on.

One of the reasons that it takes so long to develop new drug therapies, even for diseases that have been around for decades, is that much of the process relies on humans screening different molecule types to find ones likely to have an effect on the disease in question. Much of that work calls for lengthy chemical property analysis, followed by structured experimentation. On average, all of that work takes between three and six years to complete.

Recently, researchers have begun to adapt next-generation AI implementations for molecule screening that could cut that time down significantly. In one test, a startup called Insilico Medicine matched its AI platform against the already-completed work of human researchers seeking treatment options for fibrosis. It had taken the humans eight years to come up with viable candidate molecules. It took the AI just 21 days. Although further refinements are required to put the AI on par with the human researchers in terms of result quality (the AI candidates performed a bit worse in treating fibrosis), the results were overwhelmingly positive.

Another major time-consuming hurdle that drug developers face is in trying to detect adverse side effects or toxicity in their new compounds. It's difficult because such effects don't always surface in clinical trials. Some take years to show up, long after scores of patients have already suffered from them. To avoid those outcomes, pharmaceutical firms take lots of time to study similar compounds that already have reams of human interaction data, looking for patterns that could indicate a problem.

It's yet another part of the process that AI is proving adept at. AI systems can analyze vast amounts of data about known compounds to generate predictions about how a new molecule may behave. They can also model interactions between a new compound and different physical and chemical environments. That can provide clues to how a new drug might affect different parts of a human body. Best of all, AI can accomplish those tasks with more accuracy and in a fraction of the time it would take a human research team.

Even at this early stage of the development of drug discovery AI systems, there's every reason to believe that AI-developed drugs will be on the market in the very near future. In fact, there's already an AI-designed drug intended to treat obsessive-compulsive disorder (OCD) entering human trials in Japan. If successful, it will then proceed to worldwide testing and eventual regulatory approval processes in multiple countries.

It's worth noting that the drug in question took a mere 12 months for the AI to create, which would represent a revolution in the way we develop new disease treatments. With that as a baseline, it's easy to foresee drug development and testing cycles in the future reduced to weeks, not years. It's also easy to predict the advent of personalized drug development, with AI selecting and creating individualized treatments using patient physiological and genetic data. Such outcomes would render the medical field unrecognizable compared to today -- and could create a disease-free future and a new human renaissance like nothing that's come before it.

© 2018 TECHTIMES.com. All rights reserved. Do not reproduce without permission.


DeepMind’s neural network teaches AI to reason about the world – New Scientist

Size isn't everything; relationships are


By Matt Reynolds

The world is a confusing place, especially for an AI. But a neural network developed by UK artificial intelligence firm DeepMind that gives computers the ability to understand how different objects are related to each other could help bring it into focus.

Humans use this type of inference, called relational reasoning, all the time, whether we are choosing the best bunch of bananas at the supermarket or piecing together evidence from a crime scene. The ability to transfer abstract relations -- such as whether something is to the left of another object or bigger than it -- from one domain to another gives us a powerful mental toolset with which to understand the world. "It is a fundamental part of our intelligence," says Sam Gershman, a computational neuroscientist at Harvard University.

What's intuitive for humans is very difficult for machines to grasp, however. It is one thing for an AI to learn how to perform a specific task, such as recognising what is in an image. But transferring know-how learned via image recognition to textual analysis or any other reasoning task is a big challenge. Machines capable of such versatility will be one step closer to general intelligence, the kind of smarts that lets humans excel at many different activities.

DeepMind has built a neural network that specialises in this kind of abstract reasoning and can be plugged into other neural nets to give them a relational-reasoning power-up. The researchers trained the AI using images depicting three-dimensional shapes of different sizes and colours. It analysed pairs of objects in the images and tried to work out the relationship between them.

The team then asked it questions such as "What size is the cylinder that is left of the brown metal thing that is left of the big sphere?" The system answered these questions correctly 95.5 per cent of the time -- slightly better than humans. To demonstrate its versatility, the relational reasoning part of the AI then had to answer questions about a set of very short stories, answering correctly 95 per cent of the time.
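The pairwise design described here matches DeepMind's published Relation Network module: one shared function g scores every ordered pair of object representations, the scores are summed, and a second function f maps the sum to an answer. A minimal numerical sketch of that idea follows; the weights are random stand-ins and the object vectors are toy data, not DeepMind's trained model.

```python
import numpy as np

# Sketch of a Relation Network's core computation:
#   answer = f( sum over all pairs (i, j) of g(o_i, o_j) )
# Weights here are random placeholders, not trained parameters.

rng = np.random.default_rng(0)

def g(pair, w_g):
    # Shared "relation" function applied to a concatenated object pair.
    return np.maximum(w_g @ pair, 0.0)   # one ReLU layer

def f(summed, w_f):
    # Maps the aggregated relation vector to scores over candidate answers.
    return w_f @ summed

objects = rng.normal(size=(4, 8))   # 4 scene objects, 8-dim features each
w_g = rng.normal(size=(16, 16))     # g sees two concatenated 8-dim objects
w_f = rng.normal(size=(10, 16))     # 10 candidate answers

relations = sum(
    g(np.concatenate([objects[i], objects[j]]), w_g)
    for i in range(len(objects))
    for j in range(len(objects))
)
scores = f(relations, w_f)
answer = int(np.argmax(scores))
```

Because g is shared across all pairs, the module's parameter count is independent of the number of objects, which is what lets the same relational core be "plugged into" different host networks.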

Still, any practical applications of the system are a long way off, says Adam Santoro at DeepMind, who led the study. It could initially be useful for computer vision, however. "You can imagine an application that automatically describes what is happening in a particular image, or even video, for a visually impaired person," he says.

Outperforming humans at a niche task is also not that surprising, says Gershman. "We are still a very long way from machines that can make sense of the messiness of the real world." Santoro agrees. DeepMind's AI has made a start by understanding differences in size, colour and shape, but there's more to relational reasoning than that. "There is a lot of work needed to solve richer real-world data sets," says Santoro.


Byonic.ai Redefines the Future of Digital Marketing – inForney.com

FRISCO, Texas, Aug. 23, 2021 /PRNewswire-PRWeb/ --The next generation of AI- and ML-powered marketing is coming soon. Byonic.ai is the first-of-its-kind end-to-end platform for personalized lead insights, creative content, account intelligence, intent-based data, account-based marketing, and marketing automation. It allows data-driven teams to align their marketing, product, and customer success goals with revenue growth and sales.

Byonic.ai uses an extensive database that identifies the purchasing intent and habits of in-market prospects at various points in the sales and marketing cycles. AI capabilities target the right people at the right time, providing users with unparalleled real-time engagement opportunities that help turn prospects into well-qualified customers.

The platform uses predictive and actionable insights to discover the highest-quality leads for more successful marketing and sales outcomes. Users can measure campaign success with extensive reports and analysis. The end-to-end repeatable process embedded within Byonic.ai allows users to: discover, build, target, deliver, analyze, engage, and convert.

Account intelligence finally meets artificial intelligence.

How Byonic.ai Works

Byonic.ai will revolutionize digital marketing for B2B marketing and sales professionals, who can use the platform in several ways.

"Most platforms weren't built as a one-stop-shop for all your marketing campaign needs," says Snehhil Gupta, Chief Technology Officer, at Bython Media, creators of Byonic.ai. "Now, you get a full suite of end-to-end capabilities that include account intelligence, lead insights, marketing automation, and creative content, powered by AI/ML and wrapped in one simple and intuitive platform to run smarter campaigns."

Byonic.AI will launch in Fall 2021. Marketing and demand generation professionals can sign up for an early demo on the company's website, http://www.Byonic.AI.

Media Contact

Bython Media, Bython Media, +1 (214) 295-7729, dw@bython.com


SOURCE Bython Media


Google will shut down its AI-guided Photos printing service on June 30th – Engadget

Google's automated Photos printing service wasn't long for this world -- at least, not in its first incarnation. Droid Life has learned (via The Verge) that Google is shutting down the AI-guided trial service on June 30th. It didn't say what prompted the closure in a notice to members, but it said it hoped to evolve this feature and make it more widely available. This isn't the end, then, even if the service is likely to change.

The $8 per month trial had AI pick your 10 best pictures (prioritized by faces, landscapes or a mix) and print them on 4x6 cardstock, with edits if you preferred. They were meant to be gifts, or just fond memories if you wanted more than just digital copies. Google didn't have the best timing, however. The service became public knowledge in February, just a month before much of the world entered pandemic lockdowns -- it's hard to justify spending money on a photo service when you can't socialize or travel. If there is a follow-up service, it might have to wait.


3 ways COVID-19 is transforming advanced analytics and AI – World Economic Forum

While the impact of AI on COVID-19 has been widely reported in the press, the impact of COVID-19 on AI has not received much attention. Three key impact areas have helped shape the use of AI in the past five months and will continue to transform advanced analytics and AI in the months and years to come.

Responding to the COVID-19 pandemic requires global cooperation among governments, international organizations and the business community, which is at the centre of the World Economic Forum's mission as the International Organization for Public-Private Cooperation.

Since its launch on 11 March, the Forum's COVID Action Platform has brought together 1,667 stakeholders from 1,106 businesses and organizations to mitigate the risk and impact of the unprecedented global health emergency that is COVID-19.

The platform is created with the support of the World Health Organization and is open to all businesses and industry groups, as well as other stakeholders, aiming to integrate and inform joint action.

As an organization, the Forum has a track record of supporting efforts to contain epidemics. In 2017, at our Annual Meeting, the Coalition for Epidemic Preparedness Innovations (CEPI) was launched, bringing together experts from government, business, health, academia and civil society to accelerate the development of vaccines. CEPI is currently supporting the race to develop a vaccine against this strain of the coronavirus.

The speed of decision-making leads to agile data science

The spread of the pandemic -- first in China and South Korea, and then in Europe and the United States -- was swift and caught most governments, companies and citizens off-guard. This global health crisis developed into an economic crisis and a supply chain crisis within weeks (demand for toilet paper and paper towels alone rose by 600-750% during the week of March 8 in the US). Fewer than 100,000 global confirmed cases in early March ballooned to more than 13 million by July, with more than 580,000 deaths.

With business leaders needing to act quickly, the crisis provided a chance for advanced analytics and AI-based techniques to augment decision-making. While machine learning models were a natural aid, development time for machine learning models or advanced analytical models typically clocks in at four to eight weeks -- and that's after there is a clear understanding of the scope of the use case, as well as the necessary data to train, validate and test the models. If you add use-case evaluation before model development, and model deployment after the model has been trained, you are looking at three to four months from initial conception to production deployment.

"With business leaders needing to act quickly, the crisis provided a chance for advanced analytics and AI-based techniques to augment decision-making."

For solutions in days -- not weeks or months -- minimum viable AI models (MVAIMs) had to be developed in much shorter time frames. Using agile data science methodologies, PwC was able to compress these times significantly, building a SEIRD (Susceptible-Exposed-Infected-Recovered-Death) model of COVID-19 progression for all 50 US states in one week. We then tested, validated and deployed it in another week. Once this initial model was deployed, we extended it to all counties in the US and made the model more sophisticated.
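A compartmental SEIRD model like the one described tracks flows between the five populations with a handful of rate parameters. A minimal sketch of the mechanics, using forward-Euler integration and illustrative parameters rather than PwC's calibrated values:

```python
# Minimal SEIRD (Susceptible-Exposed-Infected-Recovered-Death) sketch.
# Rate parameters below are illustrative placeholders, not fitted values.

def seird_step(s, e, i, r, d, beta=0.3, sigma=0.2, gamma=0.1, mu=0.01, dt=1.0):
    """Advance the five compartments by one time step (forward Euler)."""
    n = s + e + i + r + d
    new_exposed = beta * s * i / n    # susceptible -> exposed (contact rate)
    new_infected = sigma * e          # exposed -> infectious (incubation)
    new_recovered = gamma * i         # infectious -> recovered
    new_deaths = mu * i               # infectious -> dead
    s -= dt * new_exposed
    e += dt * (new_exposed - new_infected)
    i += dt * (new_infected - new_recovered - new_deaths)
    r += dt * new_recovered
    d += dt * new_deaths
    return s, e, i, r, d

# Simulate 90 days for a population of 1 million with 100 initial cases.
state = (999_900.0, 0.0, 100.0, 0.0, 0.0)
for _ in range(90):
    state = seird_step(*state)
```

Each outflow from one compartment is an inflow to another, so the total population is conserved at every step; fitting beta, sigma, gamma and mu to observed case and death counts is what the one-week build-and-validate cycle described above would involve.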

Uncertainty about the future leads to multi-agent simulations

Uncertainty touched every aspect of life under COVID-19 -- from health, to behavior, to economic impact -- and expedited the increased adoption of advanced analytics and AI techniques. Uncertainty feeds emotional reactions such as fear, anger and frustration, and such emotionally driven behavior took precedence over rational decisions and actions, especially in the early days of the pandemic.

Uncertainty along the different dimensions made scenario planning the dominant framework for evaluating plans and decisions. Scenario analysis became the predominant paradigm for evaluating disease progression, economic downturn and recovery (e.g., V-, U-, L- and W-shaped economic recoveries), as well as for management decision-making on site openings, contingency planning, demand sensing, supply chain disruptions and workforce planning. While qualitative scenario analysis is quite common in the business world, using AI-based simulations to quantitatively understand the causal linkages of different drivers and develop contingent plans of action was brought to the fore by the pandemic.

Modeling human behavior (rational and emotional) became an important aspect of the scenario analysis. For example, compliance with the stay-at-home orders was one of the primary behavioral drivers of both disease containment and economic activity. As a result, agent-based modeling and simulation was one of the primary advanced analytics and AI techniques used to perform scenario analysis. Daily mobility data on how many miles were driven within each zip code in the country became a proxy for the effectiveness of the stay-at-home orders. The same data was used to model the mobility behavior of people in different parts of the US as the pandemic progressed. Agent-based models were one of the best techniques to capture the time- and location-dependent variations in human behavior during the pandemic.
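The agent-based approach described above boils down to individual agents whose movement depends on compliance, with infection spreading among agents that share a location. A toy sketch of that structure; the compliance rate, cell count, and infection probability are made-up illustrative values:

```python
import random

# Toy agent-based mobility/contagion model. All parameters are
# illustrative; a real model would be calibrated to mobility data.

random.seed(0)
CELLS = 10  # stand-in for zip-code areas

class Agent:
    def __init__(self, complies):
        self.complies = complies                 # obeys stay-at-home order
        self.infected = False
        self.location = random.randrange(CELLS)

    def move(self):
        # Compliant agents stay put; non-compliant agents roam.
        if not self.complies:
            self.location = random.randrange(CELLS)

def step(agents, p_infect=0.3):
    for a in agents:
        a.move()
    # Infection spreads among agents sharing a cell.
    for cell in range(CELLS):
        here = [a for a in agents if a.location == cell]
        if any(a.infected for a in here):
            for a in here:
                if not a.infected and random.random() < p_infect:
                    a.infected = True

agents = [Agent(complies=random.random() < 0.7) for _ in range(200)]
agents[0].infected = True
for _ in range(30):
    step(agents)
infected = sum(a.infected for a in agents)
```

Sweeping the compliance rate and re-running the simulation is the scenario-analysis loop: each parameter setting yields one plausible trajectory for disease spread and mobility.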

System dynamics modeling, another well-known modeling technique, was critical in integrating multiple decision-making domains (e.g., COVID-19 disease progression, government interventions, people's behavior, demand sensing, supply disruptions, etc.). Agent-based simulation has traditionally been used by the Centers for Disease Control and Prevention (CDC) and other health authorities to model disease progression and health behaviors. Both methods have been used successfully in a number of uncertain scenarios to help make strategic and operational management decisions.

Lack of historical data leads to upsurge of model-based AI

Given the rarity of the pandemic event, there was very little historical data at a global level on the disease. As a result, there was little data to power the data-rich, model-free approaches to AI, such as deep learning, that have become popular in recent years.

By necessity, model-based AI (which leverages the data available) saw a resurgence. As the pandemic progressed and more data became available, data-rich and model-free approaches could be combined into hybrid solutions.

In many ways, the pandemic has highlighted the inadequacies of our systems, processes, governance and behaviors. On the other hand, it has also provided an opportunity for data scientists and AI scientists to put their advanced techniques and tools to use by helping business leaders make decisions in a challenging environment that's dominated by speed, uncertainty and lack of data.

In summary, as organizations manage through this pandemic and transform themselves post-pandemic, three key learnings are worth keeping in mind. First, focus on agile data science methods that address the speed, urgency, and uncertainty of decision-making. Second, build and manage your business using dynamic and resilient models (e.g., scenario-based simulations using system dynamics and agent-based models) that capture the inter-relationships of multiple domains (e.g., demand, production, supply, finance) and human behavior. Third, combine model-rich and data-rich approaches to obtain the best of both worlds while building AI systems. These approaches can ensure you can seek solutions quickly while maximizing the technologies and processes already in place.


AI's Data Hunger Will Drive Intelligence Collection – Breaking Defense

A sensor analyst at work at Joint Space Operations Center, Vandenberg Air Force Base, California

WASHINGTON: Artificial intelligence has an insatiable appetite for data -- but if you feed it the wrong kind of data, it's going to choke. To get clean-enough data in large enough quantities for machine-learning algorithms to actually learn something from it, officials say the intelligence community needs to change how drones, satellites, and other sensors perform their missions every day.

"The turning point will be when we start seeing collection requirements for the production of training-quality datasets, versus the support of a tactical operation," said David Spirk, who became the Defense Department's Chief Data Officer in June and is now finalizing the DoD's new data strategy. "I don't know that we've entirely made that turn yet, but I think we're talking about it."

For example, the US has collected vast amounts of data on the Central Command theater, said Spirk, who served in Afghanistan himself as a Marine Corps intel specialist. But, he told the AFCEA AI+ML conference yesterday, that data collection was driven by urgent tactical needs, without a systematic approach to archiving it, curating it, and making it accessible for machine learning.

Predator drone over Afghanistan

That's understandable. Artificial intelligence in the modern sense was in its infancy on Sept. 11th, 2001, and the Pentagon did not systematically embrace AI until 2014, long after the peak of fighting in Afghanistan and Iraq. But it means that the military has vast archives of legacy data -- from drone video to maintenance records -- that are poorly catalogued, inconsistently formatted or otherwise too messy for a machine-learning algorithm to use without a massive and costly clean-up.

"Is that juice worth the squeeze?" asked Capt. Michael Kanaan, an Air Force intelligence officer who heads the USAF-MIT Artificial Intelligence Accelerator. In many cases, you could spend a lot of time and money cleaning up out-of-date, low-quality data that no longer reflects how your agency does analysis today. You get a much better return on investment, he told the conference, "by digitizing your [current] workflows and ensuring they produce training-quality data going forward."

It's crucial to set up a strong data management culture to govern your data collection from the beginning, said Terrence Busch, the Defense Intelligence Agency's technical director for the Machine-assisted Analytic Rapid-repository System (MARS).

It took DIA years of effort and a lot of back-end money to set up the processes, training, and technology required for data management, Busch told the conference. "It's not exciting work, [and] a lot of folks didn't want to invest in it," he said -- but now that system is in place, the new data that DIA collects is much more accessible for AI.

At the same time, another DIA official warned the conference, you don't want to clean up your data too much, because you might erase a seemingly irrelevant detail that turns out to be useful later on.

"We never want to throw it away, [because] we don't know if it's going to have value later," said Brian Drake, DIA's director of artificial intelligence. "The concept we are socializing inside of our agency is something we've done since World War Two, which is creating a gold copy of that data": a copy of the data as originally collected, with all its flaws, that's archived and kept unchanged in perpetuity for the benefit of future analysts.

"We have to have an honest conversation with our vendors on that point," Drake told the conference, "[because] we do find some data sets that come to us that have been pre-prepped and labeled, especially when it comes to imagery." While that cleaned-up data is often great for the immediate task at hand in the contract, he said, DIA needs the raw material as well.

Getting everyone from contracting officers to analysts thinking about AI-quality data is a long-term effort, Busch said. "Down in the workforce level, culture adaptation is slow," he said. "We've spent at least 10 years getting people acculturated to big data, getting used to automation."

That cultural revolution now needs to spread beyond the intelligence community. "Every single soldier, airman, sailor, Coast Guardsman is really a data officer in the future," said Greg Garcia, the Army's Chief Data Officer. "Every single individual, no matter what their specialty is, has to think about data."


For eBay, AI is ride or die – VentureBeat

"If you're not doing AI today, don't expect to be around in a few years," says Japjit Tulsi, VP of engineering at eBay. "It really is that important for companies to invest in -- especially commerce companies."

Tulsi will speak next week at MB 2017, July 11 and 12 in SF, MobileBeat's flagship event, where this year we've gathered more than 30 brands to talk about how AI is being applied in businesses today.

eBay is working to stay ahead of the curve now that machine learning and AI are growing in importance. It has focused on the potential of AI for the past ten years. The company's approach to AI has been built on a platform of research and development, Tulsi says, plus decades of insights and data about consumer behavior, making even the simplest applications incredibly valuable.

As an example, Tulsi points to the merchandizing strip at the bottom of every item page, which shows similar items that a shopper might be intrigued by, and often leads them down a positive rabbit hole of shopping and buying.

"It's machine learning and AI at the very simplest level, and we've seen a tremendous amount of return on investment on that," Tulsi says.

However, evolving that into more sophisticated personalization has proven difficult, says Tulsi, because of the limitations on computing power over the past 10 years. Then, in 2015 or so, processors hit the event horizon, with game-changing advances in GPUs and the dedicated hardware used for deep learning.

Massive calculations can now be made swiftly and cost-effectively. New algorithms are increasing the speed and depth of learning. And deep learning can now go broad across billions of data points with thousands of aspects and dozens of layers.

eBay has no shortage of data. The company manages about 1 billion live listings and 164 million active buyers daily, and receives 10 million new listings via mobile every week.

So another big bet was born: Investment in AI technologies like natural language understanding, computer vision, and semantic search, to drive growth and, Tulsi says, reinvent the future of commerce.

The future looks much like this: their engineering team building descriptive and predictive models from the enormous volume of behavioral and description data generated by eBay's many buyers, sellers, and products. It requires the complex fusion of massive amounts of behavior-log, text, and image data, all with a particular emphasis on developing data-driven models to improve the user experience.

"The question now is, can we provide you with even further personalized, relevant information over the course of the next ten years?" he says. "We're very focused on how AI will impact commerce."

Specifically, how it will impact the primary goal of commerce: understanding consumer buying intent wherever they are, from bricks and mortar to online browsing. Of course, cross-platform understanding of what a shopper wants is the key to delivering a truly personal, contextual shopping experience.

"You want an exact item that you're looking for -- whether you want it, you need it, or you just like it -- at the price point you care about," Tulsi says. "With AI, our aim is to achieve that kind of perfection underneath the hood so you don't have to spend a lot of time finding that ideal match for you."

He points at one of their beta projects, launched last year on Facebook Messenger: the eBay ShopBot. It's essentially a multimodal search engine, or a personalized shopping assistant, powered by contextual understanding, predictive modeling, and machine learning.

Keywords are not enough anymore, and don't offer the most optimized shopping experience. With ShopBot, consumers can text, talk, or snap a picture, and then the assistant asks questions to better understand your intent and dig up hyper-personalized recommendations. And it gets smarter about what you want, every time you use it.

These consumer interactions also yield a tremendous amount of intent data, which can be poured right back into the algorithm.

"Across the three spectrums of multimodal AI that it represents, we're starting to get much, much better at understanding you and whichever way that you want to interact with us," Tulsi says.

And as they're able to improve their ability to simulate human cognitive capabilities -- like perception, language processing, and visual processing -- the company expects that commerce will become increasingly conversational, even to the point where the search box becomes redundant.

"What I think is really exciting going forward is the machine will actually do the thinking for you," Tulsi laughs. "You will just talk naturally to it, as if you're talking to a friend and spitballing, and the machine should be able to understand your intent."

And just as importantly, commerce will become present wherever and whenever users are engaged on their social messaging platforms.

It's an approach that digital assistant-focused companies should sit up and take notice of, Tulsi adds. "They need to start investing in commerce capabilities or partnering with commerce companies to really make their assistant pan out from a financial-model perspective."

"From our perspective, every company should be heavily investing in AI, and it shouldn't just be about using cognitive services but actually developing your own models that keep you on the cutting edge of technology," Tulsi says. "And that will hold you in good stead for many years to come."

See the rest here:

For eBay, AI is ride or die - VentureBeat

How AI could make living in cities much less miserable – MarketWatch


Here's how artificial intelligence can be used to create the "smart city" of the future. Posted August 15, 2017.

Read this article:

How AI could make living in cities much less miserable - MarketWatch

AI Could Start Third World War: Alibaba’s Jack Ma (BABA) – Investopedia

Alibaba Group Holding Limited (BABA) chairman Jack Ma is preparing for the Third World War. Or at least it would seem that way from his comments to television network CNBC during an interview. According to Ma, advances in technology have caused world wars. "The first technology revolution caused World War I. The second technology revolution caused World War II. This is the third technology revolution," he said. But he did not outline the possible causes for this war.

Ma's interview was wide-ranging and covered disparate topics, from the future of humanity to the difference between wisdom and intelligence. He sketched the contours of a future world disrupted by artificial intelligence (AI) trends. According to Ma, the next 30 years will be marked by "very painful" changes for humanity as it enters an age defined by data and artificial intelligence. Ma said that humans will win in a war with machines, because machines do not possess wisdom, which comes from the heart. (See also: Alibaba's Ma: We're Not Looking to Invade US.)

That said, the age of machines will witness far-reaching changes. As machines take over labor-intensive tasks, the working week will diminish to 16 hours, Ma predicted. The extra leisure time will create a mobile population that will work across borders and put a stop to the backlash against globalization. "The only thing is how can we make trade more inclusive, knowledge more inclusive, and this is how we can deal with the instability of the world (that machines will create)," he said. Governments will have to make "hard choices," Ma added. (See also: Jack Ma: Success Story.)

Among those choices will be the decision to open up borders to enable cross-border e-commerce. Under current rules, it is difficult for small businesses to trade across borders using e-commerce sites due to a phalanx of customs and duty provisions. Ma's e-commerce juggernaut is leading the charge for international e-commerce and already has a thriving business in Tmall, its cross-border e-commerce site that sells overseas goods in China. (See also: Special Delivery: Alibaba Wants Faster Traffic to Europe.)

Original post:

AI Could Start Third World War: Alibaba's Jack Ma (BABA) - Investopedia

Soccer looks to AI for an edge: Could an algorithm really predict injuries? – ESPN

Artificial intelligence can drive a car, curate the films and documentaries that you watch, develop chess programmes capable of beating grandmasters and use your face to access your phone. And, one company claims, it can also predict when footballers are about to suffer an injury.

Off the field, football has gone through a huge transformation in the 21st century, with the emergence of GPS-driven player performance data in the early 2000s, followed in the 2010s by the advanced analytics that now form a major part of every top club's player recruitment strategy. Just last month, Manchester City announced the appointment of Laurie Shaw to a new post of lead AI scientist at the Etihad Stadium, taking him from his role as research scientist and lecturer at Harvard University.

Football has always searched out innovations to make small, but crucial, differences. Many have become staples of the game, including TechnoGym to improve biomechanics, IntelliGym to improve cognitive processing and cryogenic gym sessions to ease the strain on muscles. Others have fallen by the wayside. Anyone remember nasal strips or the ball-bending properties of Predator boots?

The use of AI to predict when players are on the brink of suffering an injury could prove to be the next game-changing innovation that becomes a key component at the elite end of the game.

In a game dominated by clubs wanting to discover the extra 1% in marginal gains, keeping a player fit is arguably the most important challenge facing any coach. A depleted squad can lead to negative results and, if a team suffers too many, the manager or coach is generally the one who pays the price. This season has been more challenging than most, with the COVID-19 pandemic leading to fixtures being crammed into a reduced time frame, and players being forced to play 2-3 games a week on a regular basis.


The toll on players' fitness is borne out by the injury lists. Crystal Palace and Southampton fulfilled their midweek Premier League fixtures with 10 first-team squad members sidelined. Champions Liverpool lost to Brighton on Wednesday with eight absentees, including long-term injury victims Virgil van Dijk, Joe Gomez and Joel Matip. Research by premierinjuries.com shows that up to and including match-week 21 of the Premier League this season, there has been a five percent increase in time lost to injuries. At the same stage last season, there were 356 "time-loss absences" (a player missing at least one league game), but the number has jumped to 374 this time around. Including COVID-related absences, the number is 435.
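The "five percent" figure follows directly from the absence counts premierinjuries.com reports (a rough check, excluding the COVID-related absences):

```python
# Time-loss absences up to match-week 21, per premierinjuries.com.
last_season = 356
this_season = 374

increase = (this_season - last_season) / last_season * 100
print(f"{increase:.1f}% increase")  # prints "5.1% increase"
```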

Liverpool had suffered 14 time-loss absences at this stage of last season, but they're now up to 29 in 2020-21. Their league position -- fourth place, seven points adrift of top spot -- suggests they are paying a price for their sharp increase in players lost to injury.

But finding reliable injury prevention technology is the holy grail of sports scientists and fitness coaches. By November, ESPN reported a 16% rise in muscle injuries in the Premier League compared to the same stage last season. So can AI successfully predict when players are about to be injured?

Since the start of the 2017-18 season, La Liga side Getafe have partnered with the California-based AI company Zone7 to break down performance data and predict when players are at risk of injury. In simple terms, clubs like Getafe in Spain, Scottish Premiership leaders Rangers and MLS sides Real Salt Lake and Toronto FC send their training and match data to Zone7, who analyze it using their algorithm and send back daily emails with information about players who may be straying close to the so-called "danger zone."

Between the start of the 2017-18 season and March 2020, when La Liga was suspended due to the COVID-19 pandemic, Getafe recorded a substantial reduction in injuries.

"Three seasons ago, during the first year with Zone7, we saw a reduction of 40% in injury volume," said Javier Vidal, Getafe's head of performance. "As the Zone7 engine became more reliable and we had access to more data in the second year, we saw a reduction of 66 percent in the volume of injuries.

"This means that of every three injuries we had two seasons ago, we now have only one."

Jordi Cruyff, the former Barcelona and Manchester United midfielder, told ESPN that he has become a "minor, minor investor" in Zone7 after trialling the AI tool during his time as sporting director at Maccabi Tel Aviv in 2017. But he admits that he was only convinced by the AI technology after monitoring the data, even though Maccabi's then-coach declined to use it.

"I presented the tool to our then-coach and he wasn't too interested," Cruyff told ESPN. "So for the four to five months the coach was in charge, he would follow his own plan, but we would still give our performance data to the company, which they would run through their algorithm. I would then receive an email before training each day with which players were at risk, and it actually predicted five of seven injuries.

"I thought 'wow.' Once or twice could be a coincidence, but catching five out of seven muscular injuries is a different thing. I would wait until after training to be told if a player had been injured. I would then go back to look at my email and there was the name. We were lucky in some ways that the coach wasn't interested in it because it gave us the chance to test it.


"It was the perfect test, although I wish the coach would have listened, because then we would have avoided some injuries."

Tal Brown, who founded Zone7 with Eyal Eliakim in 2017 having worked together in the Israeli Defense Forces Intelligence Corps, spoke to ESPN to explain how AI can be used to detect injury risk.

"Every single player is now using a GPS vest, they are being tested for strength and flexibility at their clubs, many teams distribute watches to their players to measure sleep, so the reality is that somebody working for a club needs to look at two dozen dashboards every day -- multiplied by 20 players, multiplied by six days a week," Brown said via Zoom. "It is becoming a puzzle that a human brain wasn't really meant to solve.

"We can use a chess metaphor. Chess programmes used to be pretty simplistic and the experts could beat them, but today, a Google chess programme is unbeatable. It's not because Google has taught that chess programme 10,000 equations manually, it is because the programme has automatically studied every recorded chess game played in the history of mankind and, using AI, has developed its own understanding and interpretation.

"We are not there yet as a company. We don't have access to every single football injury that ever occurred, but we are getting much better and there will be a point where a programme focused on injury risk will out-perform humans in interpreting data."

More than 50 clubs across the world now use Zone7's AI programme. Many wish to remain anonymous, in an effort to protect any competitive advantage that the tool may provide -- football clubs are notoriously protective of such proprietary data -- while others simply do not wish to discuss any pros or cons they have discovered while using it. Despite repeated attempts by ESPN to speak to Real Salt Lake and Toronto, neither MLS team responded to enquiries.


Rangers, 23 points clear at the top of the Scottish Premiership and on course for a first domestic title since 2011, adopted Zone7's AI tool last summer and, while keen to make a broader assessment after a full season of use, they believe it's been a valuable addition to their injury prevention strategy.

"I believe AI, coupled with the experience levels of those using it, will eventually become a bedrock within clubs' decision-making as data and technology advances," Jordan Milsom, Rangers' head of performance told ESPN. "Given our players had been exposed to one of the longest lockdowns of all [93 days] and the unknowns associated with such prolonged layoffs, we felt investing in such a system may well provide another layer of support for how we managed the players on what would clearly be a challenging season.

"We haven't used the system long enough to compare season-to-season analysis, and it's important to understand we are a department that is data-informed and not data-driven. But it is my opinion that if such systems are used in this way, it can have many positive benefits."

Rangers manager Steven Gerrard has praised the club's fitness and sports science department, saying in December that the team were enabling his players to "hit top numbers," and Milsom says that the AI data is helping to inform player rotation, even to the extent of highlighting which players should be substituted during games.

"All of our GPS and heart rate training load data from sessions and games is uploaded automatically into the Zone7 system," Milsom said. "The platform digests this, performs its modelling and provides us with risk alerts each day for players.

"Generally, there would be 1-2 players who may be flagged [for further monitoring]. Sometimes, these flags relate to overload -- other times it's under-load. This allows us to have a deeper dive into why specifically they are at risk. This information will feed into our general staff discussions to determine if any further areas support this information. As we typically compete every 3-4 days, if risk is associated with overload, I can often use that information to help support in-game substitutions as a means of maximising player availability, whilst potentially reducing risk through reduced minutes if and when possible."
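Zone7 does not publish its model, but the over- and under-load flags Milsom describes can be illustrated with a standard sports-science heuristic: the acute:chronic workload ratio, which compares a player's recent training load to their longer-term baseline. The thresholds and data below are illustrative assumptions, not Zone7's actual methodology:

```python
def flag_players(loads, acute_days=7, chronic_days=28, low=0.8, high=1.3):
    """Flag players whose acute:chronic workload ratio falls outside
    an assumed safe band. `loads` maps player name -> list of daily
    training-load values, most recent day last."""
    flags = {}
    for player, history in loads.items():
        if len(history) < chronic_days:
            continue  # not enough history for a chronic baseline
        acute = sum(history[-acute_days:]) / acute_days
        chronic = sum(history[-chronic_days:]) / chronic_days
        ratio = acute / chronic if chronic else float("inf")
        if ratio > high:
            flags[player] = ("overload", round(ratio, 2))
        elif ratio < low:
            flags[player] = ("underload", round(ratio, 2))
    return flags

# A sharp spike in recent load flags player A; a steady load does not.
loads = {
    "A": [300] * 21 + [450] * 7,  # recent jump in daily load
    "B": [300] * 28,              # consistent load
}
print(flag_players(loads))  # {'A': ('overload', 1.33)}
```

A real system would fuse many more signals (heart rate, sleep, strength tests, medical history), but the shape of the output matches what Milsom describes: a short daily list of players worth a deeper look.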

The key to the success of the AI tool is the amount of data Zone7 are able to upload and analyse. While Brown stresses that "nobody ever sees your data. We don't own it and we're not allowed to retain a copy of it, post-relationship, so it's very strict," the volume of information provided by each client club is used to create a huge database that then enables the programme to predict injury risk.

"We can use 200 million hours of football data because we are working with 50-60 clients," Brown said. "As a result, we have 50-60 times more data than a typical team has, so the data set is very large. But what is important is that it's not just the injury in the sense of the date it occurred and what happened, it is every single day of training and games and medical data leading to the injury, going back as much as a year prior.

"That amount of information gives us the ability to look at the daily data leading to an incident and, using AI and deep learning, to find patterns that repeat themselves before hamstring injuries or groin injuries or knee injuries happen. That's how it works.

"If you are trying to forecast an event, which is an injury, you need to have a big database of incidents. A typical team would have something like 30-40 incidents a year for a squad, so multiply that by several years of historical data."


ESPN has spoken to people in sports science who believe that AI is a positive innovation if used alongside existing methods. "Their results are impressive," said one sports scientist, who has worked with several Premier League clubs in the past and spoke on condition of not being named. "The issue is the level of individualisation with injury results is high, so lots of variant data only gives you a small answer. Therefore, it definitely has to be a blended approach."

Zone7's AI tool is not restricted to sports. Using Garmin wearable devices in tandem with Zone7's platform, medical staff in Israel are having their health and well-being monitored during the COVID-19 pandemic, and there is a similar project with a major hospital in New York City. There are also ongoing projects with military and special forces. In football, however, Getafe are the best example of AI being used successfully to improve the fitness record of a team, as explained by head of performance Vidal.

"It would take 200 people all day to analyse the data, but with this, I get the recommendations within minutes," Vidal said. "We use our own high-quality ultrasound to clinically evaluate players that show predefined risk indications. After starting to use Zone7, some players would report feeling fine despite the engine identifying immediate risk for them.

"In many cases, our ultrasound tests confirmed muscular damage, allowing us to address this before the injury occurred. These players could have sustained injury but for the AI detection."

Cruyff, now coaching in China with Shenzhen FC, believes AI can become a key component for teams, but he makes clear that AI alone cannot be regarded as the silver bullet to prevent all injuries.

"It's not a deciding tool," he said. "You can see a risk of injury and decide to take the risk or not. It's part of the modernisation of sport. You have so many things -- video analysts, GPS tracking devices -- and I think this is a part that maybe we missed, but it is coming, little by little."

See the original post:

Soccer looks to AI for an edge: Could an algorithm really predict injuries? - ESPN

3 Tips to Find a Good AI Partner for Your Recovery – Entrepreneur

May 7, 2020. 4 min read.

Opinions expressed by Entrepreneur contributors are their own.

Savvy business owners have already begun to prepare for the new normal, taking advantage of the opportunity to arm themselves with new tools for the future. That's not to say recovery will be easy for everyone, or even that everyone will recover. In April, Main Street America reported that 7.5 million small businesses could close permanently within five months. Nearly half of those businesses could close permanently in just two months, especially if they don't receive financial assistance.

Businesses can't expect to jump back into the drivers seat and pick up where they left off. Future success demands present growth, and in a future filled with smarter technologies, artificial intelligence stands out as the most vital investment for businesses of all sizes.

Tight budgets currently prevent many founders from pursuing their vision. For many, scraping by from one day to the next counts as a win. This state of affairs can't continue for long, though. Something has to give. Businesses that attempt to hold tight instead of pressing forward will find their grip slipping as a new and tougher market lets them fall.

Rather than take a conservative wait-and-see approach, entrepreneurs should do what they do best in these situations: innovate, explore and challenge the status quo. Tools like artificial intelligence empower even the smallest businesses to scale their capabilities beyond what they could accomplish on their own, enabling them to get more out of limited resources.

What better way to survive and thrive than to implement solutions that yield better returns on smaller investments?

Related: 6 AI Business Tools for Entrepreneurs on a Budget

During times of stress and complexity, entrepreneurs don't have to go it alone. Artificial intelligence tools provided by competent and helpful partners can help businesses do more with their limited budgets and push through any challenge. Finding the right partner isn't always easy, but with a little preparation and some digging, every company can identify and implement an AI solution to make life a little easier.

Check out these helpful tips to find the right AI partner:

AI tools remain decades away from human-level intelligence. Most business owners can perform all the same tasks an AI tool can perform. The difference is that competent AI can help companies prioritize and streamline workflows, saving humans time as the tools take care of the details.

Related: How Entrepreneurs Can Use AI to Boost Their Business

CureMetrix, an AI business that operates in the medical industry, cautions radiology professionals against the dangers of burnout. With so many important decisions to make, people can quickly succumb to analysis paralysis if left unchecked. When evaluating potential AI partners, look for someone who can help the business and its workers prioritize and manage workloads.

When vetting partners, look into which tool will be most beneficial for you. As a branch of AI, machine learning involves teaching systems to learn from data sets and make independent decisions based on that information. Artificial intelligence includes a host of functions, machine learning included, so business owners should weigh their needs against any offer from a potential partner.

Not sure what type of solution to go for? Business AI provider D-Labs put together this helpful guide on how to evaluate AI solutions for specific business needs.

Businesses can use AI for all sorts of cool things. Right now, though, most companies can't afford to splurge on luxuries. Microsoft highlights a few different uses of AI and clarifies exactly how different implementations impact the businesses using them.

For example, a business with interactive customer-service AI could answer basic questions and provide simple services without the need for human intervention. Chatbots are a popular version of this type of AI. Customer relationship management (CRM) solutions empower companies to track and manage communications to maximize time spent on the most promising leads.

Microsoft also mentions the benefits of cybersecurity AI, which may not translate directly to revenue but can save business owners thousands of dollars in avoided mischief. With hacker activity up, cybersecurity investments may be prudent for companies that can afford them.

Related: 3 Ways You Can Use Artificial Intelligence to Grow Your Business Right Now

Why wait until the pandemic passes to start looking into smarter tools? By adding the right technologies and partners now, business owners can get ahead of the curve, equipped with the ability to earn more money with fewer resources. No one knows how long the downturn in the economy will last, so the sooner businesses invest in themselves, the more impactful the returns will be.

See original here:

3 Tips to Find a Good AI Partner for Your Recovery - Entrepreneur

Ultria Unveils Orbit AI, the Latest Addition to Its AI Powered CLM Capabilities – The Trentonian

PRINCETON, N.J., Oct. 25, 2019 /PRNewswire/ -- Ultria, a leading provider of Enterprise Contract Lifecycle Management in the cloud, unveiled ULTRIA ORBIT AI, a next-generation Artificial Intelligence solution, expanding its AI-powered Contract Management portfolio.

"Orbit AI is a complementary solution for our contract management application," said Arthur Raguette, Executive Vice President, Ultria. "We are proud to introduce Orbit as a part of our contract management suite and to provide additional value to our customers' CLM experience. We expect Orbit to be a game-changer and take contract management across the horizon."

Ultria Orbit is designed to assist users across the contract lifecycle, from drafting and assembling contracts, streamlining work flows, to post-contract compliance and obligations management. Orbit's abilities to extract metadata and identify clause match conditions are powered by proprietary Artificial Intelligence algorithms.

Orbit embodies three different personas, a Guide, an Explorer, and a Protector, all powered by Ultria's Artificial Intelligence capabilities to streamline your contract management journey.

Orbit Guide steers users through key stages of intake request, authoring, assembling, and negotiating.

Orbit Explorer works as an intelligent companion for more intuitive searching with more accurate results and faster reporting and analytics.

Orbit Protector safeguards organizations from the complexities of a rapidly shifting regulatory landscape, ever-increasing commercial compliance risks, and post-award commercial and compliance management.

Read more about Ultria Orbit here and harness Orbit's AI capabilities to enhance your team's performance and transform your contracts into dynamic, intuitive, and smart legal documents for informed decision making.

About Ultria:

Ultria develops and licenses Artificial Intelligence-powered applications, including its flagship Contract Lifecycle Management solution for the enterprise. Ultria CLM is a proven, scalable, SaaS-deployed Contract Lifecycle Management system that leverages Artificial Intelligence and Machine Learning to be robustly and rapidly provisioned in today's complex business landscapes. Ultria Orbit, Ultria's Artificial Intelligence offering, enables teams to analyze contract terms and extract metadata from any contract. For information on Ultria, visit www.ultria.com.

Press Contact: Rewa Kulkarni, Marketing and Public Relations, Ultria. Email: rewa.kulkarni@ultria.com

Related Images

say-hello-to-ultria-orbit-ai.png: Say Hello to Ultria Orbit AI

Here is the original post:

Ultria Unveils Orbit AI, the Latest Addition to Its AI Powered CLM Capabilities - The Trentonian

AI will help us download meeting notes to our brains by 2030 – VentureBeat

The internet is overflowing with tips on how to hack your health. From increasing cognitive function by drinking butter-spiked coffee to tracking sleep, stress, and activity levels with increasingly sophisticated fitness wearables, ours is a culture obsessed with optimizing performance. Combining this ethos with recent breakthroughs in artificial intelligence, it's practically inevitable that the next frontier in achieving superhuman status lies in the rapidly developing field of brain augmentation.

Artificial intelligence has already proven its value in making software more intuitive and user-friendly. From voice-activated personal assistants like Alexa and Siri becoming the new norm, to smarter app authentication through facial recognition technology, we have reached the point where people are starting to trust that the machines are here to improve our lives. The science-fiction fear of bots taking over is being put to rest as consumers embrace the ease and enhanced security that AI brings to our daily devices. Now that it has nestled itself comfortably inside our smartphones, scientists are aiming higher with the next device hack: the human brain.

Visionary entrepreneurs including Elon Musk and Bryan Johnson have teamed up with scientists around the world to make brain augmentation a reality sooner than you may have thought possible. Simply put, the goal is to enhance intelligence and repair damaged cognitive abilities through brain implants. Duke University senior researcher Mikhail Lebedev, who recently published a comprehensive collection of 150 brain augmentation research papers and articles, is confident that brain augmentation will be an everyday reality by 2030.

Lebedev's main focus of research is developing a device that can be fully implanted in the brain. Creating a power source and wireless communication system is a huge challenge, but one that Elon Musk is also working on. Musk made headlines earlier this year with the launch of Neuralink, a company working on the development of what science fiction fans refer to as neural lace: the merging of the human brain with software to optimize the output of both biological and technological functioning. Musk hopes to offer a new treatment for severe brain traumas, including stroke and cancer lesions, in about four years.

With Neuralink still in its early stages, other Silicon Valley heavy hitters are eager to crack the code of brain augmentation. Braintree founder Bryan Johnson invested more than $100M of personal funding to launch Kernel, a startup staffed by neuroscientists and engineers working to reverse the effects of neurodegenerative diseases such as Parkinson's through the creation of a neuroprosthetic in the form of a tiny embeddable chip. Scientists admit that much research on how neurons function and interact needs to happen before neural code can be written by computers, but the resources and attention garnered by some of today's brightest entrepreneurs are sure to accelerate the process.

While we wait for technology to advance to the level of creating a fully implantable brain enhancement device, the short term breakthroughs we can expect to see from AI brain augmentation revolve around sensory augmentation.

The use of electronic stimuli to trigger the brain into producing artificial sensations has huge possibilities for improving damaged cognitive functioning. Vision could be triggered for the blind to experience sight for the first time. Sensory touch could be stimulated in paralysed limbs. And cognitive functions that tend to degenerate with age, such as memory, could be optimized.

The implications are even larger than repairing cognitive functioning, though. In 2013, Miguel Nicolelis, a neurobiologist at Duke University, successfully led an experiment demonstrating a direct communication linkage between the brains of rats. This first successful brain-to-brain interface allowed rats to electronically share information on how to respond to stimuli, and the implications for humans could be staggering. From sharing memories to sharing information, altering our shared consciousness is a more far-flung but nevertheless attainable goal of AI. Imagine all the collective suffering in office conference rooms that could be eliminated if meetings could be directly downloaded to our brains!

The field of AI-based brain augmentation represents the biggest evolutionary step forward in mankind's history. Creating technologies to augment and enhance human intelligence holds the promise of eliminating diseases and providing a higher quality of life through optimizing, well, everything. Just think: the smartphone was just a crazy idea until the iPhone hit the market ten years ago. Now 44 percent of the world's population owns a smartphone, along with the ability to expand its computing powers exponentially by connecting to the cloud.

Famed futurist and Google executive Ray Kurzweil predicts that by the 2030s, nanobots will enter our brains via capillaries, providing a fully immersive virtual reality that connects our neocortex to the cloud, expanding our brain power similar to how our smartphones tap into the cloud for outsized computing power.

If Kurzweil's incredible track record of predicting emerging technologies is any indicator (he's been right about 86 percent of his predictions since the early '90s), then we can expect to add a whole new meaning to the phrase "head in the clouds." We're living in an exciting age where what was once science fiction is becoming reality, and having a head in the clouds will no longer mean being lost in daydreams, but plugged into the enhanced intelligence of a superbrain.

Andrew DiCosmo is the CTO for Blackspoke, a company that specializes in IT consulting to the Federal Government.

See the original post:

AI will help us download meeting notes to our brains by 2030 - VentureBeat

Teaming with AI: How Microsoft is taking on Zoom on virtual background front – The Financial Express

When governments the world over announced lockdowns, the hunt for the best collaboration and video-calling apps began for most users. There were video-calling apps for fun, such as Houseparty, and then there were business apps. But competition in the space was limited. Zoom captured a large share of the market with its user interface and accessible features. The troubles that followed, as the company struggled to keep up with demand and faced scrutiny over its Chinese servers, gave space to the likes of Microsoft and Google to add more users. But as work from home becomes the norm, and people get attuned to living with video-calling apps, companies are incorporating more features to keep their userbase. One of the biggest highlights for all apps has been the use of artificial intelligence and machine learning to attract users. The latest addition to this is Microsoft.

What has Microsoft introduced? Microsoft last week announced features that let users enable Together mode, where participants appear to sit together in a shared environment. So, with a virtual background, you can see everyone sitting right in front of you in a classroom, library or coffee-house setting, making the whole experience more personal. Microsoft is also trying to incorporate a feature that lets you adjust the brightness and other parameters of the video.

How is it different from virtual backgrounds? Zoom has had virtual backgrounds for a long time now. Microsoft is a late entrant, but the concept is the same. When Zoom applies a virtual background, it estimates the depth of the image in order to superimpose another background on it. A machine-learning algorithm identifies the human component and replaces the rest. The technology is not perfect: move the camera too fast and it will not work. In this case, Microsoft is using the same technology to extract you from the image and place you in the same room as friends and colleagues, sitting behind a desk or a table. That way you can see all the participants in one window.
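Once a segmentation model has produced a person mask, the compositing step itself is simple alpha blending. The sketch below (a minimal illustration, not Zoom's or Microsoft's actual pipeline; the mask here is hand-written, whereas in practice it comes from a trained model) shows how a mask selects person pixels from the camera frame and background pixels from the replacement image:

```python
import numpy as np

def composite(frame, background, mask):
    """Blend a person onto a new background using a segmentation mask.

    frame, background: HxWx3 float arrays; mask: HxW floats in [0, 1],
    where 1 marks pixels classified as the person.
    """
    mask = mask[..., None]  # broadcast the mask over the color channels
    return mask * frame + (1.0 - mask) * background

# Toy 2x2 frame: left column is "person", right column gets replaced.
frame = np.full((2, 2, 3), 200.0)
background = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
out = composite(frame, background, mask)
print(out[0, 0, 0], out[0, 1, 0])  # person pixel kept, background swapped in
```

A soft mask (values between 0 and 1 at hair and clothing edges) blends the boundary instead of cutting it hard, which is why fast camera motion, where the mask lags the subject, breaks the illusion.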

What is Google Meet doing? Google is using AI differently. Instead of applying it to video, it is using the technology to cut out background noise. This active noise filtering means that you hear only the voice of the speaker, and every other sound gets muffled.


Weird AI illustrates why algorithms still need people – The Next Web

These days, it can be very hard to determine where to draw the boundaries around artificial intelligence. What it can and can't do is often unclear, as is where its future is headed.

In fact, there's also a lot of confusion surrounding what AI really is. Marketing departments have a tendency to somehow fit AI into their messaging and rebrand old products as AI and machine learning. The box office is filled with movies about sentient AI systems and killer robots that plan to conquer the universe. Meanwhile, social media is filled with examples of AI systems making stupid (and sometimes offensive) mistakes.

"If it seems like AI is everywhere, it's partly because artificial intelligence means lots of things, depending on whether you're reading science fiction or selling a new app or doing academic research," writes Janelle Shane in You Look Like a Thing and I Love You, a book about how AI works.

Shane runs the famous blog AI Weirdness, which, as the name suggests, explores the weirdness of AI through practical and humorous examples. In her book, Shane taps into her years-long experience and takes us through many examples that eloquently show what AI (or more specifically, deep learning) is and what it isn't, and how we can make the most of it without running into its pitfalls.

While the book is written for the layperson, it is definitely a worthy read for people who have a technical background, and even for machine learning engineers who don't know how to explain the ins and outs of their craft to less technical people.

In her book, Shane does a great job of explaining how deep learning algorithms work. From stacking up layers of artificial neurons, feeding in examples, backpropagating errors, using gradient descent, and finally adjusting the network's weights, Shane takes you through the training of deep neural networks with humorous examples such as rating sandwiches and coming up with knock-knock ("who's there?") jokes.
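The loop Shane describes (forward pass, measure the error, backpropagate it, nudge the weights by gradient descent) can be shown end to end in a few lines. This is a generic NumPy sketch of that training loop on the classic XOR toy problem, not anything from the book; the layer sizes and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of artificial neurons, one output neuron.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)                 # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2)                 # network's prediction
    grad_p = p - y                           # error signal (cross-entropy gradient)
    grad_W2 = h.T @ grad_p                   # backpropagate to output weights
    grad_h = grad_p @ W2.T * (1 - h ** 2)    # backpropagate through tanh
    W2 -= lr * grad_W2 / len(X); b2 -= lr * grad_p.mean(0)   # gradient descent
    W1 -= lr * (X.T @ grad_h) / len(X); b1 -= lr * grad_h.mean(0)

print(np.round(p.ravel(), 2))  # predictions approach [0, 1, 1, 0]
```

Every deep network, whether it rates sandwiches or writes jokes, is trained by some elaboration of exactly this adjust-weights-against-the-error loop.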

All of this helps us understand the limits and dangers of current AI systems, which have nothing to do with super-smart terminator bots who want to kill all humans or software systems planning sinister plots. "[Those] disaster scenarios assume a level of critical thinking and a humanlike understanding of the world that AIs won't be capable of for the foreseeable future," Shane writes. She uses the same context to explain some of the common problems that occur when training neural networks, such as class imbalance in the training data, algorithmic bias, overfitting, interpretability problems, and more.

Instead, the threat of current machine learning systems, which she rightly describes as narrow AI, is to consider them too smart and rely on them to solve problems broader than their scope of intelligence. "The mental capacity of AI is still tiny compared to that of humans, and as tasks become broad, AIs begin to struggle," she writes elsewhere in the book.

AI algorithms are also very unhuman and, as you will see in You Look Like a Thing and I Love You, they often find ways to solve problems that are very different from how humans would do it. They tend to ferret out the sinister correlations that humans have left in their wake when creating the training data. And if there's a sneaky shortcut that will get them to their goals (such as pausing a game to avoid dying), they will use it unless explicitly instructed to do otherwise.

"The difference between successful AI problem solving and failure usually has a lot to do with the suitability of the task for an AI solution," Shane writes in her book.

As she delves into AI weirdness, Shane sheds light on another reality of deep learning systems: they can sometimes be a needlessly complicated substitute for a commonsense understanding of the problem. She then takes us through a number of other, often overlooked disciplines of artificial intelligence that can prove equally efficient at solving problems.

In You Look Like a Thing and I Love You, Shane also takes care to explain some of the problems that have been created as a result of the widespread use of machine learning in different fields. Perhaps the best known is algorithmic bias, the intricate imbalances in AI's decision-making that lead to discrimination against certain groups and demographics.

There are many examples where AI algorithms, in their own weird ways, discover the racial and gender biases of humans and replicate them in their decisions. What makes this more dangerous is that they do it unknowingly and in an uninterpretable fashion.

"We shouldn't see AI decisions as fair just because an AI can't hold a grudge. Treating a decision as impartial just because it came from an AI is known sometimes as mathwashing or bias laundering," Shane warns. "The bias is still there, because the AI copied it from its training data, but now it's wrapped in a layer of hard-to-interpret AI behavior."

This mindless replication of human biases becomes a self-reinforcing feedback loop that can become very dangerous when unleashed in sensitive fields such as hiring decisions, criminal justice, and loan applications.

The key to all this may be human oversight, Shane concludes. "Because AIs are so prone to unknowingly solving the wrong problem, breaking things, or taking unfortunate shortcuts, we need people to make sure their brilliant solution isn't a head-slapper. And those people will need to be familiar with the ways AIs tend to succeed or go wrong."

Shane also explores several examples in which not acknowledging the limits of AI has resulted in humans being enlisted to solve problems that AI can't. Also known as the Wizard of Oz effect, this invisible use of often-underpaid human bots is becoming a growing problem as companies try to apply deep learning to anything and everything and look for an excuse to put an AI-powered label on their products.

"The attraction of AI for many applications is its ability to scale to huge volumes, analyzing hundreds of images or transactions per second," Shane writes. "But for very small volumes, it's cheaper and easier to use humans than to build an AI."

All the egg-shell-and-mud sandwiches, the cheesy jokes, the senseless cake recipes, the mislabeled giraffes, and all the other weird things AI does bring us to a very important conclusion. "AI can't do much without humans," Shane writes. "A far more likely vision for the future, even one with the widespread use of advanced AI technology, is one in which AI and humans collaborate to solve problems and speed up repetitive tasks."

While we continue the quest toward human-level intelligence, we need to embrace current AI as what it is, not what we want it to be. "For the foreseeable future, the danger will not be that AI is too smart but that it's not smart enough," Shane writes. There's every reason to be optimistic about AI and every reason to be cautious. It all depends on how well we use it.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

Published July 18, 2020 13:00 UTC

Big Tech Is Battling Cyberthreats With AI – Motley Fool

Cybercrime is a worldwide epidemic, and frequent headlines attest to the need for novel solutions. Research firm Cybersecurity Ventures estimates that the global cost of cybercrime will reach $6 trillion annually by 2021, double the $3 trillion cost in 2015. It further reports that spending on products and services to defend against cybercrime will exceed $1 trillion over the next five years. With the number of hacks and infiltrations on the rise, a growing shortage of cybersecurity experts will leave an estimated 3.5 million positions in the field open by 2021.

The most recent example of a widespread threat is the ransomware WannaCry, which spread to 150 countries and over 200,000 organizations. Infected computers were encrypted while hackers demanded payments from users for the release of their data.

Experts are increasingly turning to artificial intelligence (AI) in the fight against cybercrime for its ability to analyze data more quickly than its human counterparts and potentially block malicious code before it gains access or causes significant damage. Microsoft Corporation (NASDAQ:MSFT) just joined a growing number of tech companies that are banking on AI to bridge the gap in the battle for the digital domain.

Hexadite can respond to cyberthreats in minutes. Image source: Pixabay.

Microsoft has acquired cybersecurity start-up Hexadite, which specializes in the use of AI to identify and respond to cyberattacks. This acquisition will allow the company to expand its existing capabilities and portfolio of security products. Most cyberattacks are the result of sophisticated algorithms running protocols to identify vulnerabilities and exploit them. Hexadite claims that its AI-based solution reduces the time necessary to respond to cyberincidents by 95%. Its system can launch multiple "probes" and identify breaches in real time, allowing either a human or automated response to begin within minutes.

Microsoft has acquired a number of start-ups in recent years in an effort to increase the security of its Azure cloud-computing services. In 2014, Microsoft picked up enterprise-security company Aorato, which uses machine learning to create a behavior-monitoring firewall to quickly identify anomalies in data networks. By continuously reviewing and updating its interpretation of normal user behavior, the system can detect unusual or suspicious activity within a company's network before threats are realized. This latest move by the software giant is part of a broader trend that's changing the way companies combat cyberincursions.

E-commerce giant Amazon.com, Inc. (NASDAQ:AMZN) acquired AI-based cybersecurity company Harvest.ai, which uses analytics to spot unusual behavior by users and within key business systems. The system determines the importance of, and assigns values to, critical documents, data, and source code to detect and eliminate data breaches. The company's flagship MACIE system provides real-time monitoring and detects unauthorized access to prevent targeted attacks and data leaks. It is thought that Amazon used Harvest.ai to strengthen the security of its Amazon Web Services cloud-computing service.

Microsoft will roll out Hexadite to commercial Windows 10 customers. Image source: Getty Images.

Tech giant International Business Machines Corporation (NYSE:IBM) embarked on a mission in mid-2016 to train its AI-based cognitive-computing system, Watson, in cybersecurity. The company partnered with a number of universities in a yearlong research project to gain access to data on previous security threats. By processing this data, Watson would be able to discover similar events based on the information contained in the volumes of security data. IBM later expanded the project to more than 40 organizations from a variety of industries to further Watson's capabilities in the area.

In early 2017, the company announced that after ingesting over 1 million security documents, Watson for Cyber Security would be available to its customers to enhance their cybersecurity proficiency.

Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Google has been a pioneer in AI and has been at the forefront of numerous AI technologies, from autonomous driving to designing a chip that may be the future of AI systems. One of the more intriguing developments in its AI research may have broad implications in the field of cybersecurity.

Google detailed in a research paper how two AI systems were tasked with communicating with each other, while preventing a third from discovering the content of their communications. The two systems, called Bob and Alice, were given a security key that was not provided to Eve, the third system. While they were not trained regarding coding, Bob and Alice were able to devise a sophisticated encryption protocol that stymied Eve. This could have practical applications for cybersecurity in the future.
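In Google's experiment, the encryption scheme itself was a learned neural network; nothing about it is reproduced here. As a conceptual illustration only, the toy one-time-pad sketch below shows the setup the three systems were given: Alice and Bob share a key that Eve lacks, so the ciphertext alone tells Eve nothing.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared by Alice and Bob only

ciphertext = xor_bytes(message, key)     # Alice encrypts
recovered = xor_bytes(ciphertext, key)   # Bob decrypts with the shared key
print(recovered)                         # Eve, seeing only ciphertext, learns nothing
```

The remarkable part of Google's result is that Bob and Alice converged on a key-dependent scheme of their own devising, rather than being handed one like this.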

As cybercriminals and their techniques become more sophisticated, so, too, must the methods used to defend against them. Since complex algorithms are being used to perpetrate these attacks, it seems only fitting that artificially intelligent software systems be used to secure against the intrusions. While there is probably no silver bullet, each new advancement adds another weapon in the battle for cybersecurity.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Danny Vena owns shares of Alphabet (A shares) and Amazon. Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Amazon. The Motley Fool has a disclosure policy.

More Bad News for Gamblers: AI Wins Again – HPCwire (blog)

AI-based poker-playing programs have been upping the ante for lowly humans. Notably, several algorithms from Carnegie Mellon University (e.g. Libratus, Claudico, and Baby Tartanian8) have performed well. Writing in Science last week, researchers from the University of Alberta, Charles University in Prague and Czech Technical University report that their poker algorithm, DeepStack, is the first computer program to beat professional players in heads-up no-limit Texas hold'em poker.

Sorting through the firsts is tricky in the world of AI game-playing programs. What sets DeepStack apart from other programs, say the researchers, is its more realistic approach, at least in games such as poker where all factors are never fully known (think bluffing, for example). Heads-up no-limit Texas hold'em (HUNL) is a two-player version of poker in which two cards are initially dealt face down to each player, and additional cards are dealt face-up in three subsequent rounds. No limit is placed on the size of the bets, although there is an overall limit to the total amount wagered in each game.

"Poker has been a longstanding challenge problem in artificial intelligence," says Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing."

"Using GTX 1080 GPUs and CUDA with the Torch deep learning framework, we train our system to learn the value of situations," says Bowling on an NVIDIA blog. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

"In the last two decades," write the researchers, "computer programs have reached a performance that exceeds expert human players in many games, e.g., backgammon, checkers, chess, Jeopardy!, Atari video games, and Go." These successes all involve games with information symmetry, where all players have identical information about the current state of the game. "This property of perfect information is also at the heart of the algorithms that enabled these successes," write the researchers.

"We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning."

In total, 44,852 games were played by the thirty-three players, with 11 players completing the requested 3,000 games, according to the paper. Over all games played, DeepStack won 492 mbb/g. "This is over 4 standard deviations away from zero, and so, highly significant." According to the authors, professional poker players consider 50 mbb/g a sizable margin. "Using AIVAT to evaluate performance, we see DeepStack was overall a bit lucky, with its estimated performance actually 486 mbb/g."

(For those of us less prone to take a seat at the Texas hold'em poker table, mbb/g stands for milli-big-blinds per game: the average winning rate over a number of hands, measured in thousandths of big blinds. A big blind is the initial wager made by the non-dealer before any cards are dealt; it is twice the size of the small blind, the initial wager made by the dealer before any cards are dealt.)
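The unit conversion is straightforward arithmetic. The sketch below shows it; the total-winnings figure is purely illustrative, back-computed from the reported 492 mbb/g over 44,852 games, not a number stated in the paper:

```python
def mbb_per_game(total_won_in_big_blinds: float, games: int) -> float:
    """Average winnings per game, expressed in thousandths of a big blind."""
    return 1000.0 * total_won_in_big_blinds / games

# Illustrative: winning roughly 22,067 big blinds over 44,852 games
# works out to about 492 mbb/g, DeepStack's reported margin.
print(round(mbb_per_game(22067, 44852)))  # 492
```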

It's an interesting paper. Game theory, of course, has a long history, and as the researchers note, the founder of modern game theory and computing pioneer, von Neumann, envisioned reasoning in games without perfect information: "Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory." One game that fascinated von Neumann was poker, where players are dealt private cards and take turns making bets or bluffing on holding the strongest hand, calling opponents' bets, or folding and giving up on the hand and the bets already added to the pot. Poker is a game of imperfect information, where players' private cards give them asymmetric information about the state of the game.

According to the paper, the DeepStack algorithm is composed of three ingredients: "a sound local strategy computation for the current public state, depth-limited look-ahead using a learned value function to avoid reasoning to the end of the game, and a restricted set of look-ahead actions. At a conceptual level these three ingredients describe heuristic search, which is responsible for many of AI's successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games."

The researchers describe DeepStack's architecture as a standard feed-forward network with seven fully connected hidden layers, each with 500 nodes and parametric rectified linear units for the output. The turn network was trained by solving 10 million randomly generated poker turn games, which used randomly generated ranges, public cards, and a random pot size. The flop network was trained similarly with 1 million randomly generated flop games.
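A shape-only sketch of that architecture follows: seven fully connected layers of 500 units with parametric ReLU activations. The input and output dimensions here are placeholders (the real network maps card ranges and pot sizes to counterfactual values), and applying PReLU at every layer is a simplification of the paper's description:

```python
import numpy as np

rng = np.random.default_rng(1)

def prelu(z, a=0.25):
    # Parametric ReLU: identity for positive inputs, slope `a` otherwise.
    return np.where(z > 0, z, a * z)

# Seven fully connected hidden layers of 500 units, as described.
# Input size 50 and output size 10 are placeholder dimensions.
sizes = [50] + [500] * 7 + [10]
weights = [rng.normal(0, 0.05, (m, n)) for m, n in zip(sizes, sizes[1:])]

x = rng.normal(size=(1, sizes[0]))
for W in weights:
    x = prelu(x @ W)  # forward pass through one fully connected layer

print(x.shape)  # (1, 10)
```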

Link to paper: http://science.sciencemag.org/content/early/2017/03/01/science.aam6960.full

Link to NVIDIA blog: https://news.developer.nvidia.com/ai-system-beats-pros-at-texas-holdem/

AI Trust Remains an Issue in the Life Sciences – EnterpriseAI

The rise of AI machine and deep learning in life sciences has stirred the same excitement and skepticism as in other fields of scientific research.

AI is mostly used in two areas of life sciences. The first is embedded in instruments such as cryo-electron microscopes, where AI tools assist in feature recognition. Those tools are mostly hidden from users.

The other application includes high-profile projects such as the National Cancer Institute-Department of Energy effort known as CANDLE (the NCI-DoE's CANcer Distributed Learning Environment), which has access to supercomputing capacity and plenty of AI expertise.

AI is, of course, being used against COVID-19 at many large labs. There's not much in between.

AI nevertheless remains in its infancy elsewhere in life sciences. The appetite for its use is high, but so are the early obstacles, not least the hype, which has already ticked off clinicians. Other stumbling blocks include uneven data quality, non-optimal compute infrastructure and limited AI expertise.

On the plus side, practical pilot programs are emerging.

Sister website HPCwire spoke with Fernanda Foertter, senior scientific consultant at BioTeam, the research computing consultancy. An AI specialist, Foertter joined BioTeam from Nvidia, where she worked on AI for healthcare. Foertter also did a stint at Oak Ridge National Laboratory as a data scientist working on scalable algorithms for data analysis and deep learning.

What got the AI ball rolling at research agencies was natural language processing (NLP), especially within the Energy Department's supercomputer initiatives, Foertter noted. "We have the CANDLE project whose three pilots had the main, basic AI applications [like] NLP, using AI for accelerating molecular dynamics and drug discovery."

"NLP is actually working really well, [and] the molecular dynamics is working really well," Foertter added. "The drug discovery issue was they didn't have the right data to begin with. So, they're still generating data, in vivo data, for that."

The application of HPC to AI also has been used extensively in COVID-19 research. It seems reasonable to expect those efforts will not only bear fruit against the pandemic, but also generate new approaches for using HPC cum AI in life sciences research.

Still, making AI work in clinical settings remains challenging.

When Foertter joined Nvidia in 2018, medical imaging applications were emerging. Those expectations have since been tempered. "We went from talking about, 'Can we discover pneumonia? Can we discover tumors?' to talking about being able to grade tumors, which is much more refined. The one application I think everybody wishes would happen really, really quickly, but hasn't really materialized, is digital pathology," said Foertter.

The workflow remains challenging. "Somebody has to go through a picture that has a very, very high pixel number," she continued. "They choose a few points and they have experience, and the miss rate is anywhere between 30 percent to 40 percent. That means you can send somebody to pathology and they're going to miss that you have cancer."

Large image formats also have slowed digital pathology. "To do any sort of convolution neural network on a really large image to train it would just break it," Foertter explained. "Just the memory size is really hard."

AI hype also has turned off clinicians, particularly claims that the technology would make them obsolete. "There's a lot of animosity [among] physicians. The whole AI thing was kind of sold as if it could replace a lot of folks," an impression that AI vendors moved quickly to correct.

Hence, trust and the ability to see and analyze the data used for training and inference remain issues within the medical community.

AI is still seen as a "black box," Foertter stressed.

Read John Russell's full report detailing AI's impact on the life sciences here.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).

Emory students advance artificial intelligence with a bot that aims to serve humanity – SaportaReport

A team of six Emory computer science students is helping to usher in a new era in artificial intelligence. They've developed a chatbot capable of making logical inferences, one that aims to hold deeper, more nuanced conversations with humans than have previously been possible. They've christened their chatbot Emora, because it sounds like a feminine version of Emory and is similar to a Hebrew word for an eloquent sage.

The team is now refining their new approach to conversational AI: a logic-based framework for dialogue management that can be scaled to conduct real-life conversations. Their longer-term goal is to use Emora to assist first-year college students, helping them navigate a new way of life, deal with day-to-day issues, and find the right human contacts and other resources when needed.

Eventually, they hope to further refine their chatbot, developed during the era of COVID-19 with the philosophy "Emora cares for you," to assist people dealing with social isolation and other issues, including anxiety and depression.

The Emory team is headed by graduate students Sarah Finch and James Finch, along with faculty advisor Jinho Choi, associate professor in the Department of Computer Science. The team also includes graduate student Han He and undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell. All the students are members of Choi's Natural Language Processing Research Laboratory.

"We're taking advantage of established technology while introducing a new approach in how we combine and execute dialogue management so a computer can make logical inferences while conversing with a human," Sarah Finch says.

"We believe that Emora represents a groundbreaking moment for conversational artificial intelligence," Choi adds. "The experience that users have with our chatbot will be largely different than with chatbots based on traditional, state-machine approaches to AI."

Last year, Choi and Sarah and James Finch headed a team of 14 Emory students that took first place in Amazon's Alexa Prize Socialbot Grand Challenge, winning $500,000 for their Emora chatbot. The annual Alexa Prize challenges university students to make breakthroughs in the design of chatbots, also known as socialbots: software apps that simplify interactions between humans and computers by allowing them to talk with one another.

This year, they developed a completely new version of Emora with the new team of six students.

They made the bold decision to start from scratch, instead of building on the state-machine platform they developed in 2020 for Emora. "We realized there was an upper limit to how far we could push the quality of the system we developed last year," Sarah Finch says. "We wanted to do something much more advanced, with the potential to transform the field of artificial intelligence."

They based the current Emora on three types of frameworks, advancing core natural language processing technology, computational symbolic structures, and probabilistic reasoning for dialogue management.

They worked around the clock, making it into the Alexa Prize finals in June. They did not complete most of the new system, however, until just a few days before they had to submit Emora to the judges for the final round of the competition.

That gave the team no time to put finishing touches on the new system, work out the bugs, and flesh out the range of topics it could deeply engage in with a human. While they did not win this year's Alexa Prize, the strategy led them to develop a system that holds greater potential to open new doors for AI.

In the run-up to the finals, users of Amazon's virtual assistant, known as Alexa, volunteered to test out the competing chatbots, which were not identified by their names or universities. A chatbot's success was gauged by user ratings.

"The competition is extremely valuable because it gave us access to a high volume of people talking to our bot from all over the world," James Finch says. "When we wanted to try something new, we didn't have to wait long to see whether it worked. We immediately got this deluge of feedback so that we could make any needed adjustments. One of the biggest things we learned is that what people really want to talk about is their personal experiences."

Sarah and James Finch, who married in 2019, are the ultimate computer power couple. They met at age 13 in a math class in their hometown of Grand Blanc, Michigan. They were dating by high school, bonding over a shared love of computer programming. As undergraduates at Michigan State University, they worked together on a joint passion for programming computers to speak more naturally with humans.

"If we can create more flexible and robust dialogue capability in machines," Sarah Finch explains, "a more natural, conversational interface could replace pointing, clicking and hours of learning a new software interface. Everyone would be on a more equal footing because using technology would become easier."

She hopes to pursue a career in enhancing computer dialogue capabilities with private industry after receiving her PhD.

James Finch is most passionate about the intellectual aspects of solving problems and is leaning towards a career in academia after receiving his PhD.

The Alexa Prize deadlines required the couple to work many 60-hour-plus weeks on developing Emora's framework, but they didn't consider it a grind. "I've enjoyed every day," James Finch says. "Doing this kind of dialogue research is our dream and we're living it. We are making something new that will hopefully be useful to the world."

They chose to come to Emory for graduate school because of Choi, an expert in natural language processing, and Eugene Agichtein, professor in the Department of Computer Science and an expert in information retrieval.

Emora was designed not just to answer questions, but as a social companion.

A caring chatbot was an essential requirement for Choi. At the end of every team meeting, he asks one member to say something about how the others have inspired them. "When someone sees a bright side in us, and shares it with others, everyone sees that side and that makes it even brighter," he says.

Choi's enthusiasm is also infectious.

Growing up in Seoul, South Korea, he knew by the age of six that he wanted to design robots. "I remember telling my mom that I wanted to make a robot that would do homework for me so I could play outside all day," he recalls. "It has been my dream ever since. I later realized that it was not the physical robot, but the intelligence behind the robot that really attracted me."

The original Emora was built on a behavioral mathematical model similar to a flowchart and equipped with several natural language processing models. Depending on what people said to the chatbot, the machine chose which path of the conversation to go down. While the system was good at chitchat, the longer a conversation went on, the greater the chance that the system would miss a social-linguistic nuance and the conversation would go off the rails, diverting from the logical thread.
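The flowchart-style design described above can be pictured as a small state machine: each conversational state maps keywords in the user's input to a reply and a next state. The following is a minimal sketch of that idea; the states, keywords and replies are hypothetical illustrations, not Emora's actual content.

```python
class FlowchartBot:
    """Toy flowchart-style dialogue manager (illustrative only)."""

    def __init__(self):
        # state -> list of (trigger keyword, reply, next state)
        self.transitions = {
            "start": [
                ("movies", "What movie did you see recently?", "movies"),
                ("food", "What's your favorite food?", "food"),
            ],
            "movies": [
                ("liked", "Glad you enjoyed it! Who was in it?", "movies"),
            ],
            "food": [
                ("pizza", "Pizza is popular! What do you like about it?", "food"),
            ],
        }
        self.state = "start"

    def respond(self, utterance):
        for keyword, reply, next_state in self.transitions.get(self.state, []):
            if keyword in utterance.lower():
                self.state = next_state
                return reply
        # Off-script input: the rigid flowchart has no good answer,
        # which is how a long conversation can "go off the rails".
        return "Interesting! Tell me more."

bot = FlowchartBot()
print(bot.respond("I love talking about food"))  # -> "What's your favorite food?"
print(bot.respond("Definitely pizza"))           # -> "Pizza is popular! What do you like about it?"
```

The weakness the team describes is visible here: any utterance that misses a scripted keyword falls through to a generic fallback, and the conversation loses its thread.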

This year, the Emory team designed Emora so that she could go beyond a script and make logical inferences. Rather than a flowchart, the new system breaks a conversation down into concepts and represents them with a symbolic graph. A logical inference engine allows Emora to connect the graph of an ongoing conversation to other symbolic graphs that represent a bank of knowledge and common sense. The longer a conversation continues, the more her ability to make logical inferences grows.
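The concept-graph approach can be sketched in miniature: both the ongoing conversation and background knowledge are stored as (subject, relation, object) triples, and an inference step joins the conversation graph to the knowledge graph to derive new facts. The triples and the single rule below are made-up illustrations, not Emora's actual knowledge base or inference engine.

```python
# Conversation so far, as concept triples.
conversation = {("user", "likes", "pizza")}

# Background knowledge graph.
knowledge = {
    ("pizza", "is_a", "italian_food"),
    ("italian_food", "pairs_with", "wine"),
}

def infer(conv, kb):
    """Derive new triples by chaining conversation facts into the knowledge graph."""
    derived = set()
    for subj, rel, obj in conv:
        if rel == "likes":
            for s, r, o in kb:
                if s == obj and r == "is_a":
                    # If the user likes pizza and pizza is Italian food,
                    # infer the user may like Italian food in general.
                    derived.add((subj, "may_like", o))
    return derived

print(infer(conversation, knowledge))
# -> {('user', 'may_like', 'italian_food')}
```

Because inference operates on graphs rather than scripted branches, each new user utterance adds triples that can combine with existing knowledge, which is why the system's inferential reach grows as a conversation continues.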

Sarah and James Finch worked on the engineering of the new Emora system, as well as designing logic structures and implementing related algorithms. Undergraduates Sophy Huang, Daniil Huryn and Mack Hutsell focused on developing dialogue content and conversational scripts for integrating within the chatbot. Graduate student Han He focused on structure parsing, including recent advances in the technology.

"A computer cannot deal with ambiguity; it can only deal with structure," Han He explains. "Our parser turns the grammar of a sentence into a graph, a structure like a tree, that describes what a chatbot user is saying to the computer."
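The kind of structure He describes can be illustrated with a tiny hand-written parse: a flat sentence becomes a tree of (head, relation, dependent) arcs. The relation labels below follow common dependency-grammar conventions; the parse is written by hand to show the target structure, not produced by a real parser.

```python
sentence = "I really love pizza"

# Each arc: (head word, grammatical relation, dependent word)
parse = [
    ("love", "nsubj", "I"),        # "I" is the subject of "love"
    ("love", "advmod", "really"),  # "really" modifies "love"
    ("love", "obj", "pizza"),      # "pizza" is the object of "love"
]

def root(arcs):
    """The root of the tree is the head that is never anyone's dependent."""
    heads = {h for h, _, _ in arcs}
    dependents = {d for _, _, d in arcs}
    return (heads - dependents).pop()

print(root(parse))  # -> "love"
```

Once the sentence is in this form, downstream components can read off unambiguous facts (who does what to whom) instead of working with a raw string.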

He is passionate about language. Growing up in a small city in central China, he studied Japanese with the goal of becoming a linguist. His family was low income so he taught himself computer programming and picked up odd programmer jobs to help support himself. In college, he found a new passion in the field of natural language processing, or using computers to process human language.

His linguistic background enhances his technological expertise. "When you learn a foreign language, you get new insights into the role of grammar and word order," He says. "And those insights can help you to develop better algorithms and programs to teach computers how to understand language. Unfortunately, many people working in natural language processing focus primarily on mathematics without realizing the importance of grammar."

After getting his master's at the University of Houston, He chose to come to Emory for a PhD to work with Choi, who also emphasizes linguistics in his approach to natural language processing. He hopes to make a career in using artificial intelligence as an educational tool that can help give low-income children an equal opportunity to learn.

A love of language also brought senior Mack Hutsell into the fold. A native of Houston, he came to Emory's Oxford College to study English literature. His second love is computer programming and coding. When Hutsell discovered the digital humanities, using computational methods to study literary texts, he decided on a double major in English and computer science.

"I enjoy thinking about language, especially language in the context of computers," he says.

Choi's Natural Language Processing Lab and the Emora project were a natural fit for him.

Like the other undergraduates on the team, Hutsell did miscellaneous tasks for the project while also creating content that could be injected into Emora's real-world knowledge graph. On the topic of movies, for instance, he started with an IMDB dataset. The team had to combine concepts from possible conversations about the movie data in ways that would fit into the knowledge graph template and generate unique responses from the chatbot. "Thinking about how to turn metadata and numbers into something that sounds human is a lot of fun," Hutsell says.
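The step Hutsell describes, turning metadata into something that sounds human, can be sketched as template filling over a structured record. The movie record and template wording below are made up for illustration and are not drawn from the team's actual IMDB data or response generator.

```python
# A hypothetical metadata record, in the spirit of an IMDB-derived dataset.
movie = {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": "science fiction",
}

def movie_response(record):
    # A fixed template fills in fields from the metadata record,
    # turning rows of data into a conversational turn.
    return (
        f"Oh, {record['title']}! I hear that {record['year']} "
        f"{record['genre']} film by {record['director']} is great. "
        "What did you think of it?"
    )

print(movie_response(movie))
```

In practice the hard part, as the article notes, is combining such records with the surrounding conversation so the chatbot's responses fit the knowledge graph rather than reading like a database dump.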

Language was also a key draw for senior Daniil Huryn. He was born in Belarus, moved to California with his family when he was four, and then returned to Belarus when he was 10, staying until he completed high school. He speaks English, Belarusian and Russian fluently and is studying German.

"In Belarus, I helped translate at my church," he says. "That got me thinking about how different languages work differently and that some are better at saying different things."

Huryn excelled in computer programming and astronomy in his studies in Belarus. His interests also include reading science fiction and playing video games. He began his Emory career on the Oxford campus, and eventually decided to major in computer science and minor in physics.

For the Emora project, he developed conversations about technology, including an AI component, and another on how people were adapting to life during the pandemic.

"The experience was great," Huryn says. "I helped develop features for the bot while I was taking a course in natural language processing. I could see how some of the things I was learning about were coming together into one package to actually work."

Team member Sophy Huang, also a senior, grew up in Shanghai and came to Emory planning to go down a pre-med track. She soon realized, however, that she did not have a strong enough interest in biology and decided on a dual major of applied mathematics and statistics and psychology. Working on the Emora project also taps into her passions for computer programming and developing applications that help people.

"Psychology plays a big role in natural language processing," Huang says. "It's really about investigating how people think, talk and interact and how those processes can be integrated into a computer."

Food was one of the topics Huang developed for Emora to discuss. "The strategy was first to connect with users by showing understanding," she says.

For instance, if someone says pizza is their favorite food, Emora would acknowledge their interest and ask what it is about pizza that they like so much.

"By continuously acknowledging and connecting with the user, asking for their opinions and perspectives and sharing her own, Emora shows that she understands and cares," Huang explains. "That encourages them to become more engaged and involved in the conversation."
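The acknowledge-then-ask strategy Huang describes can be sketched as a small pattern: detect a stated preference, acknowledge it, and ask a follow-up about it. The pattern and wording below are illustrative only, not Emora's actual dialogue code.

```python
import re

def acknowledge(utterance):
    """Acknowledge a stated food preference and ask a follow-up question."""
    match = re.search(r"my favorite food is (\w+)", utterance.lower())
    if match:
        food = match.group(1)
        # Acknowledge the preference, then invite the user to elaborate.
        return (f"{food.capitalize()} is a great choice! "
                f"What do you like most about {food}?")
    return "Tell me more about that!"

print(acknowledge("My favorite food is pizza"))
# -> "Pizza is a great choice! What do you like most about pizza?"
```

The follow-up question is what keeps the user engaged: it signals that the bot registered the preference and hands the conversational turn back to them.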

The Emora team members are still at work putting the finishing touches on their chatbot.

"We created most of the system that has the capability to do logical thinking, essentially the brain for Emora," Choi says. "The brain just doesn't know that much about the world right now and needs more information to make deeper inferences. You can think of it like a toddler. Now we're going to focus on teaching the brain so it will be on the level of an adult."

The team is confident that their system works and that they can complete full development and integration to launch beta testing sometime next spring.

Choi is most excited about the potential to use Emora to support first-year college students, answering questions about their day-to-day needs and directing them to the proper human staff or professor as appropriate. For larger issues, such as common conflicts that arise in group projects, Emora could also serve as a starting point by sharing how other students have overcome similar issues.

Choi also has a longer-term vision that the technology underlying Emora may one day be capable of assisting people dealing with loneliness, anxiety or depression. "I don't believe that socialbots can ever replace humans as social companions," he says. "But I do think there is potential for a socialbot to sympathize with someone who is feeling down, and to encourage them to get help from other people, so that they can get back to the cheerful life that they deserve."

Read more:

Emory students advance artificial intelligence with a bot that aims to serve humanity - SaportaReport