CORRECTION – OMNIQ’s Artificial Intelligence-Based Quest Shield Solution Selected by the Talmudical Academy of Baltimore – GlobeNewswire

SALT LAKE CITY, July 10, 2020 (GLOBE NEWSWIRE) -- In a release issued under the same headline on June 1, 2020 by OMNIQ, Inc. (OTCQB:OMQS), please be advised that the second paragraph as originally issued contained certain inaccuracies, not related to financial results or projections, which have been corrected below.

OMNIQ, Inc. (OTCQB:OMQS) (OMNIQ or the Company), announces that it has been selected to deploy its Quest Shield campus safety solution at the Talmudical Academy of Baltimore in Maryland.

The Quest Shield security package uses the Company's AI-based SeeCube technology platform, a ground-breaking cloud-based/on-premise security solution for Safe Campus/School applications. The platform provides unique AI-based computer vision technology and software to gather real-time vehicle data, enabling the Quest Shield to identify and record images of approaching vehicles, including color, make and license plate information. The license plate is then compared against the school's internal watch list to provide immediate notifications of unauthorized vehicles to security and administrative personnel. In addition to providing a vehicle identification and recognition solution to the Talmudical Academy, the Quest Shield comprehensive security platform addresses other security concerns, including controlling access to the buildings, visitor management, and the ability to pre-register guests for school activities.

Additionally, as part of COVID-19 mitigation, parents in Maryland will be asked to take and record their child's temperature each day before they leave for school. Quest Shield will automate this process by providing parents an online form where they may record the temperature. All Talmudical Academy students will be equipped with an ID tag bearing a QR code that can be read with a barcode scanner. As students enter campus, faculty equipped with Quest handheld scanners will read the code to confirm that the student's temperature has been taken that day; if the form has not been filled in, faculty will check temperatures before allowing students inside.
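The screening flow above amounts to: scan a QR-coded ID tag, look up whether today's temperature form was submitted, and route the student accordingly. A hedged sketch with invented student IDs and a hypothetical `screen_student` helper (the real system's data model is not described in the release):

```python
# Hypothetical sketch of the described gate check; IDs and data are invented.

# Student IDs whose parents submitted the online temperature form today.
forms_submitted_today = {"S-1001", "S-1002"}

def screen_student(scanned_id: str) -> str:
    """Decide the gate action after a handheld scanner reads an ID tag's QR code."""
    if scanned_id in forms_submitted_today:
        return "admit"  # temperature already recorded at home
    return "check temperature on site"  # form missing; staff screen first
```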

Shai Lustgarten, CEO of OMNIQ, commented: "It is our privilege to work with the Talmudical Academy to provide our solution to enhance safety at their Baltimore campus. Quest Shield is an extension of the homeland security solution we designed for the Israeli authorities to fight terrorism and save lives."

Rabbi Yaacov Cohen, Executive Director, Talmudical Academy of Baltimore, commented: "Concern about campus safety and the safety of our students and faculty drove the Talmudical Academy to seek ways to implement new strategies aimed at preventing crimes and violence that may be committed on the school grounds. The unfortunate reality today is that situations we could never imagine just a few years ago are happening now with increasing regularity. Most security systems currently deployed on other campuses are good at recording events after crimes have been committed. With Quest Shield, we have an opportunity to alert personnel and law enforcement at the first sign of violence."

Mr. Lustgarten added: "The Quest Shield has been tailored to provide a proactive solution to improve security and safety in schools and on campuses, as well as at community centers and places of worship in the U.S. that have unfortunately become targets for ruthless attacks. We're pleased to work with a forward-thinking organization like the Talmudical Academy, and it is gratifying that the Academy selected the Quest Shield platform to strengthen its security precautions."

Additionally, many schools and communities are expressing concern around children returning to school in the fall due to COVID-19. With that in mind, Talmudical Academy will also employ the Quest Shield to provide an automated screening process to confirm that students have had their temperatures checked, per Maryland regulation, upon their arrival on campus and prior to them entering the school facilities.

Mr. Lustgarten concluded: "We are proud to be able to improve student safety in the U.S., as well as in other vulnerable communities. Quest Shield has previously been implemented by a pre-K through Grade 12 school in Florida and at a Jewish Community Center in Salt Lake City. We look forward to working closely with the Academy and other institutions to promote the health and safety of students, faculty and support personnel."

About OMNIQ Corp.

OMNIQ Corp. (OMQS) provides computerized and machine vision image processing solutions that use patented and proprietary AI technology to deliver data collection, real-time surveillance and monitoring for supply chain management, homeland security, public safety, traffic & parking management and access control applications. The technology and services provided by the Company help clients move people, assets and data safely and securely through airports, warehouses, schools, national borders and many other environments.

OMNIQ's customers include government agencies and leading Fortune 500 companies from several sectors, including manufacturing, retail, distribution, food and beverage, transportation and logistics, healthcare, and oil, gas, and chemicals. Since 2014, annual revenues have grown to more than $50 million from clients in the USA and abroad.

The Company currently addresses several billion-dollar markets, including the Global Safe City market, forecast to grow to $29 billion by 2022, and the Ticketless Safe Parking market, forecast to grow to $5.2 billion by 2023.

Information about Forward-Looking Statements

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995. Statements in this press release relating to plans, strategies, economic performance and trends, projections of results of specific activities or investments, and other statements that are not descriptions of historical facts may be forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934.

This release contains forward-looking statements that include information relating to future events and future financial and operating performance. The words "anticipate," "may," "would," "will," "expect," "estimate," "can," "believe," "potential" and similar expressions and variations thereof are intended to identify forward-looking statements. Forward-looking statements should not be read as a guarantee of future performance or results, and will not necessarily be accurate indications of the times at, or by, which that performance or those results will be achieved. Forward-looking statements are based on information available at the time they are made and/or management's good faith belief as of that time with respect to future events, and are subject to risks and uncertainties that could cause actual performance or results to differ materially from those expressed in or suggested by the forward-looking statements. Important factors that could cause these differences include, but are not limited to: fluctuations in demand for the Company's products, particularly during the current health crisis; the introduction of new products; the Company's ability to maintain customer and strategic business relationships; the impact of competitive products and pricing; growth in targeted markets; the adequacy of the Company's liquidity and financial strength to support its growth; the Company's ability to manage credit and debt structures from vendors, debt holders and secured lenders; the Company's ability to successfully integrate its acquisitions; and other information that may be detailed from time to time in OMNIQ Corp.'s filings with the United States Securities and Exchange Commission. Examples of such forward-looking statements in this release include, among others, statements regarding revenue growth, driving sales, operational and financial initiatives, cost reduction and profitability, and simplification of operations.
For a more detailed description of the risk factors and uncertainties affecting OMNIQ Corp., please refer to the Company's recent Securities and Exchange Commission filings, which are available at http://www.sec.gov. OMNIQ Corp. undertakes no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, unless otherwise required by law.

Investor Contact:
John Nesbett / Jen Belodeau
IMS Investor Relations
203.972.9200
jnesbett@institutionalms.com

AI Healthcare Stocks Will Mint The Worlds First Trillionaire – Yahoo Finance

You won't read about these stocks in the mainstream media. The average investor doesn't even know they exist. But don't let that fool you: a small, hidden group of companies is figuring out how to merge artificial intelligence with medical technology.

Some folks are even scared of it. They think AI could eventually try to wipe out the human race. They imagine human-like robots such as Ava from the film Ex Machina, or Skynet, the computer program that tried to exterminate humanity in the Terminator movies.

AI in real life isn't what you see in the movies. It's not about robots or computers that outthink and enslave humans. It's about computers with mind-boggling processing power that solve problems faster than teams of PhD scientists ever could.

Other investors ignore AI because they assume its impact won't be felt for decades. But AI is already improving your life in ways you probably don't even realize. AI is why Netflix (NFLX) is so good at recommending movies, and why Spotify (SPOT) is so good at recommending music that suits your tastes. It's also how Amazon's (AMZN) Alexa can tell you everything from the weather to who America's fourth president was in just seconds. AI is also how Tesla's (TSLA) Model X can navigate highway traffic on its own without you laying a finger on the steering wheel!

These breakthroughs are all thanks to AI. But they're just a taste of what's to come.

According to tech entrepreneur and Dallas Mavericks owner Mark Cuban, the world's first trillionaire won't be a hedge fund manager, oil baron, or social media tycoon. It will be someone who masters AI.

A trillion dollars is an almost unfathomable amount of money. To put it in perspective, Amazon founder Jeff Bezos, the world's richest man, is worth around $117 billion. That's more than the annual economic output of Ecuador. And yet, a trillionaire would be worth more than eight times as much as Bezos!

But here's the thing. You don't have to be a genius inventor or entrepreneur to strike gold in AI. Just like prior megatrends, everyday investors stand to make millions off of AI.

According to ARK Invest, AI could add $30 trillion to the global equity markets over the next two decades. That's almost as much as the entire US stock market is worth today!

And the best way to take advantage isn't with traditional AI companies. As I mentioned above, it's with AI healthcare stocks. The fusion of AI and healthcare is one of the most lucrative opportunities I've come across in my entire career.

Researchers at MIT have already used AI to identify a powerful new antibiotic compound. Scientists in China were able to recreate and copy the coronavirus genome sequence in just one month! Chinese tech giant Alibaba (BABA) recently created a new AI algorithm that can diagnose the coronavirus in as little as 20 seconds. That's 45 times faster than humans can. And it's reportedly 96% accurate.

Insilico Medicine used AI to successfully identify thousands of molecules for potential medications in just four days. Additionally, the Food and Drug Administration recently approved the use of an AI-driven diagnostic for COVID-19 developed by AI radiology company Behold.ai. The tool analyzes lung x-rays and provides radiologists with a tentative diagnosis as soon as the image is captured, reducing time and expense.

In short, AI will likely be the reason we never experience an outbreak like the coronavirus again. But that's certainly not the only way AI is revolutionizing healthcare.

AI is also being used to identify drug candidates that could be repurposed for new uses. It can also help medical professionals parse data faster than ever before. I cannot overstate the importance of this. Every year, 1.2 billion unstructured clinical documents are created, containing a staggering amount of data. And that's only going to increase: the amount of medical data is poised to double every couple of months! It's nearly impossible to search and make sense of this data without the help of AI.

Genomics companies will play a major role in this revolution, and AI will be central to their work.

You see, analyzing genomic sequences takes time and a ton of computing power. AI rapidly accelerates this process. It greatly reduces the time it takes to develop valuable drugs, drives down drug development costs, and increases the success rate of trials.

Money is pouring into AI companies at a breathtaking rate. According to CB Insights, $4 billion was invested in private healthcare AI startups last year, across 367 deals. That was the most money of any sector!

It's also a huge spike from 2018, when $2.7 billion was invested across 264 deals. It's easy to see why venture capitalists (VCs) are betting so big on healthcare AI. According to Grand View Research, the market is growing at nearly 42% per year! By 2025, it's projected to be a $31 billion industry. When an industry grows this fast, fortunes stand to be made.

You can get in on the ground floor of this trend by buying the right "AI healthcare stocks." I'm not talking about Microsoft (MSFT), Amazon (AMZN), or any other blue-chip tech company using AI for its healthcare initiatives. These companies are already behemoths. They don't offer explosive upside.

So, I wouldn't focus on the usual suspects. Instead, pay attention to the smaller AI healthcare stocks, many of which have gone public recently. These are still unknown to most of the investing world, and they offer the best chance to multiply your money in the coming months.

Article by Justin Spittler, Mauldin Economics

Even the Best AI Models Are No Match for the Coronavirus – WIRED

The stock market appears strangely indifferent to Covid-19 these days, but that wasn't true in March, as the scale and breadth of the crisis hit home. By one measure, it was the most volatile month in stock market history; on March 16, the Dow Jones average fell almost 13 percent, its biggest one-day decline since 1987.

To some, the vertigo-inducing episode also exposed a weakness of quantitative (or quant) trading firms, which rely on mathematical models, including artificial intelligence, to make trading decisions.

Some prominent quant firms fared particularly badly in March. By mid-month, some Bridgewater Associates funds had fallen 21 percent for the year to that point, according to a statement posted by the company's co-chairman, Ray Dalio. Vallance, a quant fund run by DE Shaw, reportedly lost 9 percent through March 24. Renaissance Technologies, another prominent quant firm, told investors that its algorithms misfired in response to the month's market volatility, according to press accounts. Renaissance did not respond to a request for comment. A spokesman for DE Shaw could not confirm the reported figure.

The turbulence may reflect a limit with modern-day AI, which is built around finding and exploiting subtle patterns in large amounts of data. Just as algorithms that grocers use to stock shelves were flummoxed by consumers' sudden obsession with hand sanitizer and toilet paper, those that help hedge funds wring profit from the market were confused by the sudden volatility of panicked investors.

In finance, as in all things, the best AI algorithm is only as good as the data it's fed.

Andrew Lo, a professor at MIT and the founder and chairman emeritus of AlphaSimplex, a quantitative hedge fund based in Cambridge, Massachusetts, says quantitative trading strategies have a simple weakness. "By definition a quantitative trading strategy identifies patterns in the data," he says.

Lo notes that March bears similarities to a meltdown among quantitative firms in 2007, in the early days of the financial crisis. In a paper published shortly after that mini-crash, Lo concluded that the synchronized losses among hedge funds betrayed a systemic weakness in the market. "What we saw in March of 2020 is not unlike what happened in 2007, except it was faster, it was deeper, and it was much more widespread," Lo says.

Zura Kakushadze, president of Quantigic Solutions, describes the March episode as a "quant bust" in an analysis of the events posted online in April.

Kakushadze's paper looks at one form of statistical arbitrage, a common method of mining market data for patterns that quant funds exploit through many frequent trades. He points out that even quant funds that employed a dollar-neutral strategy, meaning they bet equally on stocks rising and falling, did poorly in the rout.
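For readers unfamiliar with the term, "dollar-neutral" just means the dollars held long equal the dollars sold short, so the book profits (or loses) on relative moves between stocks rather than on the market's overall direction. A toy illustration, with invented tickers and position sizes (not taken from Kakushadze's paper):

```python
# Toy dollar-neutral book; tickers and position sizes are invented.
longs = {"AAA": 50_000, "BBB": 50_000}     # dollars held long
shorts = {"CCC": -60_000, "DDD": -40_000}  # dollars held short (negative)

net_exposure = sum(longs.values()) + sum(shorts.values())    # zero by design
gross_exposure = sum(longs.values()) - sum(shorts.values())  # capital at risk

# With net exposure of zero, a uniform market move nets out; losses in the
# March rout came instead from the long/short spread moving the wrong way.
```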

In an interview, Kakushadze says the bust shows AI is no panacea during extreme market volatility. "I don't care whether you're using AI, ML, or anything else," he says. "You're gonna break down no matter what."

In fact, Kakushadze suggests that quant funds that use overly complex and opaque AI models may have suffered worse than others. Deep learning, a form of AI that has taken the tech world by storm in recent years, for instance, involves feeding data into neural networks that are difficult to audit. "Machine learning, and especially deep learning, can have a large number of often obscure (uninterpretable) parameters," he writes.

Ernie Chan, managing member of QTS Capital Management and the author of several books on machine trading, agrees that AI is no match for a rare event like the coronavirus.

"It's easy to train a system to recognize cats in YouTube videos because there are millions of them," Chan says. In contrast, only a few such large swings in the market have occurred before. "You can count [these huge drops] on one hand. So it's not possible to use machine learning to learn from those signals."

Still, some quant funds did a lot better than others during March's volatility. The Medallion Fund operated by Renaissance Technologies, which is restricted to employees' money, has reportedly seen 24 percent gains for the year to date, including a 9 percent lift in March.

Facebook is using AI to identify suicidal thoughts — but it’s not … – Fox News

For many of its nearly 2 billion users, Facebook is the primary channel of communication, a place where they can share their thoughts, post pictures and discuss every imaginable topic of interest.

Including suicide.

Six years ago, Facebook posted a page offering advice on how to help people who post suicidal thoughts on the social network. But in the year since it made its live-streaming feature, Facebook Live, available to all users, Facebook has seen some people use its technology to let the world watch them kill themselves.

TOO MUCH SOCIAL MEDIA USE LINKED TO FEELINGS OF ISOLATION

After at least three users committed suicide on Facebook Live late last year, the company's chairman and CEO, Mark Zuckerberg, addressed the issue in the official company manifesto he posted in February:

"To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we've built infrastructure to give their friends and community tools that could save their life.

"There are billions of posts, comments and messages across our services each day, and since it's impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events, like suicides, some live streamed, that perhaps could have been prevented if someone had realized what was happening and reported them sooner. These stories show we must find a way to do more."

Now, in its effort to do more, the company is using artificial intelligence and pattern recognition to identify suicidal thoughts in posts and live streams and to flag those posts for a team that can follow up, typically via Facebook Messenger.

FACEBOOK REPORTS JOURNALISTS TO THE COPS FOR REPORTING CHILD PORN TO FACEBOOK

"We're testing pattern recognition to identify posts as very likely to include thoughts of suicide," product manager Vanessa Callison-Burch, researcher Jennifer Guadagno and head of global safety Antigone Davis wrote in a blog post.

"Our Community Operations team will review these posts and, if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet."

Using artificial intelligence and pattern recognition, Facebook will monitor millions of posts to identify common behaviors among people at risk of suicide, something a human intervention expert could never do at that scale.

FACEBOOK ADDS SUICIDE-PREVENTION TOOLS FOR LIVE VIDEO

But it still doesn't go far enough, some experts say.

Cheryl Karp Eskin, program director at Teen Line, said using artificial intelligence (AI) to identify patterns holds great promise in detecting expressions of suicidal thoughts, but it won't necessarily decrease the number of suicides.

There has been very little progress in preventing suicides in the last 50 years. Suicide is the second leading cause of death among 15- to 29-year-olds, and the rate in that age group continues to rise.

Eskin expressed concerns that the technology might wrongly flag posts, or that users might hide their feelings if they knew a machine learning algorithm was watching them.

A TECHNICAL GLITCH LEFT SOME FACEBOOK USERS LOCKED OUT OF THEIR ACCOUNTS

"AI is not a substitute for human interaction, as there are many nuances of speech and expression that a machine may not understand," she said. "There are people who are dark and deep, but not suicidal. I also worry that people will shut down if they are identified incorrectly and not share some of their feelings in the future."

Joel Selanikio, MD, an assistant professor at Georgetown University who started the AI-powered company Magpi, said Facebook has a large data set of users, which helps AI parse language constantly and enables it to work more effectively.

But even if AI helps Facebook identify suicidal thoughts, that doesn't mean it can help determine the best approach for prevention.

FIFTH-GRADER HITS POLICE FACEBOOK SITE FOR EMERGENCY HOMEWORK HELP

"Right now," Selanikio said, "my understanding is that it just tells the suicidal person to seek help. I can imagine other situations, for example in the case of a minor, where the system notifies the parents. Or in the case of someone under psychiatric care, this might alert the clinician."

Added Wendy Whitsett, a licensed counselor: "I would like to learn more about the plan for follow-up support after the crisis has ended, and helping the user obtain services and various levels of support utilizing professional and peer support, as well as support from friends, neighbors, pastors, and others."

"I am also interested to know if the algorithms are able to detect significant life events that would indicate increased risk factors and offer assistance with early intervention."

Technology has moved from offering assistance to people who view others' suicidal posts to using artificial intelligence and pattern recognition to track and flag the posts automatically. But that, the experts say, is just the beginning. Facebook still has a long way to go.

Next, they hope, Facebook will be able to use AI to predict behavior and intervene in real-time to help those in need.

Instagram Turns to AI to Stop Cyberbullying on Its Platform – Government Technology

The artificial intelligence tool works by identifying words and phrases that have been reported as offensive in the past. It then allows the author to rework their comment before posting it.

(TNS) Photo-sharing social media app Instagram has a new feature that uses artificial intelligence against cyberbullying and offensive comments.

The new feature began rolling out to the app's billions of users on Monday.

The AI operates from a list of words and phrases that have been reported as offensive in the past.

If the AI detects offensive language in a comment, it will send a prompt to the writer and give them a chance to edit or delete their comment before it is posted.
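Instagram has not published the implementation, but the behavior described (match a draft comment against previously reported phrases, then prompt rather than block) can be sketched roughly like this; the phrase list and the `pre_post_check` name are invented for illustration:

```python
# Hypothetical sketch of the described comment check; phrases are invented.
REPORTED_PHRASES = {"you are worthless", "nobody likes you"}

def pre_post_check(comment: str) -> str:
    """Return the action the app would take for a draft comment."""
    text = comment.lower()
    if any(phrase in text for phrase in REPORTED_PHRASES):
        # The author keeps control: they can edit, delete, or post anyway.
        return "prompt author to reconsider"
    return "post"
```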

The app hopes to encourage users to pause and reconsider their words before posting.

In order to create the new system, Instagram partnered with suicide prevention programs.

Best-selling author and Smith College educator Rachel Simmons told Good Morning America that it was a great first step.

"We want to see social media platforms like Instagram stopping bullies before they start," she said.

The new tool is the latest arrow in Instagram's anti-bullying quiver. In October, the app launched Restrict, a feature that allows users to easily shadow-ban other users who may post bullying or offensive comments.

©2019 New York Daily News. Distributed by Tribune Content Agency, LLC.

Where it Counts, U.S. Leads in Artificial Intelligence – Department of Defense

When it comes to advancements in artificial intelligence technology, China does have a lead in some places, like spying on its own people and using facial recognition technology to identify political dissenters. But those are areas where the U.S. simply isn't pointing its investments in artificial intelligence, said the director of the Joint Artificial Intelligence Center. Where it counts, the U.S. leads, he said.

"While it is true that the United States faces formidable technological competitors and challenging strategic environments, the reality is that the United States continues to lead in AI and its most important military applications," said Nand Mulchandani, during a briefing at the Pentagon.

The Joint Artificial Intelligence Center, which stood up in 2018, serves as the official focal point of the department's AI strategy.

China leads in some places, Mulchandani said. "China's military and police authorities undeniably have the world's most advanced capabilities, such as unregulated facial recognition for universal surveillance and control of their domestic population, trained on Chinese video gathered from their systems, and Chinese language text analysis for internet and media censorship."

The U.S. is capable of doing similar things, he said, but doesn't. It's against the law, and it's not in line with American values.

"Our constitution and privacy laws protect the rights of U.S. citizens, and how their data is collected and used," he said. "Therefore, we simply don't invest in building such universal surveillance and censorship systems."

The department does invest in systems that both enhance warfighter capability, for instance, and also help the military protect and serve the United States, including during the COVID-19 pandemic.

The Project Salus effort, for instance, which began in March of this year, puts artificial intelligence to work helping to predict shortages for things like water, medicine and supplies used in the COVID fight, said Mulchandani.

"This product was developed in direct work with [U.S. Northern Command] and the National Guard," he said. "They have obviously a very unique role to play in ensuring that resource shortages ... are harmonized across an area that's dealing with the disaster."

Mulchandani said what the Guard didn't have was predictive analytics on where such shortages might occur, or real-time analytics for supply and demand. Project Salus, named for the Roman goddess of safety and well-being, fills that role.

"We [now have] roughly about 40 to 50 different data streams coming into Project Salus at the data platform layer," he said. "We have another 40 to 45 different AI models that are all running on top of the platform that allow for ... the Northcom operations team ... to actually get predictive analytics on where shortages and things will occur."

As an AI-enabled tool, he said, Project Salus can be used to predict traffic bottlenecks, hotel vacancies and the best military bases to stockpile food during the fallout from a damaging weather event.

As the department pursues joint all-domain command and control, or JADC2, the JAIC is working to build in the needed AI capabilities, Mulchandani said.

"JADC2 is ... a collection of platforms that get stitched together and woven together [effectively into] a platform," Mulchandani said. "The JAIC is spending a lot of time and resources focused on building the AI components on top of JADC2. So if you can imagine a command and control system that is current and the way it's configured today, our job and role is to actually build out the AI components both from a data, AI modeling and then training perspective and then deploying those."

When it comes to AI and weapons, Mulchandani said the department and JAIC are involved there too.

"We do have projects going on under joint warfighting, which are actually going into testing," he said. "They're very tactical-edge AI, is the way I describe it. And that work is going to be tested. It's very promising work. We're very excited about it."

While Mulchandani didn't mention specific projects, he did say that while much of the JAIC's AI work will go into weapons systems, none of those right now are going to be autonomous weapons systems. The concepts of a human-in-the-loop and full human control of weapons, he said, "are still absolutely valid."

See the original post here:

Where it Counts, U.S. Leads in Artificial Intelligence - Department of Defense

The Station: Bird's improving scooter-nomics, breaking down Tesla AI day and the Nuro EC-1 – TechCrunch

The Station is a weekly newsletter dedicated to all things transportation. Sign up here: just click The Station to receive it every weekend in your inbox.

Hello readers: Welcome to The Station, your central hub for all past, present and future means of moving people and packages from Point A to Point B. I'm handing the wheel over to reporters Aria Alamalhodaei and Rebecca Bellan.

Before I completely leave though, I have to share the Nuro EC-1, a series of articles on the autonomous vehicle technology startup reported by investigative science and tech reporter Mark Harris with assistance from me and our copy editing team. This deep dive into Nuro is part of Extra Crunch's flagship editorial offerings.

As always, you can email me at kirsten.korosec@techcrunch.com to share thoughts, criticisms, opinions or tips. You also can send a direct message to me at Twitter @kirstenkorosec.

New York City finally launched its long-awaited scooter pilot in the Bronx this past week. Over 90 parking corrals specifically for e-scooters have been installed across the borough, but residents can also park in unobstructive locations on the sidewalk. Bird, Lime and Veo were the operators chosen for the pilot, each bringing their own sets of strengths.

Bird says it intends to focus on the mobility gap in the Bronx and will use its AI drop engine to ensure equitable deployment across all neighborhoods in the pilot zone. Veo is focused on safety and accessibility, bringing its Astro VS4, the first e-scooter with turn signals, to the mix, as well as its Cosmo, a seated e-scooter. Lime is also focusing on accessibility, with its Lime Able program, which offers an on-demand suite of adaptive vehicles. Lime also highlighted a safety quiz it will require new riders to take before hopping on a vehicle.

All three companies have promised to partner with community organizations to hire locally as well as to offer discounted pricing for vulnerable groups.

Not only has Bird officially launched in NYC, but it was also awarded a 12-month permit to operate 1,500 scooters in San Francisco. Well, technically it's Scoot that got the permit, but Scoot is owned by Bird and was kind of Bird's backdoor way into the city. Last month, the SFMTA asked Scoot to halt its operations just as the fresh round of scooter permits was kicking off because the company was implementing its fleet manager program with unauthorized subcontractors.

On Friday, after careful evaluation of Scoot's application, the SFMTA determined Scoot has qualified for a permit to operate. Scoot intends to have its vehicles back on the roads in the coming weeks.

Bird also officially launched its consumer e-bike, dubbed the Bird Bike (which I think is also the name of its shared e-bike). Bird hasn't had the easiest time with profitability, and really, not many scooter companies have, so this is a chance for Bird to diversify, get a piece of the $68 billion e-bike sales pie and create more brand awareness across marketplaces. The bike costs $2,229, and consumer sales will likely make up about 10% of Bird's revenue going forward, per the company's S-4 filing.

Bird (and Scoot) are now integrated with Google Maps. So is Spin, as of this week. More integrations like these, as we saw a couple weeks ago with Lime joining Moovit, demonstrate how shared micromobility is becoming more integrated with the way we think about moving around cities and planning our journeys. I heartily welcome such integrations.

Finally, Alex Wilhelm dug into new financial data released by Bird. The tl;dr: the quarterly data shows an improving economic model and a multiyear path to profitability. However, that path is fraught unless a number of scenarios all work out in concert and without a glitch, Wilhelm reports.

Rebecca Bellan

Imagine a future in which drivers don't charge their electric vehicles but instead swap out the batteries at small, roadside pods. That's the future Ample is imagining, and this week it announced a fresh $160 million funding round to scale its operations.

The internationally funded Series C was led by Moore Strategic Ventures with participation from PTT, a Thai state-owned oil and gas company, and Disruptive Innovation Fund. Existing investors Eneos, a Japanese petroleum and energy company, and Singapore's public transit operator SMRT also participated. Ample's total funding is now $230 million.

It's an interesting idea, but one that will require considerable buy-in from automakers to make it a reality, for example, by selling vehicles with either a standard battery or Ample's battery system pre-built in. But according to Ample co-founders John de Souza and Khaled Hassounah, it wouldn't be all that complicated for OEMs to separate the battery from the car.

"The marketing departments at the OEMs want to tell you that 'this is a super-duper battery that is very well integrated with the car; there's no way you can separate it,'" Hassounah said. "The truth of the matter is they're built completely separately, and that's true for almost, not almost, for every battery in the car, including a Tesla."

"Since we've built our system to be easy to interface with different vehicles, we've abstracted the battery component from the vehicle," he added.

Other deals that got our attention this week

AEye, the lidar startup, completed its reverse merger with special purpose acquisition company CF Finance Acquisition Corp. III. AEye is now a publicly traded company that trades on the Nasdaq exchange.

Canada Drives, an online car shopping and delivery platform, announced $79.4 million ($100 million CAD) in Series B funding that it will use to expand its service across Canada. The company is going to use its recent funding to keep enhancing the product, grow its inventory in existing and new markets and hire around 200 people over the next year, particularly in product development.

DigiSure, a digital insurance company that caters to modern mobility form factors like peer-to-peer marketplaces, is officially coming out of stealth to announce a $13.1 million pre-Series A funding round. The startup will use the funds to hire more than 50 engineers, data scientists, and business development, insurance and compliance specialists, as well as scale into new industry verticals and expand into Europe.

High Definition Vehicle Insurance Group, a commercial auto insurance company that is initially focused on trucking, raised $32.5 million in a Series B funding round led by Weatherford Capital, with new investors Daimler Trucks North America and McVestCo, and continued participation from Munich Re Ventures, 8VC, Autotech Ventures and Qualcomm Ventures LLC.

RepairSmith, a mobile auto repair service that sends a mechanic right to the driver's home, raised $42 million in fresh funding with the aim of expanding to all major metros by the end of 2022. The company is looking to disrupt auto servicing and repair, a massive industry that hasn't seen much change in the past 40 years.

REE Automotive was awarded $17 million from the UK government as part of a $57 million investment, coordinated through the Advanced Propulsion Centre. The investment, the company said, is in line with the UK government's ambition to accelerate the shift to zero-emission vehicles.

Swvl, a Dubai-based transit and mobility company, will be expanding into Europe and Latin America after it acquired a controlling interest in Shotl. Shotl, which operates in 22 cities across 10 countries, matches passengers with shuttles and vans heading in the same direction. The company partners with governments and municipalities to provide mobility solutions for populations that are underserved by traditional mass transit options. While Swvl declined to share the financials of the transaction, a spokesperson told TechCrunch that the company's footprint is being doubled by this acquisition.

Xos Inc., a manufacturer of electric Class 5 to Class 8 commercial vehicles, completed its business combination with NextGen Acquisition Corporation. As a result, Xos made its public debut on the Nasdaq exchange.

Regarding Tesla investigations, when it rains it pours. First, the National Highway Traffic Safety Administration opened a preliminary investigation into Tesla's Autopilot advanced driver assistance system, citing 11 incidents in which vehicles crashed into parked first responder vehicles while the system was engaged.

The Tesla vehicles involved in the collisions were confirmed to have had either Autopilot or a feature called Traffic Aware Cruise Control engaged, according to investigation documents posted on the agency's website. Most of the incidents took place after dark and occurred despite scene control measures, such as emergency vehicle lights, road cones and an illuminated arrow board signaling drivers to change lanes.

A few days later, Senators Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) asked the new chair of the Federal Trade Commission to investigate Tesla's statements about the autonomous capabilities of its Autopilot and Full Self-Driving systems. The senators expressed particular concern over Tesla misleading customers into thinking their vehicles are capable of fully autonomous driving.

"Tesla's marketing has repeatedly overstated the capabilities of its vehicles, and these statements increasingly pose a threat to motorists and other users of the road," they said. "Accordingly, we urge you to open an investigation into potentially deceptive and unfair practices in Tesla's advertising and marketing of its driving automation systems and take appropriate enforcement action to ensure the safety of all drivers on the road."

Waymo, Alphabet's self-driving arm, is seriously scaling up its autonomous trucking operations across Texas, Arizona and California. The company said it was building a dedicated trucking hub in Dallas and partnering with Ryder for fleet management services.

The Dallas hub will be a central launch point for testing not only the Waymo Driver, but also its transfer hub model, which is a mix of automated and manual trucking that optimizes transfer hubs near highways to ensure the Waymo Driver is sticking to main thoroughfares and human drivers are handling first and last mile deliveries.

Canoo is expecting 25,000 units out of its manufacturing partner VDL Nedcar's facility by 2023, CEO Tony Aquila said during the company's quarterly earnings call.

Year over year, Canoo upped its workforce from 230 to 656 total employees, 70% of which are hardware and software engineers. The startup's operating expenses have increased from $19.8 million to $104.3 million YOY, with the majority of that increase coming from R&D.

Ford, Stellantis, Toyota and Volkswagen are among the carmakers this week that have announced production cuts in response to the ongoing global shortage of semiconductors. It's been a grim week.

A brief rundown: Toyota said it anticipated a production drop of anywhere from 60,000 to 90,000 vehicles across North America in August. Then Ford joined the chorus, saying it would temporarily close its F-150 factory in Kansas City. Volkswagen told Reuters it couldn't rule out further changes to production in light of the chip shortage. And finally, Stellantis is halting production at one of its factories in France.

Tesla unveiled what it's calling the D1 computer chip to power its advanced AI training supercomputer, Dojo, at its AI Day on Thursday. According to Tesla director Ganesh Venkataramanan, the D1 has GPU-level compute with CPU connectivity and twice the I/O bandwidth of "the state of the art networking switch chips that are out there today and are supposed to be the gold standards."

Venkataramanan also revealed a training tile that integrates multiple chips to get higher bandwidth and an incredible computing power of 9 petaflops per tile and 36 terabytes per second of bandwidth. Together, the training tiles compose the Dojo supercomputer.

But there was more, of course. CEO Elon Musk also revealed that the company is developing a humanoid robot, with a prototype expected in 2022. The bot is being proposed as a non-automotive robotic use case for the company's work on neural networks and its Dojo advanced supercomputer.

Reality check: Tesla is not the first automaker, or company, to dip its toe into humanoid robot development. Honda's Asimo robot has been around for decades, Toyota and GM have their own robots, and Hyundai recently acquired robotics company Boston Dynamics.

The full rundown of Tesla's AI Day can be found here.

General Motors and AT&T will be rolling out 5G connectivity in select Chevy, Cadillac and GMC vehicles from model year 2024, in a boost that the two companies say will bring more reliable software updates, faster navigation and downloads and better coverage on roadways.

5G technology has generated a lot of hype for its promises to boost speed and reduce latency across a range of industries, a next-gen tech that everyone thought would change the world far sooner than now. That hasnt happened (yet), in part because network rollout was much slower than people anticipated. So this announcement can be taken as a clear signal that, at the very least, AT&T thinks its 5G network will be mature enough to handle millions of connected vehicles by 2024.

RubiRides, a new ride-hailing company focused on transporting kids, launched in the Washington, D.C. metro area. The ride-hailing service is designed for children ages 7 and older, but it also offers rides for seniors and people with special needs. The company was founded by Noreen Butler, who was inspired to start it after searching for transportation to support her children's busy schedules.

Continue reading here:

The Station: Bird's improving scooter-nomics, breaking down Tesla AI day and the Nuro EC-1 - TechCrunch

Meet STACI: your interactive guide to advances of AI in health care – STAT

Artificial intelligence has become its own sub-industry in health care, driving the development of products designed to detect diseases earlier, improve diagnostic accuracy, and discover more effective treatments. One recent report projected spending on health care AI in the United States will rise to $6.6 billion in 2021, an 11-fold increase from 2014.

The Covid-19 pandemic underscores the importance of the technology in medicine: In the last few months, hospitals have used AI to create coronavirus chatbots, predict the decline of Covid-19 patients, and diagnose the disease from lung scans.

Its rapid advancement is already changing practices in image-based specialties such as radiology and pathology, and the Food and Drug Administration has approved dozens of AI products to help diagnose eye diseases, bone fractures, heart problems, and other conditions. So much is happening that it can be hard for health professionals, patients, and even regulators to keep up, especially since the concepts and language of AI are new for many people.

The use of AI in health care also poses new risks. Biased algorithms could perpetuate discrimination along racial and economic lines, and lead to the adoption of inadequately vetted products that drive up costs without benefiting patients. Understanding these risks and weighing them against the potential benefits requires a deeper understanding of AI itself.

It's for these reasons that we created STACI: the STAT Terminal for Artificial Computer Intelligence. She will walk you through the key concepts and history of AI, explain the terminology, and break down its various uses in health care. (This interactive is best experienced on screens larger than a smartphone's.)

Remember, AI is only as good as the data fed into it. So if STACI gets something wrong, blame the humans behind it, not the AI!

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Originally posted here:

Meet STACI: your interactive guide to advances of AI in health care - STAT

A concept in psychology is helping AI to better navigate our world – MIT Technology Review

The concept: When we look at a chair, regardless of its shape and color, we know that we can sit on it. When a fish is in water, regardless of its location, it knows that it can swim. This is known as the theory of affordance, a term coined by psychologist James J. Gibson. It states that when intelligent beings look at the world they perceive not simply objects and their relationships but also their possibilities. In other words, the chair affords the possibility of sitting. The water affords the possibility of swimming. The theory could explain in part why animal intelligence is so generalizable: we often immediately know how to engage with new objects because we recognize their affordances.

The idea: Researchers at DeepMind are now using this concept to develop a new approach to reinforcement learning. In typical reinforcement learning, an agent learns through trial and error, beginning with the assumption that any action is possible. A robot learning to move from point A to point B, for example, will assume that it can move through walls or furniture until repeated failures tell it otherwise. The idea is that if the robot were instead first taught its environment's affordances, it would immediately eliminate a significant fraction of the failed trials it would have to perform. This would make its learning process more efficient and help it generalize across different environments.

The experiments: The researchers set up a simple virtual scenario. They placed a virtual agent in a 2D environment with a wall down the middle and had the agent explore its range of motion until it had learned what the environment would allow it to do, its affordances. The researchers then gave the agent a set of simple objectives to achieve through reinforcement learning, such as moving a certain amount to the right or to the left. They found that, compared with an agent that hadn't learned the affordances, it avoided any moves that would cause it to get blocked by the wall partway through its motion, setting it up to achieve its goal more efficiently.
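A toy version of that setup can illustrate the mechanism. The sketch below is a deliberate simplification (the 1-D world, probing loop, and function names are invented for illustration, not DeepMind's code): the agent first probes which moves a wall blocks, then excludes those blocked moves from any later trial-and-error exploration.

```python
# Toy illustration of affordance-aware exploration (the 1-D world and
# function names are invented for this sketch, not DeepMind's code).
WALL = 5  # cell the agent can never enter

def learn_affordances(positions, actions):
    """Probe each (position, action) pair once and record which are blocked."""
    blocked = set()
    for pos in positions:
        for a in actions:
            if pos + a == WALL:
                blocked.add((pos, a))
    return blocked

def afforded_actions(pos, actions, blocked):
    """Later learning only ever samples from moves the environment affords."""
    return [a for a in actions if (pos, a) not in blocked]

actions = [-1, +1]
blocked = learn_affordances(range(10), actions)
print(afforded_actions(4, actions, blocked))  # moving right into the wall is excluded
```

With the blocked moves pruned up front, a reinforcement learner never wastes a trial bumping into the wall, which is the efficiency gain the experiment measures.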

Why it matters: The work is still in its early stages, so the researchers used only a simple environment and primitive objectives. But their hope is that their initial experiments will help lay a theoretical foundation for scaling the idea up to much more complex actions. In the future, they see this approach allowing a robot to quickly assess whether it can, say, pour liquid into a cup. Having developed a general understanding of which objects afford the possibility of holding liquid and which do not, it won't have to repeatedly miss the cup and pour liquid all over the table to learn how to achieve its objective.

View post:

A concept in psychology is helping AI to better navigate our world - MIT Technology Review

AI’s inflation paradox – FT Alphaville (registration)


Many of the jobs that AI will destroy, like credit scoring, language translation, or managing a stock portfolio, are regarded as skilled, have limited human competition and are well-paid. Conversely, many of the jobs that AI cannot (yet) destroy ...

Originally posted here:

AI's inflation paradox - FT Alphaville (registration)

Why Should We Bother Building Human-Level AI? Five Experts Weigh In

Working on artificial intelligence can be a vicious cycle. It’s easy to lose track of the bigger picture when you spend an entire career developing a niche, hyper-specific AI application. An engineer might finally step away and realize that the public never actually needed such a robust system; each of the marginal improvements they’ve spent so much time on didn’t mean much in the real world.

Still, we need these engineers with lofty, yet-unattainable goals. And one specific goal still lingers on the horizon for the more starry-eyed computer scientists out there: building a human-level artificial intelligence system that could change the world.

Coming up with a definition of human-level AI (HLAI) is tough because so many people use it interchangeably with artificial general intelligence (AGI) – which is the thoughtful, emotional, creative sort of AI that exists only in movie characters like C-3PO and “Ex Machina’s” Ava.

Human-level AI is similar, but not quite as powerful as AGI, for the simple reason that many in the know expect AGI to surpass anything we mortals can accomplish. Though some see this as an argument against building HLAI, some experts believe that only an HLAI could ever be clever enough to design a true AGI; human engineers would only be necessary up to a certain point once we get the ball rolling. (Again, neither type of AI system exists, nor will they anytime soon.)

At a conference on HLAI held by Prague-based AI startup GoodAI in August, a number of AI experts and thought leaders were asked a simple question: “Why should we bother trying to create human-level AI?”

For those AI researchers who have detached from the outside world and gotten stuck in their own little loops (yes, of course we care about your AI-driven digital marketplace for farm supplies), the responses may remind them why they got into this line of work in the first place. For the rest of us, they provide a glimpse of the great things to come.

For what it’s worth, this particular panel was more of a lightning round — largely for fun, the experts were instructed to come up with a quick answer rather than taking time to deliberate and carefully choose their words.

“Why should we bother trying to create human-level AI?”

Ben Goertzel, CEO at SingularityNET and Chief Scientist at Hanson Robotics

AI is a great intellectual challenge and it also has more potential to do good than any other invention. Except superhuman AI which has even more.

Tomas Mikolov, Research Scientist At Facebook AI

[Human-Level AI will give us] ways to make life more efficient and basically guide [humanity.]

Kenneth Stanley, Professor At University Of Central Florida, Senior Engineering Manager And Staff Scientist At Uber AI Labs

I think we’d like to understand ourselves better and how to make our lives better.

Pavel Kordik, Associate Professor at Czech Technical University and Co-founder at Recombee

To create a singularity, perhaps.

Ryota Kanai, CEO at ARAYA

To understand ourselves.

More on the future of AI: Five Experts Share What Scares Them the Most About AI

Excerpt from:

Why Should We Bother Building Human-Level AI? Five Experts Weigh In

A conversation on the future of AI. – Axios

The big picture: On Thursday morning, Axios' Cities Correspondent Kim Hart and Emerging Technology Reporter Kaveh Waddell hosted a roundtable conversation to discuss the future of AI, with a focus on policy and innovation.

The conversation touched on how to balance innovation with necessary regulation, create and maintain trust with users, and prepare for the future of work.

As AI continues to become more sophisticated and more widely used, how to provide regulatory guardrails while still encouraging innovation was a focal point of the discussion.

Attendees discussed balancing regulation and innovation in the context of global competition, particularly with China.

The conversation also highlighted who is most impacted by technological development in AI, and the importance of future-proofing employment across all industries. As AI is something that touches all industries, the importance of centering the human experience in creating solutions was stressed at multiple points in the conversation.

With the accelerating development of AI, creating and maintaining trust with users, consumers, and constituents alike was central to the discussion.

Thank you SoftBank Group for sponsoring this event.

View original post here:

A conversation on the future of AI. - Axios

Codota raises $12 million for AI that suggests and autocompletes code – VentureBeat

Codota, a startup developing a platform that suggests and autocompletes Python, C, HTML, Java, Scala, Kotlin, and JavaScript code, today announced that it raised $12 million. The bulk of the capital will be spent on product R&D and sales growth, according to CEO and cofounder Dror Weiss.

Companies like Codota seem to be getting a lot of investor attention lately, and there's a reason. According to a study published by the University of Cambridge's Judge Business School, programmers spend 50.1% of their work time not programming; the other half is debugging. And the total estimated cost of debugging is $312 billion per year. AI-powered code suggestion and review tools, then, promise to cut development costs substantially while enabling coders to focus on more creative, less repetitive tasks.

Codota's cloud-based and on-premises solutions, which it claims are used by developers at Google, Alibaba, Amazon, Airbnb, Atlassian, and Netflix, complete lines of code based on millions of Java programs and individual context locally, without sending any sensitive data to remote servers. They surface relevant examples of Java APIs within integrated development environments (IDEs) including Android Studio, VSCode, IntelliJ, Webstorm, and Eclipse, and Codota's engineers vet the recommendations to ensure they've been debugged and tested.


Codota says the program analysis, natural language processing, and machine learning algorithms powering its platform learn individual best practices and warn of deviation, largely by extracting an anonymized summary of the current IDE scope (but not keystrokes or string contents) and sending it via an encrypted connection to Codota. The algorithms are trained to understand the semantic models of code not just the source code itself and trigger automatically whenever they identify useful suggestions. (Alternatively, suggestions can be manually triggered with a keyboard shortcut.)
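As a rough illustration of what summarizing a code scope without shipping its string contents might look like, here is a minimal sketch using Python's ast module. It is my own toy, not Codota's actual pipeline or data format: it keeps structural signals (method names, identifiers) and drops string literal values entirely.

```python
import ast

# Illustrative toy, not Codota's algorithm: summarize a scope by its
# structure (method calls, identifiers) while ignoring string contents,
# so nothing sensitive needs to leave the machine.
def summarize_scope(source):
    tree = ast.parse(source)
    summary = {"calls": [], "names": set()}
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            summary["calls"].append(node.func.attr)  # method name only
        elif isinstance(node, ast.Name):
            summary["names"].add(node.id)            # identifier, not its value
    return summary

snippet = "conn = db.connect('secret://host'); conn.execute('SELECT 1')"
print(summarize_scope(snippet)["calls"])  # string literals never appear
```

The same idea scales from a toy to a product: the richer the structural summary, the better the suggestions, while the raw source and its secrets stay local.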

Codota is free for individual users; the company makes money from Codota Enterprise, which learns the patterns and rules in a company's proprietary code. The free tier's algorithms are trained only on vetted open source code from GitHub, StackOverflow, and other sources.

Codota acquired competitor TabNine in December last year, and since then, its user base has grown by more than 1,000% to more than a million developers monthly. That positions it well against potential rivals like Kite, which raised $17 million last January for its free developer tool that leverages AI to autocomplete code, and DeepCode, whose product learns from GitHub project data to give developers AI-powered code reviews.

This latest funding round, which was led by e.ventures with the participation of existing investor Khosla Ventures and new investors TPY Capital and Hetz Ventures, came after seed rounds totaling just over $2.5 million. It brings Codota's total raised to over $16 million. As a part of it, e.ventures general partner Tom Gieselmann will join Codota's board of directors.

Codota is headquartered in Tel Aviv. It was founded in 2015 by Weiss and CTO Eran Yahav, a Technion professor and former IBM Watson Fellow.

Originally posted here:

Codota raises $12 million for AI that suggests and autocompletes code - VentureBeat

AI Weekly: Transform 2020 showcased the practical side of AI and ML – VentureBeat

Watch all the Transform 2020 sessions on-demand right here.

Today marked the conclusion of VentureBeat's Transform 2020 summit, which took place online for the first time in our history. Luminaries including Google Brain ethicist Timnit Gebru and IBM AI ethics leader Francesca Rossi spoke about how women are advancing AI and leading the trend of AI fairness, ethics, and human-centered AI. Twitter CTO Parag Agrawal detailed the social network's efforts to apply AI to detect fake or hateful tweets. Pinterest SVP of technology Jeremy King walked through learnings from Pinterest's explorations of computer vision to create inspirational experiences. And Unity principal machine learning engineer Cesar Romero brought clarity to the link between synthetic data sets and real-world AI model training.

That's just a sampling of the panels, interviews, and discussions to which Transform 2020 attendees had front-row seats this week. But the sessions that caught my eye were those touching on practical, tangible AI applications as opposed to theoretical ones. Research remains crucial to the field's advancement, and there's no sign it's slowing; the over 1,000 papers accepted to ICML 2020 suggest the contrary. However, production environments are perhaps the best opportunity to battle-test proposed tools and algorithms for robustness. Outcome predictions are just that: predictions. It takes real-world experimentation to know whether hypotheses will truly pan out.

Barak Turovsky, Google AI director of product for the natural language understanding team, elucidated steps Google took to mitigate gender bias in the language models powering Google Translate. Leveraging three AI models to detect gender-neutral queries and generate gender-specific translations before checking for accuracy, Google's system can provide multiple responses to translations of words like "nurse" and let users choose the best one (e.g., the masculine "enfermero" or the feminine "enfermera"). "Google is a leader in artificial intelligence, and with leadership comes the responsibility to address a machine learning bias that has multiple examples of results about race and sex and gender across many areas, including conversational AI," Turovsky said.
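The three-stage flow just described (detect a gender-neutral query, generate gender-specific candidates, keep the verified forms) can be mimicked with a toy lookup table. Everything below, word lists included, is an illustrative stand-in for Google's learned models, not their actual system.

```python
# Toy mimic of the three-stage flow: detect neutrality, generate gendered
# candidates, keep checked forms. The data is an illustrative stand-in
# for Google's learned models.
GENDER_NEUTRAL = {"nurse"}
TRANSLATIONS = {"nurse": {"masculine": "enfermero", "feminine": "enfermera"}}

def translate(word):
    if word not in GENDER_NEUTRAL:                 # stage 1: detect neutrality
        return [word]                              # toy fallback: pass through
    candidates = TRANSLATIONS.get(word, {})        # stage 2: gendered candidates
    return [t for t in candidates.values() if t]   # stage 3: keep verified forms

print(translate("nurse"))  # both forms are offered so the user can choose
```

The real pipeline replaces each dictionary lookup with a trained model, but the control flow, and the decision to surface multiple answers rather than silently pick one, is the same.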

Like Google, software company Cloudera doubled down on productization of its AI and ML technologies. Senior director of engineering Adam Warrington said it deployed a chatbot to improve customer question-and-answer experiences in under a month, leveraging proprietary data sets of client interactions, community posts, subject-matter expert guidance, and more. The underpinning models can understand relevant words and sentences within a support case and extract the right solution from the best source, whether a knowledge base article, product documentation, or community post.

For Yelp, deployment is a core part of the experimentation process, enabled by the company's Bunsen platform. Using Bunsen through a frontend user interface called Beaker, data scientists, engineers, execs, and even public relations reps can determine whether products and models have any negative impact on the growth of business metrics or if they're meeting goals. Yelp employees get the scale of being able to deploy a model to a cohort of users depending on how they want to reach them, as well as the flexibility to determine if the functionality is perhaps not optimal or, worst-case scenario, is harmful. "We have a rapid way of turning those experiences off and doing what we need to do to fix them on the backend," Yelp head of data science Justin Norman told VentureBeat. "One of the best things about what Bunsen allows us to do is to scale at speed."
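The cohort rollout and rapid shutoff described here can be sketched with deterministic hashing and a kill switch. The names and mechanism below are hypothetical, not Yelp's Bunsen API; they only show the shape of the idea.

```python
import hashlib

# Hypothetical sketch of cohort-based rollout with a kill switch, in the
# spirit of the platform described above (names are invented, not Yelp's API).
KILLED = set()  # experiments switched off after a harmful result

def in_cohort(user_id, experiment, percent):
    """Deterministically bucket users so a model reaches only a slice of traffic."""
    if experiment in KILLED:
        return False  # the rapid way of turning an experience off
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

enabled = [u for u in range(1000) if in_cohort(u, "new_ranker", 10)]
print(len(enabled))  # roughly 100 of 1000 users see the new model
KILLED.add("new_ranker")
print(in_cohort(7, "new_ranker", 10))  # False: experiment shut off everywhere
```

Hashing the experiment name together with the user ID keeps each user's assignment stable across requests while keeping different experiments independent of one another.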

When it comes to practical uses of AI and machine learning in the financial sector, Visa is at the forefront with projects that demonstrate the potential of these technologies. As a rule, the company looks for use cases where AI and ML could deliver at least a 20% to 30% efficiency increase. Its Visa Advanced Authorization platform is a case in point: It uses recurrent neural networks along with gradient boosted trees to determine the likelihood transactions are fraudulent. Melissa McSherry, a senior vice president and global head of data for Visa, said the company prevents $25 billion in annual fraud thanks to the AI it developed. We have definitely taken a use case approach to AI, she said. We dont deploy AI for the sake of AI. We deploy it because its the most effective way to solve a problem.
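Blending a sequence model's score with a tree model's score into a single fraud likelihood might look like the toy below. Both scoring functions and the blending weight are invented stand-ins for illustration, not Visa's models.

```python
# Illustrative toy of blending two model scores into one fraud likelihood,
# echoing the ensemble described above; the scoring functions are invented
# stand-ins, not Visa's models.
def sequence_score(amounts):
    """Stand-in for a recurrent model: flags sudden spending spikes."""
    if len(amounts) < 2:
        return 0.0
    return min(1.0, max(0.0, (amounts[-1] - amounts[-2]) / 1000))

def tree_score(features):
    """Stand-in for gradient boosted trees: simple rule-based score."""
    score = 0.0
    if features.get("foreign"):
        score += 0.4
    if features.get("card_not_present"):
        score += 0.3
    return score

def fraud_likelihood(amounts, features, w=0.5):
    """Weighted blend of the two signals in [0, 1]."""
    return w * sequence_score(amounts) + (1 - w) * tree_score(features)

print(fraud_likelihood([20, 30, 900], {"foreign": True}))  # spike + foreign
```

In production the recurrent network would read the full transaction sequence and the boosted trees hundreds of engineered features, but the ensemble structure, two complementary learners whose outputs are combined, is the part this sketch preserves.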

AI has a role to play in health care, as well. CommonSpirit Health, the largest not-for-profit health care provider in the country, is applying models to optimize the rounds its doctors and nurses make every day. "Based upon our analysis of thousands of patients, [if] we don't address the patient in room seven first, they're going to have to stay longer than they would need to otherwise," chief strategic innovation officer Rich Roth explained. "Using AI that way, really to accelerate our workflow, and to clearly show to our caregivers the clinical benefit of why that data is important, is a great example of how technology can help enhance care."

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Read more:

AI Weekly: Transform 2020 showcased the practical side of AI and ML - VentureBeat

Insight into AI highlights its harmful infection potential – Poultry World

Commercial poultry should be protected from the risk of contracting harmful bird flu from migrating flocks, according to new research.

Insights from a study of the devastating 2016/17 bird flu outbreak show how highly pathogenic bird flu viruses can be transmitted from wild migrating bird populations to domestic flocks and back again.

Research shows that avian influenza introduced by migratory birds has a devastating effect on commercial poultry flocks. Photo: Mark Pasveer

These viruses can readily exchange genetic material during migration with other, less harmful low-pathogenic viruses, raising the likelihood of serious outbreaks in domestic poultry and wild birds.

The study, led by a team including the Roslin Institute, representing the Global Consortium for H5N8 and Related Influenza Viruses, examined the genetic makeup of the 2016/17 bird flu virus in various birds at key stages during the flu season. The outbreak began in domestic birds in Asia before being spread via wild migratory flocks to create the largest bird flu epidemic in Europe to date. The team interpreted genetic sequence data from virus samples collected during the outbreak, together with details of where, when and in which bird species they originated. Using a computational technique known as phylogenetic inference, researchers estimated where and when the virus exchanged genetic material with other viruses in wild or domestic birds.
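As a rough intuition for the distance idea underlying such analyses (the study itself used probabilistic phylogenetic inference, not this toy metric), sequences with fewer differences are taken to share more recent ancestry. A minimal sketch with invented six-base sequences:

```python
# Toy illustration: Hamming distance between short, made-up viral sequences.
# Real phylogenetics models substitution rates and tree likelihoods; this
# only shows why sequence similarity hints at shared ancestry.
def hamming(a: str, b: str) -> int:
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

samples = {"wild_A": "ACGTAC", "farm_B": "ACGTTC", "wild_C": "TCGAAC"}

# The closest pair is the most plausible recent relationship under this metric.
pairs = [(hamming(samples[i], samples[j]), i, j)
         for i in samples for j in samples if i < j]
closest = min(pairs)
```

Scaled up to full genomes and combined with collection dates and locations, this kind of comparison is what lets researchers estimate where and when viruses exchanged genetic material.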

The virus could easily exchange genetic material with other, less harmful viruses, at times and locations corresponding to bird migratory cycles. These included viruses carried by wild birds on intersecting migratory routes and by farmed ducks in China and central Europe. Migratory birds harbouring weaker viruses are more likely to survive their journey and potentially pass disease to domestic birds, the study found.

Commenting on the results, Dr Sam Lycett of the Roslin Institute said: "Bird flu viruses can readily exchange genetic material with other influenza viruses and this, in combination with repeated transmission of viruses between domestic and wild birds, means that a viral strain can emerge and persist in wild bird populations, which carries a high risk of disease for poultry. This aids our understanding of how a pathogenic avian flu virus could become established in wild bird populations."

The research, published in Proceedings of the National Academy of Sciences, was carried out in collaboration with the Friedrich Loeffler Institute, Germany, the Erasmus University Medical Centre, Holland and the University of Edinburgh's Usher Institute and Roslin Institute. It was supported by funding from EU Horizon 2020 and others.

Here is the original post:

Insight into AI highlights its harmful infection potential - Poultry World

China’s Plan for World Domination in AI Isn’t So Crazy After All – Bloomberg

Xu Li's software scans more faces than maybe any on earth. He has the Chinese police to thank.

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China's biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China's investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: a vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and, most importantly, staunch government support that includes handing over gobs of citizens' data - something that makes Western officials squirm.

Data is key because that's how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it's much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

This flood of data will only rise. China just enshrined the pursuit of AI into a kind of national technology constitution. A state plan, issued in July, calls for the nation to become the leader in the industry by 2030. Five years from then, the government claims the AI industry will create 400 billion yuan ($59 billion) in economic activity. China's tech titans, particularly Tencent Holdings Ltd. and Baidu Inc., are getting on board. And the science is showing up in unexpected places: Shanghai's courts are testing an AI system that scours criminal cases to judge the validity of evidence used by all sides, ostensibly to prevent wrongful prosecutions.

"Data access has always been easier in China, but now people in government, organizations and companies have recognized the value of data," said Jiebo Luo, a computer science professor at the University of Rochester who has researched China. "As long as they can find someone they trust, they are willing to share it."

The AI-MATHS machine took the math portion of China's annual university entrance exam in Chengdu.

Photographer: AFP via Getty Images

Every major U.S. tech company is investing deeply as well. Machine learning -- a type of AI that lets driverless cars see, chatbots speak and machines parse scores of financial information -- demands that computers learn from raw data instead of hand-cranked programming. Getting access to that data is a permanent slog. China's command-and-control economy, and its thinner privacy concerns, mean the country can dispense video footage, medical records, banking information and other wells of data almost whenever it pleases.

Xu argued this is a global phenomenon. "There's a trend toward making data more public. For example, NHS and Google recently shared some medical image data," he said. But that example does more to illustrate China's edge.

DeepMind, the AI lab of Google's Alphabet Inc., has labored for nearly two years to access medical records from the U.K.'s National Health Service for a diagnostics app. The agency began a trial with the company using 1.6 million patient records. Last month, the top U.K. privacy watchdog declared the trial violates British data-protection laws, throwing its future into question.

Go player Lee Se-Dol, right, in a match against Google's AlphaGo, during the DeepMind Challenge Match in March 2016.

Photographer: Google via Getty Images

Contrast that with how officials handled a project in Fuzhou. Government leaders from that southeastern Chinese city of more than seven million people held an event on June 26. Venture capital firm Sequoia Capital helped organize the event, which included representatives from Dell Inc., International Business Machines Corp. and Lenovo Group Ltd. A spokeswoman for Dell characterized the event as the nation's first "Healthcare and Medical Big Data Ecology Summit."

The summit involved a vast handover of data. At the press conference, city officials shared 80 exabytes' worth of heart ultrasound videos, according to one company that participated. With the massive data set, some of the companies were tasked with building an AI tool that could identify heart disease, ideally at rates above medical experts. They were asked to turn it around by the fall.

"The Chinese AI market is moving fast because people are willing to take risks and adopt new technology more quickly in a fast-growing economy," said Chris Nicholson, co-founder of Skymind Inc., one of the companies involved in the event. "AI needs big data, and Chinese regulators are now on the side of making data accessible to accelerate AI."

Representatives from IBM and Lenovo declined to comment. Last month, Lenovo Chief Executive Officer Yang Yuanqing said he will invest $1 billion into AI research over the next three to four years.

Along with health, finance can be a lucrative business in China. In part, that's because the country has far less stringent privacy regulations and concerns than the West. For decades the government has kept a secret file on nearly everyone in China called a dangan. The records run the gamut from health reports and school marks to personality assessments and club records. This dossier can often decide a citizen's future -- whether they can score a promotion or be allowed to reside in the city where they work.

U.S. companies that partner in China stress that AI efforts, like those in Fuzhou, are for non-military purposes. Luo, the computer science professor, said most national security research efforts are relegated to select university partners. However, one stated goal of the government's national plan is for a greater integration of civilian, academic and military development of AI.

The government also revealed in 2015 that it was building a nationwide database that would score citizens on their trustworthiness, which in turn would feed into their credit ratings. Last year, China Premier Li Keqiang said 80 percent of the nation's data was in public hands and would be opened to the public, with an unspecified pledge to protect privacy. The raging popularity of live video feeds -- where Chinese internet users spend hours watching daily footage caught by surveillance video -- shows the gulf in privacy concerns between the country and the West. Embraced in China, the security cameras also reel in mountains of valuable data.

Some machine-learning researchers dispel the idea that data can be a panacea. Advanced AI operations, like DeepMind, often rely on "simulated" data, co-founder Demis Hassabis explained during a trip to China in May. DeepMind has used Atari video games to train its systems. Engineers building self-driving car software frequently test it this way, simulating stretches of highway or crashes virtually.

"Sure, there might be data sets you could get access to in China that you couldn't in the U.S.," said Oren Etzioni, director of the Allen Institute for Artificial Intelligence. "But that does not put them in a terrific position vis-a-vis AI. It's still a question of the algorithm, the insights and the research."

Historically, the country has been a lightweight in those regards. It's suffered through a "brain drain," a flight of academics and specialists out of the country. "China currently has a talent shortage when it comes to top tier AI experts," said Connie Chan, a partner at venture capital firm Andreessen Horowitz. "While there have been more deep learning papers published in China than the U.S. since 2016, those papers have not been as influential as those from the U.S. and U.K."

But China is gaining ground. The country is producing more top engineers, who craft AI algorithms for U.S. companies and, increasingly, Chinese ones. Chinese universities and private firms are actively wooing AI researchers from across the globe. Luo, the University of Rochester professor, said top researchers can get offers of $500,000 or more in annual compensation from U.S. tech companies, while Chinese companies will often double that.

Meanwhile, China's homegrown talent is starting to shine. A popular benchmark in AI research is the ImageNet competition, an annual challenge to devise a visual recognition system with the lowest error rate. Like last year, this year's top winners were dominated by researchers from China, including a team from the Ministry of Public Security's Third Research Institute.

Relentless pollution in metropolises like Beijing and Shanghai has hurt Chinese companies' ability to nab top tech talent. In response, some are opening shop in Silicon Valley. Tencent recently set up an AI research lab in Seattle.

Photographer: David Paul Morris/Bloomberg

Baidu managed to pull a marquee name from that city. The firm recruited Qi Lu, one of Microsoft's top executives, to return to China to lead the search giant's push into AI. He touted the technology's potential for enhancing China's "national strength" and cited a figure that nearly half of the bountiful academic research on the subject globally has ethnically Chinese authors, using the Mandarin term "huaren" -- a term for ethnic Chinese that echoes government rhetoric.

"China has structural advantages, because China can acquire more and better data to power AI development," Lu told the cheering crowd of Chinese developers. "We must have the chance to lead the world!"

Continue reading here:

China's Plan for World Domination in AI Isn't So Crazy After All - Bloomberg

iFLYTEK and Hancom Group Launch Accufly.AI to Help Combat the Coronavirus Pandemic – Business Wire

HEFEI, China--(BUSINESS WIRE)--Asia's leading artificial intelligence (AI) and speech technology company, iFLYTEK, has partnered with the South Korean technology company, Hancom Group, to launch the joint venture Accufly.AI in South Korea. Accufly.AI launched its AI Outbound Calling System to assist the South Korean government at no cost and provide information to individuals who have been in close contact with or have had a confirmed coronavirus case.

The AI Outbound Calling System is a smart, integrated system that is based on iFLYTEK solutions and Hancom Group's Korean-based speech recognition. The technology saves manpower and assists in the automatic distribution of important information to potential carriers of the virus and provides a mechanism for follow up with recovered patients. iFLYTEK is looking to make this technology available in markets around the world, including North America and Europe.

"The battle against the Covid-19 epidemic requires collective wisdom and sharing of best practices from the international community," said iFLYTEK Chief Financial Officer Mr. Dawei Duan. "Given the challenges we all face, iFLYTEK is continuously looking at ways to provide technologies and support to partners around the world, including in the United States, Canada, the United Kingdom, New Zealand, and Australia."

In February, the Hancom Group donated 20,000 protective masks and five thermal temperature-check devices to Anhui to help fight the epidemic.

iFLYTEK's AI technology helped stem the spread of the virus in China and will help the South Korean government conduct follow-up, identify patients with symptoms, manage self-isolated residents, and reduce the risk of cross-infection. The system also will help the government distribute important health updates, increase public awareness, and bring communities together.

"iFLYTEK is working to create a better world through artificial intelligence and seeks to do so on a global scale. iFLYTEK will maximize its technical advantages in smart services to support the international community in defeating the coronavirus," said Mr. Duan.

More:

iFLYTEK and Hancom Group Launch Accufly.AI to Help Combat the Coronavirus Pandemic - Business Wire

Here's why AI didn't save us from COVID-19 – The Next Web

When the COVID-19 pandemic began we were all so full of hope. We assumed our technology would save us from a disease that could be stymied by such modest steps as washing our hands and wearing face masks. We were so sure that artificial intelligence would become our champion in a trial by combat with the coronavirus that we abandoned any pretense of fear the moment the curve appeared to flatten in April and May. We let our guard down.

Pundits and experts back in January and February very carefully explained how AI solutions such as contact tracing, predictive modeling, and chemical discovery would lead to a truncated pandemic. Didn't most of us figure we'd be back to business as usual by mid to late June?

But June turned to July and now we're seeing record case numbers on a daily basis. August looks to be brutal. Despite playing home to nearly all of the world's largest technology companies, the US has become the epicenter of the outbreak. Other nations with advanced AI programs aren't necessarily faring much better.

Among the countries experts would consider competitive with the US in the field of AI, nearly all have lost the handle on the outbreak: China, Russia, the UK, South Korea, etc. It's bad news all the way down.

Figuring out why requires a combination of retrospect and patience. We're not far enough through the pandemic to understand exactly what's gone wrong; this thing's far too alive and kicking for a post-mortem. But we can certainly see where AI hype is currently leading us astray.

Among the many early promises made by the tech community and the governments depending on it was the idea that contact tracing would make targeted reopenings possible. The big idea was that AI could sort out whom a person who contracted COVID-19 may have infected. More magical AI would then figure out how to keep the healthies away from the sicks, and we'd be able to both quarantine and open businesses at the same time.

This is an example of the disconnect between AI devs and general reality. A system wherein people allow the government to track their every movement can only work with complete participation from a population with absolute faith in its government. Worse, the more infections you have, the less reliable contact tracing becomes.
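Stripped of the privacy and participation problems discussed above, the core computation behind contact tracing is just neighborhood search over a contact graph. A minimal sketch, with hypothetical data:

```python
from collections import deque

# Minimal sketch of the core contact-tracing computation: given a contact
# graph, find everyone within N hops of a confirmed case. Real systems layer
# privacy, timestamps, and exposure duration on top of this idea; this toy
# version shows only the graph traversal.
def exposed(contacts: dict, case: str, max_hops: int = 2) -> set:
    """Return everyone reachable from `case` within `max_hops` contacts."""
    seen, queue = {case}, deque([(case, 0)])
    while queue:
        person, hops = queue.popleft()
        if hops == max_hops:
            continue
        for other in contacts.get(person, ()):
            if other not in seen:
                seen.add(other)
                queue.append((other, hops + 1))
    return seen - {case}
```

The article's reliability point falls out of this structure: the more confirmed cases there are, the more of the graph gets flagged, until "exposed" covers nearly everyone and the signal stops being useful.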

That's why only a handful of small countries even went so far as to try it, and as far as we know there isn't any current data showing this approach actually mitigates the spread of COVID-19.

The next big area where AI was supposed to help was in modeling. For a time, the entire technology news cycle was dominated by headlines declaring that AI had first discovered the COVID-19 threat and machine learning would determine exactly how the virus would spread.

Unfortunately, modeling a pandemic isn't an exact science. You can't train a neural network on data from past COVID-19 pandemics because there aren't any; this coronavirus is novel. That means our models started with guesses and were subsequently trained on up-to-date data from the unfolding pandemic.

To put this in perspective: using on-the-fly data to model a novel pandemic is the equivalent of knowing you have at least a million dollars in pennies, but only being able to talk about the amount you've physically counted in any given period of time.

In other words: our AI models haven't proven much better than our best guesses. And they can only show us a tiny part of the overall picture because we're only working with the data we can actually see. Up to 80 percent of COVID-19 carriers are asymptomatic, and a mere fraction of all possible carriers have been tested.
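The undercount those figures imply can be made concrete with back-of-envelope arithmetic. The numbers below are invented purely for illustration:

```python
# Back-of-envelope illustration of the undercount described above: if up to
# 80% of carriers are asymptomatic and largely untested, confirmed counts are
# a floor, not an estimate. All inputs here are made up for the arithmetic.
def implied_true_cases(confirmed: int, asymptomatic_share: float) -> int:
    """Crude estimate assuming confirmed cases capture only symptomatic carriers."""
    symptomatic_share = 1.0 - asymptomatic_share
    return round(confirmed / symptomatic_share)

# 10,000 confirmed cases with 80% of carriers asymptomatic would imply
# roughly 50,000 actual carriers under this (very crude) assumption.
```

Any model trained on the confirmed counts alone inherits this gap, which is the article's point about working only with the data we can actually see.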

What about testing? Didn't AI make testing easier? Kind of, but not really. AI has made a lot of things easier for the medical community, but not perhaps in the way you think. There isn't a test bot that you can pour a vial of blood into to get an instant green-or-red infected indicator. The best we've got, for the most part, is background AI that generally helps the medical world run.

Sure, there are some targeted solutions from the ML community helping frontline professionals deal with the pandemic. We're not taking anything away from the thousands of developers working hard to solve problems. But, realistically, AI isn't providing game-changer solutions that face up against major pandemic problems.

It's making sure truck drivers know which supplies to deliver first. It's helping nurses autocorrect their emails. It's working traffic lights in some cities, which helps with getting ambulances and emergency responders around.

And it's even making pandemic life easier for regular folks too. The fact that you're still getting packages (even if they're delayed) is a testament to the power of AI. Without algorithms, Amazon and its delivery pipeline would not be able to maintain the infrastructure necessary to ship you a set of fuzzy bunny slippers in the middle of a pandemic.

AI is useful during the pandemic, but it's not out there finding the vaccine. We've spent the last few years here at TNW talking about how AI will one day make chemical compound discovery a trivial matter. Surely finding the proper sequence of proteins or figuring out exactly how to mutate a COVID-killer virus is all in a day's work for today's AI systems, right? Not so much.

Despite the fact that Google and NASA told us we'd reached quantum supremacy last year, we haven't seen useful quantum algorithms running on cloud-accessible quantum computers like we've been told we would. Scientists and researchers almost always tout chemical discovery as one of the hard problems that quantum computers can solve. But nobody knows when. What we do know is that today, in 2020, humans are still painstakingly building a vaccine. When it's finished, it'll be squishy meatbags who get the credit, not quantum robots.

In times of peace, every new weapon looks like the end-all-be-all solution until you test it. We haven't had many giant global emergencies to test our modern AI on. It's done well with relatively small-scale catastrophes like hurricanes and wildfires, but it's been relegated to the rear echelon of the pandemic fight because AI simply isn't mature enough yet to think outside of the boxes we build it in.

At the end of the day, most of our pandemic problems are human problems. The science is extremely clear: wear a mask, stay more than six feet away from each other, and wash your hands. This isn't something AI can directly help us with.

But that doesn't mean AI isn't important. The lessons learned by the field this year will go a long way toward building more effective solutions in the years to come. Here's hoping this pandemic doesn't last long enough for these yet-undeveloped systems to become important in the fight against COVID-19.

Published July 24, 2020 19:21 UTC

Read the original here:

Here's why AI didn't save us from COVID-19 - The Next Web

How Hospitals Are Using AI to Battle Covid-19 – Harvard Business Review

Executive Summary

The spread of Covid-19 is stretching operational systems in health care and beyond. The reason is simple: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models. While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. Here's how some hospitals are employing artificial intelligence to handle the surge of patients.

On Monday March 9, in an effort to address soaring patient demand in Boston, Partners HealthCare went live with a hotline for patients, clinicians, and anyone else with questions and concerns about Covid-19. The goals are to identify and reassure the people who do not need additional care (the vast majority of callers), to direct people with less serious symptoms to relevant information and virtual care options, and to direct the smaller number of high-risk and higher-acuity patients to the most appropriate resources, including testing sites, newly created respiratory illness clinics, or in certain cases, emergency departments. As the hotline became overwhelmed, the average wait time peaked at 30 minutes. Many callers gave up before they could speak with the expert team of nurses staffing the hotline. We were missing opportunities to facilitate pre-hospital triage to get the patient to the right care setting at the right time.

The Partners team, led by Lee Schwamm, Haipeng (Mark) Zhang, and Adam Landman, began considering technology options to address the growing need for patient self-triage, including interactive voice response systems and chatbots. We connected with Providence St. Joseph Health system in Seattle, which served some of the country's first Covid-19 patients in early March. In collaboration with Microsoft, Providence built an online screening and triage tool that could rapidly differentiate between those who might really be sick with Covid-19 and those who appear to be suffering from less threatening ailments. In its first week, Providence's tool served more than 40,000 patients, delivering care at an unprecedented scale.

Our team saw potential for this type of AI-based solution and worked to make a similar tool available to our patient population. The Partners Covid-19 Screener provides a simple, straightforward chat interface, presenting patients with a series of questions based on content from the U.S. Centers for Disease Control and Prevention (CDC) and Partners HealthCare experts. In this way, it too can screen enormous numbers of people and rapidly differentiate between those who might really be sick with Covid-19 and those who are likely to be suffering from less threatening ailments. We anticipate this AI bot will alleviate high volumes of patient traffic to the hotline, and extend and stratify the system's care in ways that would have been unimaginable until recently. Development is now under way to facilitate triage of patients with symptoms to the most appropriate care setting, including virtual urgent care, primary care providers, respiratory illness clinics, or the emergency department. Most importantly, the chatbot can also serve as a near instantaneous dissemination method for supporting our widely distributed providers, as we have seen the need for frequent clinical triage algorithm updates based on a rapidly changing landscape.
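A screener of this kind, before any AI is layered on, reduces to a decision tree over a fixed series of questions. The questions, risk factors, and care tiers below are illustrative stand-ins, not the CDC's or Partners' actual triage content:

```python
# Hypothetical sketch of a rule-based screener like the one described above:
# answers to a fixed series of questions route the caller to a care tier.
# Every question name and threshold here is invented for illustration.
def triage(answers: dict) -> str:
    """Map screener answers to a (hypothetical) care setting."""
    if answers.get("severe_breathing_difficulty"):
        return "emergency department"
    if answers.get("fever") and answers.get("cough"):
        if answers.get("high_risk"):  # e.g. age or comorbidities
            return "respiratory illness clinic"
        return "virtual urgent care"
    if answers.get("fever") or answers.get("cough"):
        return "primary care provider"
    return "self-care guidance"
```

Keeping the rules in one small function is also what makes the "frequent clinical triage algorithm updates" mentioned above practical: changing guidance means editing and redeploying one table of rules, not retraining staff.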

Similarly, at both Brigham and Women's Hospital and at Massachusetts General Hospital, physician researchers are exploring the potential use of intelligent robots developed at Boston Dynamics and MIT to deploy in Covid surge clinics and inpatient wards to perform tasks (obtaining vital signs or delivering medication) that would otherwise require human contact in an effort to mitigate disease transmission.

Several governments and hospital systems around the world have leveraged AI-powered sensors to support triage in sophisticated ways. Chinese technology company Baidu developed a no-contact infrared sensor system to quickly single out individuals with a fever, even in crowds. Beijing's Qinghe railway station is equipped with this system to identify potentially contagious individuals, replacing a cumbersome manual screening process. Similarly, Florida's Tampa General Hospital deployed an AI system in collaboration with Care.ai at its entrances to intercept individuals with potential Covid-19 symptoms from visiting patients. Through cameras positioned at entrances, the technology conducts a facial thermal scan and picks up on other symptoms, including sweat and discoloration, to ward off visitors with fever.

Beyond screening, AI is being used to monitor Covid-19 symptoms, provide decision support for CT scans, and automate hospital operations. Meanwhile, Zhongnan Hospital in China uses an AI-driven CT scan interpreter that identifies Covid-19 when radiologists aren't available. China's Wuhan Wuchang Hospital established a smart field hospital staffed largely by robots. Patient vital signs were monitored using connected thermometers and bracelet-like devices. Intelligent robots delivered medicine and food to patients, alleviating physician exposure to the virus and easing the workload of health care workers experiencing exhaustion. And in South Korea, the government released an app allowing users to self-report symptoms, alerting them if they leave a quarantine zone in order to curb the impact of super-spreaders who would otherwise go on to infect large populations.

The spread of Covid-19 is stretching operational systems in health care and beyond. We have seen shortages of everything, from masks and gloves to ventilators, and from emergency room capacity to ICU beds to the speed and reliability of internet connectivity. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models.

While we race to dampen the viruss spread, we can optimize our response mechanisms, digitizing as many steps as possible. This is because traditional processes those that rely on people to function in the critical path of signal processing are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale. On the other hand, digital systems can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity and we have plenty of both. Digital systems can keep pace with exponential growth.

Importantly, AI for health care must be balanced by the appropriate level of human clinical expertise for final decision-making to ensure we are delivering high-quality, safe care. In many cases, human clinical reasoning and decision-making cannot be easily replaced by AI; rather, AI is a decision aid that helps humans improve effectiveness and efficiency.

Digital transformation in health care has been lagging other industries. Our response to Covid today has accelerated the adoption and scaling of virtual and AI tools. From the AI bots deployed by Providence and Partners HealthCare to the Smart Field Hospital in Wuhan, rapid digital transformation is being employed to tackle the exponentially growing Covid threat. We hope and anticipate that after Covid-19 settles, we will have transformed the way we deliver health care in the future.

Read the original post:

How Hospitals Are Using AI to Battle Covid-19 - Harvard Business Review

Artificial Intelligence’s Power, and Risks, Explored in New Report – Education Week

Picture this: a small group of middle school students are learning about ancient Egypt, so they strap on a virtual reality headset and, with the assistance of an artificial intelligence tour guide, begin to explore the Pyramids of Giza.

The teacher, also journeying to one of the oldest known civilizations via a VR headset, has assigned students to gather information to write short essays. During the tour, the AI guide fields questions from students, points them to specific artifacts, and discusses what they see.

In preparing the AI-powered lesson on Egypt, the teacher would have worked beforehand with the AI program to craft a lesson plan that not only dives deep into the subject, but figures out how to keep the group moving through the virtual field trip and how to create more equal participation during the discussion. In that scenario, the AI listens, observes, and interacts naturally to enhance a group learning experience and to make a teacher's job easier.

That classroom scenario doesn't quite exist yet, but it's one example of AI's potential to transform students' academic experiences, as described in a new report that also warns of the risks to privacy and of students being treated unfairly by the technology's algorithms. Experts in the field of K-12 and AI say the day is coming when teachers will engage with AI in a way that goes beyond simply reading metrics off a dashboard, forming actual partnerships that achieve end goals for students together.

The recently released report is from the Center for Integrative Research in Computing and Learning Sciences, a hub for National Science Foundation-funded projects that focus on cyberlearning. It looks at how AI could shape K-12 education in the future, along with pressing questions centered on privacy, bias, transparency, and fairness.

The report summarizes a two-day online panel that featured 22 experts in AI and learning and provides a set of recommendations for school leaders, policymakers, academics and education vendors to consider as general AI research progresses in leaps and bounds and technology is integrated into classrooms at an accelerated pace due to COVID-19.

It also provides some concrete visions for new and expanded uses of AI in K-12, from reimagined automated essay scoring and next-level assessments to AI used in combination with virtual reality and voice- or gesture-based systems.

Researchers expect it to be about five to 10 years before AI can work in lockstep with teachers as classroom partners, in a process they have dubbed orchestration.

That describes when an educator offloads time-consuming classroom tasks to AI, such as forming groups, creating lesson plans, and helping students work together to revise essays, and eventually monitoring progress toward bigger goals.

The report cautions, however, that experts are concerned about the tendency to overpromise what AI can do and to overgeneralize beyond today's limited capabilities.

The researchers also touched on longstanding risks related to AI and education, such as privacy, security, bias, transparency, and fairness, and went further to discuss design risks and how poor design practices in AI systems could unintentionally harm users.

While the fusion of AI and K-12 is far from new, the technology's impact in the classroom so far has been small-scale, according to the report.

That's set to potentially change, and researchers who participated in the CIRCLS online panel made clear that decision-makers need to plan without delay to ensure AI in K-12 is used in a manner that is equitable, ethical, and effective, and to mitigate weaknesses, risks, and potential harm.

"We do not yet know all of the uses and applications of AI that will emerge; new innovations are appearing regularly, and the most consequential applications of AI to education are likely not even invented yet," the report says. "In a future where technology is ubiquitous in education, AI will also become pervasive in learning, teaching, and assessment. Now is the time to begin responding to the novel capabilities and challenges this will bring."
