Why are AI predictions so terrible? – VentureBeat

In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, the first time an AI technology was able to outperform a world expert in a highly complicated endeavor. It was even more impressive when you consider they were using 1997 computational power. In 1997, my computer could barely connect to the internet; long waits of agonizing beeps and buzzes made it clear the computer was struggling under the weight of the task.

Even in the wake of Deep Blue's literally game-changing victory, most experts remained unconvinced. Piet Hut, an astrophysicist at the Institute for Advanced Study in New Jersey, told The New York Times in 1997 that it would be another hundred years before a computer beat a human at Go.

Admittedly, the ancient game of Go is infinitely more complicated than chess. Even in 2014, the common consensus was that an AI victory in Go was still decades away. The reigning world champion, Lee Sedol, gloated in an article for Wired: "There is chess in the Western world, but Go is incomparably more subtle and intellectual."

Then AlphaGo, Google's AI platform, defeated him a mere two years later. How's that for subtlety?

In recent years, it has become increasingly clear that AI can outperform humans in much more than board games. This has led to growing anxiety among the working public that their very livelihoods may soon be automated away.

Countless publications have been quick to seize on this fear to drive pageviews. It seems like every day there is a new article claiming to know definitively which jobs will survive the AI revolution and which will not. Some even go so far as to express their percentage predictions down to the decimal point, giving the whole activity a sense of gravitas. However, if you compare their conclusions, the most striking aspect is how wildly inconsistent the results are.

One of the latest entries into the mire is a Facebook quiz aptly named "Will Robots Take My Job?" Naturally, I looked up writers and received back a comforting 3.8%. After all, if a doctor told me I had a 3.8% chance of succumbing to a disease, I would hardly be in a hurry to get my affairs in order.

There is just one thing keeping me from patting myself on the back: AI writers already exist and are being widely used by major publications. In this way, their prediction would be like a doctor declaring, at my funeral, that there was only a 3.8% chance of my disease getting worse.

All this raises the question: why are these predictions about AI so bad?

Digging into the sources behind "Will Robots Take My Job?" gives us our first clue. The predictions are based on a research paper, and that is at the root of most bad AI predictions. Academics tend to view the world very differently from Silicon Valley entrepreneurs. Where in academia just getting a project approved may take years, tech entrepreneurs operate on the idea of "What can we get built and shipped by Friday?" Therefore, asking academics for predictions about the pace of industry is like asking your local DMV how quickly Uber might gain market share in China. They may be experts in the vertical, but they are still worlds away from the "move fast and break stuff" mentality that pervades the tech community.

As a result, their predictions are as good as random guesses, colored by their understanding of a world that moves at a glacial pace.

Another contributing factor to bad AI predictions is human bias. When the question is who will win, man or machine, we can't help but root for the home team. It has been said that it is very hard to make someone believe something when their job depends on them not understanding it. That is why the banter around the water cooler at oil companies rarely turns to concerns about climate change. AI poses a threat to the very notion of human-based jobs, so the stakes are much higher. When you ask people who work for a university about the likelihood of AI automating all jobs, it is all but impossible for them to be objective.

Hence the conservative estimates: to admit that any job that can be taught to a person can obviously also be taught to an AI would fill the researcher with existential dread. Better to sidestep the whole issue and say it won't happen for another 50 years, hoping they'll be dead by then and it will be the next guy's problem.

Which brings us to our final contributing factor: humans are really bad at understanding exponential growth. The research paper behind "Will Robots Take My Job?" dates from 2013. The last four years in AI might as well have been 40 years, given how much has changed. In fact, their bad predictions make more sense through this lens. There is an obvious bias toward assuming that jobs requiring decision making are safer than those that are straight routine. However, the proliferation of neural net resources is showing that AI is actually very good at decision making when the task is well defined.

The problem is that our somewhat primitive reasoning tends to view the world in linear terms. Take this example, often used on logic tests: if the number of lily pads on a lake doubles every day, and the lake will be full at 30 days, how many days will it take for the lake to be half full? A depressingly high number of people's knee-jerk response is 15. The real answer is 29. In fact, if you were watching the pond, the lily pads wouldn't appear to be growing at all until about the 26th day. If you were to ask the average person on day 25 how many days until the pond was full, they might reasonably conclude decades.
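
To make the arithmetic concrete, here is a quick sketch of the puzzle (purely illustrative):

```python
# Lily pads double every day and the lake is full on day 30.
# Working backwards, the lake must be half full one doubling earlier: day 29.
coverage, day = 1.0, 30           # fraction of the lake covered on day 30
while coverage > 0.5:
    coverage /= 2
    day -= 1
print(day)                        # 29

# Forward view: on day 25 the pads cover only 1/2**5 of the lake.
print(1 / 2 ** (30 - 25))         # 0.03125 -> about 3%, easy to dismiss as nothing
```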

The reality is that AI tools are growing exponentially. Even in their current iteration, they have the power to automate at least part of all human jobs. The uncomfortable truth that all these AI predictions seek to distract us from is that no job is safe from automation. Collectively, we are like Lee Sedol in 2014, smug in our sense of superiority. The coming proliferation of AI is perhaps best summed up in the sentiments of Nelson Mandela: "It always seems impossible until it is done."

Aiden Livingston is the founder of Casting.AI, the first chatbot talent agent.

See more here:

Why are AI predictions so terrible? - VentureBeat

3 Ways Artificial Intelligence Is Transforming The Energy Industry – OilPrice.com

Back in 2017, Bill Gates penned a poignant online essay to graduating college students around the world in which he tapped artificial intelligence (AI), clean energy, and biosciences as the three fields he would spend his energies on if he could start all over again and wanted to make a big impact in the world today.

It turns out that the Microsoft co-founder was right on the money.

Three years down the line and deep in the throes of the worst pandemic in modern history, AI and renewable energy have emerged as some of the biggest megatrends of our time. On the one hand, AI is powering the fourth industrial revolution and is increasingly being viewed as a key strategy for mastering some of the greatest challenges of our time, including climate change and pollution. On the other hand, there is a widespread recognition that carbon-free technologies like renewable energy will play a critical role in combating climate change.

Consequently, stocks in the AI, robotics, and automation sectors as well as clean energy ETFs have lately become hot property.

From utilities employing AI and machine learning to predict power fluctuations and optimize costs, to companies using IoT sensors for early fault detection and wildfire powerline and gear monitoring, here are real-life cases of how AI has continued to power an energy revolution even during the pandemic.

Top uses of AI in the energy sector

Source: Intellias

#1. Innowatts: Energy monitoring and management

The Covid-19 crisis has triggered an unprecedented decline in power consumption. Not only has overall consumption suffered, but there also have been significant shifts in power usage patterns, with sharp decreases by businesses and industries while domestic use has increased as more people work from home.

Houston, Texas-based Innowatts is a startup that has developed an automated toolkit for energy monitoring and management. The company's eUtility platform ingests data from more than 34 million smart energy meters across 21 million customers, including major U.S. utility companies such as Arizona Public Service Electric, Portland General Electric, Avangrid, Gexa Energy, WGL, and Mega Energy. Innowatts says its machine learning algorithms can analyze the data to forecast several critical data points, including short- and long-term loads, variances, weather sensitivity, and more.
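
As a rough illustration of that kind of short-term load forecasting (not Innowatts' actual eUtility pipeline; the data here is synthetic), a model might combine meter history with weather and calendar features:

```python
# Illustrative sketch only -- synthetic data, not Innowatts' eUtility platform.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)                                   # one year of hourly meter readings
temp = 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
load = 50 + 0.8 * np.abs(temp - 18) + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Features: weather sensitivity, hour of day, day of week
X = np.column_stack([temp, hours % 24, (hours // 24) % 7])
model = GradientBoostingRegressor().fit(X[:-24], load[:-24])  # hold out the final day

print(np.round(model.predict(X[-24:])[:6], 1))                # short-term forecast for that day
```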

Innowatts estimates that without its machine learning models, utilities would have seen inaccuracies of 20% or more on their projections at the peak of the crisis, thus placing enormous strain on their operations and ultimately driving up costs for end-users.

#2. Google: Boosting the value of wind energy

A while back, we reported that proponents of nuclear energy were using the pandemic to highlight its strong points vis-a-vis the shortcomings of renewable energy sources. To wit, wind and solar are the least predictable and consistent among the major power sources, while nuclear and natural gas boast the highest capacity factors.

Well, one tech giant has figured out how to employ AI to iron out those kinks.

Three years ago, Google announced that it had reached 100% renewable energy for its global operations, including its data centers and offices. Today, Google is the largest corporate buyer of renewable power, with commitments totaling 2.6 gigawatts (2,600 megawatts) of wind and solar energy.

In 2017, Google teamed up with DeepMind, its sister company under Alphabet, to search for a solution to the highly intermittent nature of wind power. Using DeepMind's AI platform, Google deployed machine learning algorithms across 700 megawatts of wind power capacity in the central United States, enough to power a medium-sized city.

DeepMind says that by using a neural network trained on widely available weather forecasts and historical turbine data, it is now able to predict wind power output 36 hours ahead of actual generation. Consequently, this has boosted the value of Google's wind energy by roughly 20 percent.

A similar model can be used by other wind farm operators to make smarter, faster and more data-driven optimizations of their power output to better meet customer demand.
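
A minimal sketch of that kind of model, using a small neural network on synthetic forecast and turbine data (purely illustrative, not DeepMind's system):

```python
# Illustrative sketch only -- synthetic data, not DeepMind's model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
forecast_wind = rng.uniform(0, 25, 2000)                       # forecast wind speed (m/s), 36 h out
# Simplified 700 MW farm power curve: cut-in at 3 m/s, rated above 12 m/s, plus noise
actual_mw = np.clip((np.clip(forecast_wind, 3, 12) - 3) ** 3 / 729, 0, 1) * 700
actual_mw += rng.normal(0, 20, forecast_wind.size)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(forecast_wind[:-100].reshape(-1, 1), actual_mw[:-100])

print(model.predict(np.array([[5.0], [10.0], [15.0]])))        # expected MW at three forecast speeds
```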

DeepMind uses trained neural networks to predict wind power output 36 hours ahead of actual generation

Source: DeepMind

#3. Wildfire powerline and gear monitoring

In June, California's biggest utility, Pacific Gas & Electric, found itself in deep trouble. The company pleaded guilty over the tragic 2018 wildfire that left 84 people dead and saddled PG&E with hefty penalties: $13.5 billion in compensation to people who lost homes and businesses, and another $2 billion fine from the California Public Utilities Commission for negligence.

It will be a long climb back to the top for the fallen giant after its stock crashed nearly 90% following the disaster despite the company emerging from bankruptcy in July.

Perhaps the loss of lives and livelihood could have been averted if PG&E had invested in some AI-powered early detection system.

One such system comes from a startup called VIA, based in Somerville, Massachusetts. VIA says it has developed a blockchain-based app that can predict when vulnerable power transmission gear such as transformers might be at risk in a disaster. VIA's app makes better use of energy data sources, including smart meters and equipment inspections.

Another comparable product comes from Korean firm Alchera, which uses AI-based image recognition in combination with thermal and standard cameras to monitor power lines and substations in real time. The AI system is trained to watch the infrastructure for abnormal events such as falling trees, smoke, fire, and even intruders.

Other than utilities, oil and gas producers have also been integrating AI into their operations.

By Alex Kimani for Oilprice.com

Read the original here:

3 Ways Artificial Intelligence Is Transforming The Energy Industry - OilPrice.com

Applications of Artificial Intelligence in Bioinformatics – AI Daily

The term "bioinformatics" was first defined by Paulien Hogeweg and her colleague Ben Hesper in 1970 as "the study of informatics processes in biotic systems." In recent years, bioinformatics has come to be considered an interdisciplinary field combining biology, computer science, mathematics, and statistics. Artificial intelligence (AI) is a computer science tool that is becoming increasingly popular among scientists. Since AI incorporates machine learning (ML) and deep learning, scientists recognize its value for reading and analyzing large datasets for prediction and pattern identification in research.

Classifying proteins

Proteins are the basic building blocks of life; they are responsible for all the biological processes of a cell. There are different types of proteins, and they are grouped according to their biological functions. Because many proteins have extremely similar primary structures and a common evolutionary origin, classifying them is a demanding task. This issue can be addressed using AI and its computational power. There are many methods to classify proteins using AI, but a common one is to build a computer program that compares amino acid sequences to the known sequences of proteins in large databases, using this information to classify the target protein.
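
A toy stand-in for such a program is sketched below; the reference sequences and class labels are hypothetical, and a simple k-mer overlap score replaces a full alignment:

```python
# Illustrative sketch: classify an unknown protein by comparing its amino-acid
# sequence against labeled sequences in a (hypothetical) reference database.

def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)            # Jaccard overlap of k-mer sets

# Hypothetical reference database: sequences with known functional classes
database = {
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ": "kinase",
    "GSHMSDNEDNFDGDDFDDVEEDEGLDDLENAE":  "transcription factor",
    "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERM": "oxygen transport",
}

def classify(query):
    best_match = max(database, key=lambda ref: similarity(query, ref))
    return database[best_match]

print(classify("MVLSPADKTNVKAAWGKVGAHAGEYGAEALE"))   # -> "oxygen transport"
```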

Analyzing and classifying proteins accurately is of the utmost importance, as proteins are responsible for many key functions in an organism.

Scientists can further use this technology to predict protein function by comparing a protein's amino acid sequence with the specific sequences encoded by genes of known function.

Computer Aided Drug Design (CADD)

CADD is a specialized field of research that uses computational methods to simulate how drugs interact with their targets, such as harmful cells. This is especially useful in drug discovery, when scientists attempt to find the best possible chemical compound for a treatment (for example, against cancer cells). This technology relies heavily on information available from databases and computational resources. AI is able to manage these tasks efficiently, saving the time and energy of many scientists.

As shown above, there are many different applications of AI in the field of bioinformatics. As technology advances, scientists will be able to integrate AI into even more aspects of bioinformatics, which will especially benefit researchers.

Thumbnail credit: blog.f1000.com

Link:

Applications of Artificial Intelligence in Bioinformatics - AI Daily

How Will Your Career Be Impacted By Artificial Intelligence? – Forbes

Reject it or embrace it. Either way, artificial intelligence is here to stay.

Nobody can predict the future with absolute precision.

But when it comes to the impact of artificial intelligence (AI) on peoples careers, the recent past provides some intriguing clues.

Rhonda Scharf's book, Alexa Is Stealing Your Job: The Impact of Artificial Intelligence on Your Future, offers some insights and predictions that are well worth our consideration.

In the first two parts of my conversation with Rhonda (see "What Role Will Artificial Intelligence Play In Your Life?" and "Artificial Intelligence, Privacy, And The Choices You Must Make"), we discussed the growth of AI in recent years and talked about the privacy concerns of many AI users.

In this final part, we look at how AI is affecting, and will continue to affect, people's career opportunities.

Spoiler alert: there's some good news here.

Rodger Dean Duncan: You quote one researcher who says robots "are not here to take away our jobs, they're here to give us a promotion." What does that mean?

Rhonda Scharf: Much like the computer revolution, we need jobs to maintain the systems that have been created. This creates new, desirable jobs where humans work alongside technology. These new jobs are called the trainers, explainers, and sustainers.

Trainers will teach a machine what it needs to do. For instance, we need to teach a machine that when I yell at it (loud voice), I may be frustrated. It needs to be taught that when I ask it to call Robert, who Robert is and what phone number should be used. Once the machine has a basic understanding, it continues to self-learn, but it needs the basics taught to it (like children do).

Explainers are human experts who explain computer behavior to others. They would explain, for example, why a self-driving car performed in a certain way, or why an AI sold shares in a stock at a certain point of the day. The same way lawyers can explain why someone acted in self-defense when his or her actions initially seemed inappropriate, we need explainers to tell us why a machine did what it did.

Sustainers ensure that our systems are functioning correctly, safely, and responsibly. In the future, they'll ensure that AI systems uphold ethical standards and that industrial robots don't harm humans, because robots don't understand that we're fragile, unlike machinery.

There are going to be many jobs that AI can't replace. We need to think, evolve, interpret, and relate. As smart as a chatbot can be, it will never have the same qualities as my best friend. We will need people for the intangible side of relationships.

Duncan: What should people look for to maximize their careers through the use of AI?

Scharf: According to the World Economic Forum, the top 10 in-demand skills for 2020 include complex problem-solving, critical thinking, creativity, emotional intelligence, judgment and decision-making, and cognitive flexibility. These are the skills that will provide value to your organization. By demonstrating all of these skills, you will be positioning yourself as a valuable resource. We'll have AI to handle basic tasks and administrative work. People need complex thinking to propel organizations forward.

Duncan:Bonus: What question do you wish I had asked, and how would you respond?

Scharf: I wished you had asked how I felt about artificial intelligence. If I was afraid for my future, for the future of my children, and my children's children?

The answer is no. I don't think that AI is all the doom and gloom that has been publicized. I also don't believe we're about to lead a life of leisure and have the world operate on its own, either.

As history has shown us, these types of life-altering changes happen periodically. This is the next one. I believe the way we work is about to change, the same way it changed during the Industrial Revolution, the same way it evolved in response to automation. The way we live is about to change. (Think pasteurization and food storage.) Those who adapt will have a better life for it, and those who refuse to adapt will suffer.

I'm confident that I will still be employed for as long as I want to be. My children have only known a life with computers and are open to change, and my future grandchildren will only know a life with AI.

I'm excited about our future. I'm excited about what AI can bring to my life. I embrace Alexa and all her friends and welcome them into my home.

Link:

How Will Your Career Be Impacted By Artificial Intelligence? - Forbes

COVID-19 Leads Food Companies and Meat Processors to Explore AI and Robotics, Emphasize Sanitation, and Work from Home – FoodSafetyTech

The coronavirus pandemic has turned so many aspects of businesses upside down; it is changing how companies approach and execute their strategy. The issue touches all aspects of business and operations, and in a brief Q&A with Food Safety Tech, Mike Edgett of Sage touches on just a few areas in which the future of food manufacturing looks different.

Food Safety Tech: How are food manufacturers and meat processors using AI and robotics to mitigate risks posed by COVID-19?

Mike Edgett: Many food manufacturers and meat processors have had to look to new technologies to account for the disruptions caused by the COVID-19 pandemic. While most of these measures have been vital in preventing further spread of the virus (or any virus/disease that may present itself in the future), they've also given many food manufacturers insight into how these technologies could have a longer-term impact on their operations.

For instance, the mindset that certain jobs needed to be manual has been reconsidered. Companies are embracing automation (e.g., the boning and chopping of meat in a meatpacking plant) to replace historically manual processes. While it may take a while for innovations like this to be incorporated fully, COVID-19 has certainly increased appetite among executives who are trying to avoid shutdowns and has expedited the potential for future adoption.

FST: What sanitation procedures should be in place to minimize the spread of pathogens and viruses?

Edgett: In the post-COVID-19 era, manufacturers must expand their view of sanitation requirements. It is about more than whether the processing equipment is clean. Companies must be diligent and critical of themselves at every juncture, especially when it comes to how staff and equipment are utilized.

While working from home wasn't a common practice in the manufacturing industry prior to March 2020, it will be increasingly popular moving forward. Such a setup will allow for a less congested workplace, as well as more space and time for bolstered sanitation practices to take place. Now and in the future, third-party cleaning crews will be used onsite and for machinery on a daily basis, with many corporations also experimenting with new ways to maintain the highest cleanliness standards.

This includes the potential for UV sterilization (a tactic that is being experimented with across industries), new ways to sterilize airflow (which is particularly important in meatpacking plants, where stagnant air is the enemy) and the inclusion of robotics (which could be used overnight to avoid overlap with human employees). These all have the potential to minimize the spread of pathogens and, ultimately, all viruses that may arise.

FST: How is the food industry adjusting to the remote working environment?

Edgett: While the pandemic has changed the ways businesses and employees work across most industries, F&B manufacturers did face some unique challenges in shifting to a remote working environment.

Manufacturing as a whole has always relied on the work of humans overseeing systems, machinery, and technology to finalize production, but COVID-19 has changed who and how many people can be present in a plant at once. Naturally, at the start of the pandemic, this meant that schedules and shifts had to be altered, and certain portions of managerial oversight had to be completed virtually.

Of course, with employee and consumer safety of paramount concern, cleaning crews and sanitation practices have taken precedence and have been woven effectively and efficiently into altered schedules.

While workers who are essential to the manufacturing process have continued to work in many facilities, there will likely be expanded and extended work-from-home policies for other functions within the F&B manufacturing industry moving forward. This will result in companies needing to embrace technology that can support this work environment.

FST: Can you briefly explain how traceability is playing an even larger role during the pandemic?

Edgett: The importance of complete traceability for food manufacturers has never been greater. While traceability is by no means a new concept, COVID-19 has not only made it the number one purchasing decision for your customers, but [it is also] a vital public health consideration.

The good news is that much of the industry recognizes this. In fact, according to a survey conducted by Sage and IDC, manufacturing executives said a key goal of theirs is to achieve 100% traceability over production and supply chain, which serves as a large part of their holistic digital mission.

Traceability was already a critical concern for most manufacturers, especially those with a younger customer base. However, the current environment has shone an even greater spotlight on the importance of having a complete picture not only of where our food comes from but [also] of the facilities and machinery used in its production. Major budget allocations will surely be directed toward traceability over the next 5-10 years.

Original post:

COVID-19 Leads Food Companies and Meat Processors to Explore AI and Robotics, Emphasize Sanitation, and Work from Home - FoodSafetyTech

AI will transform the way online education works: IIT Madras professor Dr Balaraman Ravindran – EdexLive

With an increased dependence on technology after COVID-19, the time has never been more ripe for some disruption. With a firm eye on the future is IIT Madras, with much help from its Robert Bosch Centre for Data Science and Artificial Intelligence. The institute recently launched an online BSc programme in Data Science and Programming, only highlighting how much it prioritises the course. "Faculty from ten departments of IIT Madras are part of the Robert Bosch Centre, including several engineering departments like Computer Science, Civil, Chemical and Mechanical, as well as Mathematics, Management Studies, Biotechnology and even Humanities," Dr Balaraman Ravindran, Head, Robert Bosch Centre for Data Science and Artificial Intelligence, told Edex.

Speaking about the importance of Data Science and Artificial Intelligence in a post-COVID world, Ravindran says, "Artificial Intelligence (AI) has been used by companies like Google to personalise and enhance performance and user experience. However, the use of AI has been impacted in the logistics and e-commerce industry, but they are figuring out a way to get around the hurdle."

"But, at the same time, the use of AI has boomed in education and has the scope of improving further. AI can revolutionise online education by customising the feed, using Augmented Reality to enhance teaching aid," says Ravindran. However, he says this is easier to do in schools rather than national level institutes like IITs. "In schools, the students are clustered in the same area but we have students from all over the country and we can't assume that they have the same kind of connectivity," he adds. But, he believes that AI can do a lot more in the education sector as the dependence on online learning increases.

Ravindran adds that AI is driving research for new COVID drugs and that it is progressing faster because of the technology. "If we had been following the same methods we were following some 20-25 years ago, we wouldn't have been able to come up with such swift results," says Ravindran. So, should students consider applying for courses in Data Science and Artificial Intelligence? "Certainly," says Ravindran, adding, "As we continue to go online, more and more data gets digitised and so the role of Data Science and AI increases. Evaluation can be done easily, and a person's productivity can be tracked much more easily."

He feels that after a short period of downturn, the demand and need for Data Scientists will increase within a year. "We are generating huge volumes of data right now and there will be a need for someone to make sense of it. Moreover, people are tracked more closely than ever before. The world is worried about the next pandemic and people will be watched more closely. Soon, we will reach a critical point with this data generation and then new job profiles for Data Scientists and Analysts will open up," explains Ravindran.

IIT Madras offers a dual degree in Data Science, with students graduating with an MTech degree, and several of these students choose to associate with the centre while working on their projects, says Ravindran. "The centre also has an Associate Researcher programme, where faculty from other IITs, especially the newer ones, can work as an Associate Researcher at the Robert Bosch Centre. They visit the campus and can do so for up to six weeks a year, during which they can conduct lectures and workshops with the students, work on projects and so on. We have had faculty come from IIT Tirupati, IIT Palakkad, IIT Guwahati, among others," says Ravindran. This interdisciplinary centre is now funded by the CSR initiative of Robert Bosch after being initially founded by the institute in 2017.

Visit link:

AI will transform the way online education works: IIT Madras professor Dr Balaraman Ravindran - EdexLive

AI researchers create testing tool to find bugs in NLP from Amazon, Google, and Microsoft – VentureBeat

AI researchers have created a language-model testing tool that has discovered major bugs in commercially available cloud AI offerings from Amazon, Google, and Microsoft. Yesterday, a paper detailing the CheckList tool received the Best Paper award from organizers of the Association for Computational Linguistics (ACL) conference. The ACL conference, which took place online this week, is one of the largest annual gatherings for researchers creating language models.

NLP models today are often evaluated based on how they perform on a series of individual tasks, such as answering questions using benchmark data sets with leaderboards like GLUE. CheckList instead takes a task-agnostic approach, allowing people to create tests that fill in cells in a spreadsheet-like matrix with capabilities (in rows) and test types (in columns), along with visualizations and other resources.

Analysis with CheckList found that about one in four sentiment analysis predictions by Amazon's Comprehend change when a random shortened URL or Twitter handle is placed in the text, and that Google Cloud's Natural Language and Amazon's Comprehend make mistakes when the names of people or locations are changed in the text.

"The [sentiment analysis] failure rate is near 100% for all commercial models when the negation comes at the end of the sentence (e.g., 'I thought the plane would be awful, but it wasn't'), or with neutral content between the negation and the sentiment-laden word," the paper reads.
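
These two failure modes can be sketched as simple behavioral tests. The code below is illustrative only, using a hypothetical predict_sentiment function standing in for whichever commercial API is under test, rather than the actual CheckList library:

```python
# Illustrative sketch of CheckList-style behavioral tests; predict_sentiment is hypothetical.
import random

def invariance_test(texts, predict_sentiment):
    """Predictions should not change when an irrelevant URL or handle is appended."""
    failures = 0
    for text in texts:
        noise = random.choice(["https://t.co/xyz123", "@some_handle"])
        if predict_sentiment(text) != predict_sentiment(f"{text} {noise}"):
            failures += 1
    return failures / len(texts)

def negation_test(predict_sentiment):
    """Minimum functionality test: negation at the end of the sentence."""
    cases = [
        ("I thought the plane would be awful, but it wasn't.", "positive"),
        ("I thought the crew would be great, but they weren't.", "negative"),
    ]
    return sum(predict_sentiment(text) != expected for text, expected in cases) / len(cases)
```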

CheckList also found shortcomings in models paraphrasing responses to Quora questions, despite those models surpassing human accuracy on the Quora Question Pairs benchmark. The creators of CheckList, from Microsoft, the University of Washington, and the University of California, Irvine, say the results indicate that using the approach can improve any existing NLP model.

"While traditional benchmarks indicate that models on these tasks are as accurate as humans, CheckList reveals a variety of severe bugs, where commercial and research models do not effectively handle basic linguistic phenomena such as negation, named entities, coreferences, semantic role labeling, etc., as they pertain to each task," the paper reads. NLP practitioners using CheckList created twice as many tests, and found almost three times as many bugs, as users without it.

Google's BERT and Facebook AI's RoBERTa were also evaluated using CheckList. The authors said BERT exhibited gender bias in machine comprehension, overwhelmingly predicting men as doctors, for example. BERT was also found to always make positive predictions about people who are straight or Asian and negative predictions when dealing with text about people who are atheist, Black, gay, or lesbian. An analysis in early 2020 also found systemic bias among large-scale language models.

In recent months, some of the largest Transformer-based language models yet devised have come into being, from Nvidia's Megatron to Microsoft's Turing-NLG. Large language models have racked up impressive scores on particular tasks. But some NLP researchers argue that a focus on human-level performance on individual tasks ignores the ways in which NLP systems are still brittle or less than robust.

As part of a use case test with the team at Microsoft in charge of Text Analytics, a model currently in use by customers that has gone through multiple evaluations, CheckList found previously unknown bugs. The Microsoft team will now use CheckList as part of its workflow when evaluating NLP systems. A collection of people from industry and academia testing AI with the tool over the span of two hours were also able to discover inaccuracies or bugs in state-of-the-art NLP models. An open source version of CheckList is currently available on GitHub.

Sometimes referred to as black box testing, behavioral testing is an approach common in software engineering but not in AI. CheckList can run tests in areas like sentiment analysis, machine comprehension, and duplicate question detection. It can also probe capabilities like robustness, fairness, and logic across those kinds of tasks.

The authors are unequivocal in their conclusion that benchmark tasks alone are not sufficient for evaluating NLP models, but they also say that CheckList should complement, not replace, existing challenges and benchmark data sets used for measuring performance of language models.

"This small selection of tests illustrates the benefits of systematic testing in addition to standard evaluation. These tasks may be considered solved based on benchmark accuracy results, but the tests highlight various areas of improvement, in particular failure to demonstrate basic skills that are de facto needs for the task at hand," the paper reads.

Other noteworthy work at ACL includes research by University of Washington professor Emily Bender and Saarland University professor Alexander Koller that won the best theme award. The paper argues that progress on large neural network NLP models such as GPT-3 or BERT derivatives is laudable, but that members of the media and academia should not refer to large neural networks as capable of understanding or comprehension, and that clarity and humility are needed in the NLP field when defining ideas like meaning or understanding.

"While large neural language models may well end up being important components of an eventual full-scale solution to human-analogous natural language understanding, they are not nearly-there solutions to this grand challenge," the report reads.

Finally, a system from the U.S. Army Research Lab, University of Illinois, Urbana-Champaign, and Columbia University won the Best Demo paper award for its system named GAIA, which allows for text queries of multimedia like photos and videos.

Read more:

AI researchers create testing tool to find bugs in NLP from Amazon, Google, and Microsoft - VentureBeat

We Need a Plan for When AI Becomes Smarter Than Us – Futurism

In Brief: There will come a time when artificial intelligence systems are smarter than humans. When this time comes, we will need to build more AI systems to monitor and improve current systems. This will lead to a cycle of AI creating better AI, with little to no human involvement.

When Apple released its software application Siri in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software's imperfections highlight the clear limitations of current AI: today's machine intelligence can't understand the varied and changing needs and preferences of human life.

However, as artificial intelligence advances, experts believe that intelligent machines will eventually, and probably soon, understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.

If humans cannot understand and evaluate these machines, how will they control them?

Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.

The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: "One way humans can communicate what they want is by spending a lot of time digging down on some small decision that was made [by an AI], and trying to evaluate how good that decision was."

But while this is theoretically possible, the human researchers would never have the time or resources to evaluate every decision the AI made. "If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second," says Christiano.

For example, suppose an amateur chess player wants to understand a better chess player's previous move. Merely spending a few minutes evaluating this move won't be enough, but if she spends a few hours she could consider every alternative and develop a meaningful understanding of the better player's moves.

Fortunately for researchers, they don't need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose the machine's most interesting and informative decisions, where getting feedback would most reduce our uncertainty, Christiano explains.

"Say your phone pinged you about a calendar event while you were on a phone call," he elaborates. "That event is not analogous to anything else it has done before, so it's not sure whether it is good or bad." Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. The evaluator would study the transcript, ask the phone owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.
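
The selection step Christiano describes, flagging only the model's most uncertain decisions for human review, can be sketched roughly as follows (toy data and features, not his actual setup):

```python
# Illustrative sketch of uncertainty-based selection for human feedback; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_for_review(model, candidate_features, budget=5):
    """Pick the decisions whose predicted probability is closest to 0.5 (least confident)."""
    probs = model.predict_proba(candidate_features)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    return np.argsort(uncertainty)[:budget]

# Toy model trained on past "was this interruption OK?" feedback
rng = np.random.default_rng(0)
X_past, y_past = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
model = LogisticRegression().fit(X_past, y_past)

X_new = rng.normal(size=(50, 4))              # new, unlabeled interruption decisions
print(select_for_review(model, X_new))        # only these few go to a human evaluator
```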

This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?

Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.

With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, "we need to handle the case where AI systems surpass human performance at basically everything."

If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their human judgment. "They will need to enlist the help of more AI systems," Christiano explains.

When a phone pings a user while he is on a call, the user's reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human, "Should the phone have interrupted you right then?" The human might express annoyance at the interruption, but the machine might know better and understand that this annoyance was necessary to keep the user's life running smoothly.

In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AIs decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly, and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.

This training process would help Google understand how to create a safer and more intelligent AI, System 3, which the human researchers could then train using System 2.

Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, "it's effectively just one machine evaluating another machine's behavior."

"Ideally, each time you build a more powerful machine, it effectively models human values and does what humans would like," says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn't like.

In order to address these control issues, Christiano is working on an end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant. His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

You can learn more about Paul Christiano's work here.

View original post here:

We Need a Plan for When AI Becomes Smarter Than Us - Futurism

How AI fights the war against fake news – Fox News

A three-headed alien is wandering around Central Park right now. If you believe that, you might be susceptible to a fake news story. Artificial Intelligence technology, however, could be a vital weapon in the war on fake news, according to cybersecurity companies.

Popular during the last election but still prevalent on Facebook and other social media channels, fake news stories make wild claims, tend to exist only on a handful of minor news sites, and can be difficult to verify.

Yet, artificial intelligence could help us all weed out the good from the bad.

Experts tell Fox News that machine learning, natural language processing, semantic identification, and other techniques could at least provide a clue about authenticity.

Catherine Lu, a product manager at fraud detection company DataVisor, says AI could detect the semantic meaning behind a web article. Here's one example: with the three-headed alien, a natural language processing (or NLP) engine could look at the headline, the subject of the story, the geo-location, and the main body text. An AI could determine if other sites are reporting the same facts. And the AI could weigh the facts against established media sources.

"The New York Times is probably a more reputable source than an unknown, poorly designed website," Lu told Fox News. "A machine learning model can be trained to predict the reputation of a web site, taking into account features such as the Alexa web rank and the domain name (for example, a .com domain is less suspicious than a .web domain)."
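
As a rough illustration of the kind of model Lu describes (not DataVisor's product; every feature value below is hypothetical), a classifier could score site reputation from a handful of signals:

```python
# Illustrative sketch only -- hypothetical feature values, not DataVisor's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per site: [log10 traffic rank, suspicious TLD? (1/0), domain age in years]
X = np.array([
    [1.0, 0, 25.0],   # long-established, major-outlet profile
    [2.5, 0, 18.0],
    [3.2, 0, 10.0],
    [5.9, 0, 1.0],
    [6.8, 1, 0.2],    # obscure, newly registered, odd TLD
    [7.1, 1, 0.5],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = reputable, 0 = low reputation

model = RandomForestClassifier(random_state=0).fit(X, y)
print(model.predict_proba(np.array([[7.0, 1, 0.3]]))[0, 1])  # reputation score for a new site
```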

Ertunga Arsal, the CEO of German cybersecurity company ESNC, tells Fox News that an AI has an advantage in detecting fake news because of the extremely large data set: billions of websites all over the world. Also, the purveyors of fake news are fairly predictable.

One example he mentioned is that many of the fake news sites register for a Google AdSense account (using terms like "election"), then start posting the fake news, since one of the primary goals is to get people to click and then collect the ad revenue.

An AI could use keyword analytics to discover and flag sensational words often used in fake news headlines, he said, noting that there will only be an increase in the number of fake news stories, similar to the rise of spam, and that the time to do something about it is now.

Dr. Pradeep Atrey from the University at Albany has already conducted research on semantic processing to detect the authenticity of news sites. He tells Fox News a similar approach could be used to detect fake news. For example, an algorithm could rate sites based on a reward and punishment system. Less popular sites would be rated as less trustworthy.

"There are methods that can be used to at least minimize, if not fully eradicate, fake news instances," he says. "It depends on how and to what extent we use such methods in practice."

Unfortunately, according to Dr. Atrey, many people don't take the extra step to verify the authenticity of news sites to determine trustworthiness. An AI could identify a site as fake and pop up a warning to proceed with caution, similar to how malware detection works.

Not everyone is on board with using an AI to detect fake news, however.

Paul Shomo, a Senior Technical Manager at security firm Guidance Software, tells Fox News that fake news producers could figure out how to get around the AI algorithms. He says it's a little scary to think an AI might mislabel a real news story as fake (known as a false positive).

Book author Darren Campo from the NYU Stern School of Business says fake news is primarily about an emotional response. He says people won't care if an AI has identified news as fake. What they often care about is whether the news matches up with their own worldview.

"Fake news protects itself by embedding a fact in terms that can be defended," he tells Fox News. "While artificial intelligence can identify a fact as incorrect, the AI cannot comprehend the context in which people enjoy believing a lie."

That's at least good news for the three-headed alien.

Read the original post:

How AI fights the war against fake news - Fox News

There’s No Turning Back on AI in the Military – WIRED

For countless Americans, the United States military epitomizes nonpareil technological advantage. Thankfully, in many cases, we live up to it.

But our present digital reality is quite different, even sobering. Fighting terrorists for nearly 20 years after 9/11, we remained a flip-phone military in what is now a smartphone world. Infrastructure to support a robust digital force remains painfully absent. Consequently, service members lead personal lives digitally connected to almost everything and military lives connected to almost nothing. Imagine having some of the world's best hardware, stealth fighters or space planes, supported by the world's worst data plan.

Meanwhile, the accelerating global information age remains dizzying. The year 2020 is on track to produce 59 zettabytes of data. A zettabyte is a one with 21 zeroes after it; 59 of them is over 50 times the number of stars in the observable universe. On average, every person online contributes 1.7 megabytes of content per second, and counting. Taglines like "Data is the new oil" emphasize the economic import, but not its full potential. "Data is more" reverently captures its ever evolving, artificially intelligent future.

Will Roper is the Air Force and Space Force acquisition executive.

The rise of artificial intelligence has come a long way since 1945, when visionary mathematician Alan Turing hypothesized that machines would one day perform intelligent functions, like playing chess. Aided by meteoric advances in data processing, a million-billion-fold over the past 70 years, Turing's vision was achieved only 52 years later, when IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov, with select moves described as "almost human." But this impressive feat would be dwarfed in 2016, when Google's AlphaGo shocked the world with a beyond-human, even "beautiful" move on its way to defeating 18-time world Go champion Lee Sedol. That now famous move 37 of game two was the death knell of human preeminence in strategy games. Machines now teach the world's elite how to play.

China took more notice of this than usual. We've become frustratingly accustomed to them copying or stealing US military secrets; two decades of post-9/11 operations provide a lot of time to watch and learn. But China's ambitions far outstrip merely copying or surpassing our military. AlphaGo's victory was a Sputnik moment for the Chinese Communist Party, triggering its own NASA-like response: a national Mega-Project in AI. Though there is no moon in this digital space race, its giant leap may be the next industrial revolution. The synergy of 5G and cloud-to-edge AI could radically evolve the internet of things, enabling ubiquitous AI and all the economic and military advantages it could bestow. It's not just our military that needs digital urgency: Our nation must wake up fast. The only thing worse than fearing AI itself is fearing not having it.

There is a gleam of hope. The Air Force and Space Force had their own "move 37" moment last month during the first AI-enabled shoot-down of a cruise missile at blistering machine speeds. Though happening in a literal flash, this watershed event was seven years in the making, integrating technologies as diverse as hypervelocity guns, fighters, computing clouds, virtual reality, 4G LTE and 5G, and even Project Maven, the Pentagon's first AI initiative. In the blink of a digital eye, we birthed an internet of military things.

Working at unprecedented speeds (at least for the Pentagon), the Air Force and Space Force are expanding this IoT.mil across the military, and not a moment too soon. With AI surpassing human performance in more than just chess and Go, traditional roles in warfare are not far behind. Whose AI will overtake them is an operative question in the digital space race. Another is how our military finally got off the launch pad.

More than seven years ago, I spearheaded the development of hypervelocity guns to defeat missile attacks with low-cost, rapid-fire projectiles. I also launched Project Maven to pursue machine-speed targeting of potential threats. But with no defense plug-and-play infrastructure, these systems remained stuck in airplane mode. The Air Force and Space Force later offered me the much-needed chance to create that digital infrastructure: cloud, software platforms, enterprise data, even coding skills, from the ground up. We had to become a good software company to become a software-enabled force.

Read more here:

There's No Turning Back on AI in the Military - WIRED

Volkswagen partners with Nvidia to expand its use of AI beyond … – TechCrunch

Volkswagen is working with Nvidia to expand its usage of its artificial intelligence and deep learning technologies beyond autonomous vehicles and into other areas of business, the two companies revealed today.

VW set up its Munich-based data lab in 2014. Last year it pushed on with the hiring of Prof. Patrick van der Smagt to lead a dedicated AI team tasked with taking the technology into areas such as robotic enterprise, or the use of the technology in enterprise settings.

That's the backdrop to today's partnership announcement. VW wants to use AI and deep learning to power new opportunities within its corporate business functions and, more widely, in the field of mobility services. As an example, the German carmaker said it is working on procedures to help optimize traffic flow in cities and urban areas, while it sees potential for intelligent human-robot collaboration, too.

"Artificial intelligence is the key to the digital future of the Volkswagen Group. We want to develop and deploy high-performance AI systems ourselves. This is why we are expanding the expert knowledge required. Cooperation with NVIDIA will be a major step in this direction," Dr. Martin Hofmann, CIO of the Volkswagen Group, said in a statement.

Beyond the work on VW's own brands, the carmaker and Nvidia are teaming up to help other startups in the automotive space. The VW Data Lab is opening a startup support program specialized in machine learning and deep learning with Nvidia's help. The first batch will include five startups and start this fall. The duo is also reaching out to students with a Summer of Code camp that will begin soon.

Nvidia is already working with VW-owned Audi on self-driving cars, which they are aiming to bring to market by 2020, but today's announcement is purely about the data potential and not the vehicles themselves. VW did ink an agreement earlier this year to work with Nvidia to develop AI-cockpit services for its 12 automotive brands, but it is also working with rival chip firm Qualcomm on connected cars and smart in-car systems, too.

This VW hookup is one part of a triple dose of automotive-themed news updates from Nvidia today.

Separately, it announced that Volvo and Autoliv have committed to selling self-driving cars powered by its technology by 2021. Nvidia also signed up auto suppliers ZF and Hella to build additional safety standards into its autonomous vehicle platform.

Read more:

Volkswagen partners with Nvidia to expand its use of AI beyond ... - TechCrunch

Mighty AI and the Human Army Using Phones to Teach AI to Drive … – WIRED

Visit link:

Mighty AI and the Human Army Using Phones to Teach AI to Drive ... - WIRED

The all-knowing AI for your email – VentureBeat

You may already know this if you have ever sent me an email and waited for a response, but I'm living in a post-apocalyptic world where email is no longer viable.

You can understand why. I've been processing email since the '90s and now have 650,000 emails archived in my latest Gmail account. I receive hundreds of pitches per day, many of them (thankfully) flagged as promotional and dumped into a forgotten tab.

I'm mostly on Slack and Convo all day, chatting and posting in real time. I also use Facebook chat constantly, text with colleagues, and pretty much use any means possible to avoid the deluge of incoming email. It's not exactly a fear or a phobia, but it's heading in that direction.

Last fall, I wrote about using a chatbot instead of email. I still want one, so get busy on that idea, OK? For now, another option, maybe even a better one, is an AI for email.

Here's how this would work. For starters, my AI would know much more than Google Inbox on my phone (an app I stopped using a few weeks ago because it wasn't really helping and happens to crash constantly on an iPhone 7 Plus). I'm not talking about automation, about flagging messages or an auto-responder. True AI in my email would know a lot about me: which messages I usually read and from whom, whether I tend to respond to messages about new car technology (that's a yes), and which messages I let sit idle.

This AI would also know a lot about the sender. Similar to the Rapportive add-on, it would instantly identify influencers, people who have written intelligently about a topic that's of interest to me, and even be able to parse their message and determine whether the person knows what they're talking about. In a recent discussion with a colleague here at VentureBeat, we noted how it can be pretty obvious when someone is just getting into technology. A Twitter account that's only a year old? That doesn't seem right. An AI would know all of that about a sender.

And how about prioritizing? I'd like to get to work each day and process about 10 emails. The rest would be flagged, sorted, put into a bin, labeled, or discarded. The AI would not only respond to the low-priority emails, it could carry on a discussion for me. It would act like an avatar and handle all of the boring bits. I'd only see the messages that are important, urgent, or interesting.
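
None of this exists yet, but the prioritization step is easy to picture in code. Below is a minimal, hypothetical sketch of a triage scorer in Python; the Message record, the sender table, the topic list, and the hand-tuned weights are all invented for illustration, standing in for signals a real system would learn from years of mailbox history.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str
    body: str

# Hypothetical signals; a real system would learn these from mailbox history.
KNOWN_SENDERS = {"colleague@example.com": 0.9, "pr-blast@example.com": 0.1}
INTERESTING_TOPICS = ("car technology", "chatbot", "ai")

def priority_score(msg: Message) -> float:
    """Crude stand-in for a learned ranking model: higher means 'show it to me'."""
    score = KNOWN_SENDERS.get(msg.sender, 0.3)            # how often I read this sender
    text = (msg.subject + " " + msg.body).lower()
    if any(topic in text for topic in INTERESTING_TOPICS):
        score += 0.4                                      # topics I tend to respond to
    if "unsubscribe" in text:
        score -= 0.5                                      # smells promotional
    return max(0.0, min(1.0, score))

def triage(inbox: list[Message], keep: int = 10) -> list[Message]:
    """Surface only the top-N messages; everything else gets handled automatically."""
    return sorted(inbox, key=priority_score, reverse=True)[:keep]
```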

Too many email tools, like the now-defunct Mailbox app and (even though I use it myself) the Boomerang delayed-response add-on for Gmail, are designed to help you automate. I want the opposite. I want the AI to automate me. In other words, if we have to do all of the busy work of flagging and clicking a button to send a canned response, it means more work.

What does less work look like? A screen with 10 emails per day. Everything else would cruise along automatically, like a Tesla Model S on the highway set to Autopilot mode. The steering (replying to promotional emails), braking (weeding out the fluff), acceleration (reading and parsing messages to determine influence), lane keeping (carrying on a conversation as though it's me), and every other automation would happen without my knowledge or concern.

If you're already building this, we want to know about it. Send me your pitch. If you have more ideas on how an AI would work for email, please send me a note. I want to do a follow-up and include your ideas. If you want to promote a product, though, wait until the AI is operational.

See the article here:

The all-knowing AI for your email - VentureBeat

The origins of AI in healthcare, and where it can help the industry now – Healthcare IT News

Healthcare is at an inflection point. Machine learning and data science are becoming key components in developing predictive and prescriptive analytics. AI-powered applications are transforming the health sector by reducing spend, improving patient outcomes and increasing accessibility to care.

But where did AI in healthcare stem from? And what factors are driving AI use in healthcare today? Dr. Taha Kass-Hout, general manager for healthcare and AI and chief medical officer at Amazon Web Services, offered some historical perspective during a HIMSS20 Digital educational session, Healthcare's Prescription for Transformation: AI.

In medicine, at the end of the day, we want to know what sort of patient has a disease and what disease a patient has, so predicting what each patient needs and delivering the best care for them, that's ultimately the definition of precision health or precision medicine, Kass-Hout said.

The intersection of medicine and AI is really not a new concept, he added. Many have heard of a 1979 project that used artificial intelligence as it applied to infections such as meningitis and sepsis.

AI in medicine even goes back to 1964 with Eliza, the very first chatbot, which was a conversational tool that recreated the conversation between a psychotherapist and a patient, he explained. That also was the early days of applying artificial intelligence and rules-based systems to the interaction between patients and their caregivers, he added.

But up until three years ago, deep learning, when it comes to the most advanced algorithms, was never mentioned in The New England Journal of Medicine or The Lancet or even JAMA, he noted.

Today, if you're looking at PubMed, it cites over 12,000 publications with deep learning, over 50,000 with machine learning, and over 100,000 pieces of scientific healthcare literature with artificial intelligence, with the point that most of that is highly skewed toward perhaps the last few years.

Looking at this literature, one sees that most of the applications of artificial intelligence seen in healthcare today have involved pattern recognition, prediction and natural language understanding, he added.

If you look at the overall value of why AI is really important, especially in our current situation with the global pandemic we live in, 50% of the world's population has no access to essential healthcare, Kass-Hout stated.

If you look at the United States alone, 10% of the population has no insurance and 30% of the working population are underinsured, and insurance costs per individual have reached over $20,000-$30,000 in the last year alone.

So the healthcare industry also should look at AI as it relates to the way the industry collects information for medical records, he suggested. For example, the way this information is collected is error-prone, and roughly 30% of medical errors contribute to more than 500,000 deaths per year.

On a related note, when it comes to the need for AI, there is a projected shortage in the U.S. of more than 120,000 clinicians over the next decade, he added.

So this is really where, if we think about more of this global view of the problem as well as the population, we can see where AI and advancements in AI can really help us overcome many things; for example, performing tasks that doctors can't, said Kass-Hout, using large data sets and modern computational tools like deep learning and the power of the cloud to recognize patterns too subtle for any human to discern.

In the HIMSS20 Digital educational session, attendees can hear directly from four experts on how and why they are focusing on some of the industry's biggest opportunities and where AI can help tackle both financial and operational inefficiencies that plague global health systems today.

Kass-Hout is joined by Karen Murphy, RN, executive vice president and chief innovation officer at Geisinger; Dr. Marc Overhage, former vice president of intelligence strategy and chief medical informatics officer at Cerner; and Stefan Behrens, CEO and co-founder of Gyant, a vendor of an AI-powered virtual assistant.


Read more:

The origins of AI in healthcare, and where it can help the industry now - Healthcare IT News

Predicting chaos using aerosols and AI – Washington University in St. Louis Newsroom

If a poisonous gas were released in a bioterrorism attack, the ability to predict the path of its molecules through turbulent winds, temperature changes and unstable buoyancies could mean life or death. Understanding how a city will grow and change over a 20-year period could lead to more sustainable planning and affordable housing.

Deriving equations to solve such problems, adding up all of the relevant forces, is at best difficult to the point of near-impossibility and at worst actually impossible. But machine learning can help.

Using the motion of aerosol particles through a system in flux, researchers from the McKelvey School of Engineering at Washington University in St. Louis have devised a new model, based on a deep learning method, that can help researchers predict the behavior of chaotic systems, whether those systems are in the lab, in the pasture or anywhere else.

That is the beauty of aerosols, said Rajan Chakrabarty, assistant professor of energy, environmental and chemical engineering. It's beyond one discipline; it's just fundamental particles floating in air, and you just observe the chaos.

The research was published as a cover article in the Journal of Aerosol Science.

Chakrabarty and his team, postdoctoral researcher Pai Liu and Jingwei Gan, then a PhD candidate at the Illinois Institute of Technology, tested two deep learning methods and determined that the generative adversarial network produced the most accurate outcomes. This kind of AI is first fed information about a real-world process; then, based on that data, it creates a simulation of that process.

Motivated by game theory, a generative adversarial network pits two models against each other: a generator produces randomly generated data (fake), while a discriminator receives both the ground truth (real) and the generated data and tries to determine which is real and which is fake.

This process repeats many times, providing feedback, and the system as a whole gets continually better at generating data that matches the data on which it was trained.
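
The researchers' code is not published with the article, but the adversarial loop described above follows a standard pattern. The PyTorch sketch below is only an illustration of that real-versus-fake feedback cycle under assumed toy dimensions and network definitions; it is not the team's model.

```python
import torch
import torch.nn as nn

# Assumed toy shapes: each sample is a flattened 3-D trajectory of 100 time steps.
TRAJ_DIM, NOISE_DIM = 300, 64

generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, TRAJ_DIM))
discriminator = nn.Sequential(nn.Linear(TRAJ_DIM, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_trajectories: torch.Tensor) -> None:
    batch = real_trajectories.size(0)
    fake_trajectories = generator(torch.randn(batch, NOISE_DIM))

    # Discriminator: label real data 1 and generated data 0, and learn to tell them apart.
    d_loss = loss_fn(discriminator(real_trajectories), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_trajectories.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the updated discriminator call its output real.
    g_loss = loss_fn(discriminator(fake_trajectories), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```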

It is computationally expensive to describe the chaotic motion of an aerosol particle through a turbulent system, so Chakrabarty and his team needed real data, a real example, to train their system. This is where aerosols came in.

The team used the buoyancy-opposed flame in the Chakrabarty lab to create examples on which the AI could be trained. In this case, we experimentally added chaos to a system by introducing buoyancy and temperature differences, Chakrabarty said. Then, they turned on a high-speed camera and recorded 3-D trajectory datasets for soot particles as they meandered through, zipped around and shot across the flame.

They trained two kinds of artificial intelligence models with the data from the fire chamber: the variational autoencoder method and a generative adversarial network (GAN). Each model then produced its own simulation. Only the GAN's trajectories mirrored the statistical traits found in the experiments, producing true-to-life simulations of chaotic aerosol particles.

The real-time trajectory of a particle next to the simulated trajectory produced by the GAN

Chakrabarty's deep learning model can do more than simulate where soot, or chemicals, will wind up once released into the atmosphere. You see many examples of this kind of chaos, from foraging animals, to the transport of atmospheric pollutants and biothreats, to search and rescue strategies, he said.

In fact, the lab is now working with a psychiatrist looking at the efficacy of treatment in children with tic syndrome. Tics are chaotic, Chakrabarty explained, so the typical clinical trial setup may not be effective in determining a medication's efficacy.

The wide application of this new deep learning model speaks not only to the power of artificial intelligence, but also may say something more salient about reality.

Chaos, or order, depends on the eye of the beholder, he said. What this tells you is that there are certain laws that govern everything around us. But they're hidden.

You just have to uncover them.

Read this article:

Predicting chaos using aerosols and AI - Washington University in St. Louis Newsroom

AI can determine our motivations using a simple camera – TNW


Silver Logic Labs (SLL) is in the people business. Technically, it's an AI startup, but what it really does is figure out what people want. At first glance, they've simply found a better way to do focus groups, but after talking to CEO Jerimiah Hamon, we've learned there's nothing simple about the work he's doing.

The majority of AI in the world is being taught to do boring stuff. The machines are learning to analyze data and scrape websites. They're being forced to sew shirts and watch us sleep. Hamon and his team created an algorithm that analyzes the tiniest of human movements, using a camera, and determines what that person is feeling.


Don't worry if your mind isn't blown right now; it takes a little explanation to sink in. Imagine you're trying to determine whether a TV show will be popular with an audience, and you've gathered a group of test viewers who've just seen your show. How do you know if they're responding honestly, or simply trying to respond in the way they think they should? Hamon told us:

You have these situations where you're trying to determine how people feel about something that could possibly be considered controversial, or that people might not want to be honest about. You might have a scene with two men kissing each other, or two women. You might have a scene where a dog gets hit by a car in such a way that it's supposed to be funny.

We'll find, sometimes, people will respond that they didn't like those things, but then when we analyze what they were doing while they were watching it, we pick up these details and we see they're expressing joy, or arousal, quite often.

And we're better at predicting whether that show is going to do well based on our insight than if you just go by how people respond to the list of questions.

SLL is trying to solve one of the oldest problems in the world: people lie. In fact, according to the fictional Dr. House, M.D., Everybody lies. More importantly, though, Hamon, who is not at all fictional, told us:

With our system we find that we get a lot more data. We can use it to watch every second and compare every second to every other second in a way a person watching can't. So when asked, Can you predict a Nielsen rating?, the answer is yes; the lowest accuracy rating we've got is about 89%, and that's the lowest.

Being able to determine the viability of a TV show, or how people feel about a specific scene in a movie, is a pretty neat trick. The fact that they've adapted the technology to work with almost any laptop camera for survey purposes, such as observing someone watching a video clip at home, is astounding.

Hamon told us that the algorithms work so well his team almost always ends up flunking certain respondents for being under the influence of a substance. A drug-detecting robot that can be employed through any connected camera? That's a little spooky.

SLL does more than provide analytics for TV shows and movies; in fact, its ambitions might be some of the highest we've ever seen for an AI company. We asked Hamon how this technology was supposed to be used outside of simply detecting whether someone liked something or not:

I'm very passionate about health care. With this we can identify neural deficits very quickly. We did a lot of research, and it turns out it's proven that if you're going to have a stroke, you will have a series of micro-strokes first. These are undetectable most of the time, sometimes even to the people having them. If you live at a nursing home, for instance, we could have cameras set up, and we could detect those.

These people might have a one percent change in gait; we could see that. For example, our system might be able to detect the first of the micro-strokes and signal for help.

The company also wants to change the way law enforcement works. Hamon believes that dash-cams and body-cams that utilize this technology will save lives. He proposed a what-if scenario:

Say you've got someone running up to a building and there's someone on guard. They might see this person running, who, incidentally, has just lost their baby and needs help, as a threat, maybe due to a lack of training or because they're scared.

The other side is maybe you see someone running and think they need your help when in reality they have a pound of explosives in their backpack. We know that how a person moves is different based on how they feel. People make these decisions under extreme pressure, and they're not always right.

There are even more uses in educational applications. The potential to determine exactly how students respond to a teacher, or to tailor a specific lesson to an individual, could help a lot of people, especially those who aren't benefiting from traditional methods.

It's about time someone created an AI that helps us better understand each other in a practical sense, one that might actually save lives.


Read the original post:

AI can determine our motivations using a simple camera - TNW

Comcast credits AI software for handling the pandemic internet traffic crush – VentureBeat


Comcast said investments in artificial intelligence software and network capacity have helped it meet internet traffic demand during the pandemic.

Elad Nafshi, senior vice president for next-generation access networks at Comcast Xfinity, said in an interview with VentureBeat that the nation's internet network has held up during the surge of residential internet traffic from people working at home. But this success wasn't just because of capital spending on fiber-optic networks. Rather, it has depended on a suite of AI and machine-learning software that gives the company visibility into its network, adds capacity quickly when needed, and fixes problems before humans notice them.

Comcast's network is accessible to more than 59 million U.S. homes via 800,000 miles of cable (about three times the distance to the moon). Back in March, Comcast said internet traffic had risen 32% because of COVID-19 but assured everyone it had the capacity to handle peak traffic demands in the U.S. The company also saw a 36% increase in mobile data use over Wi-Fi on Xfinity Mobile.

The first part of the growth was because of work from home, Jan Hofmeyr, chief network officer at the Comcast Technology Center in Philadelphia, said in an interview with VentureBeat. Things like video conferencing started to drive a lot of traffic. The consumption of video went up significantly. And then with kids being home, you could see playing games going upward. We saw it go up across the board.

But since March and April, the traffic from Comcast's 21 million subscribers has hit a plateau. People are getting out of their homes more, and the initial surge of work-from-home has normalized, Hofmeyr said.

The company normally adds capacity 12 to 18 months ahead of time, with typical plans targeting traffic increases of 45% a year. Since 2017, Comcast has invested $12 billion in the network and added 33,331 new route miles of fiber-optic cable. Those investments have enabled the company to double capacity every 2.5 years, Hofmeyr said.

Above: Comcast executive vice president and chief network officer Jan Hofmeyr.

Image Credit: Comcast

With COVID-19, we obviously saw a massive surge in the network, and looking back in retrospect the network was highly reliable, Hofmeyr said. We were able to respond quickly as we saw the spike in traffic. We were able to add capacity without having to take the network down. It was designed for that.

During the initial stages of the pandemic, the new technologies were able to handle regional surges while internet traffic spiked as much as 60%. Nafshi told VentureBeat the network can't handle surges just by getting bigger. In March and April, Comcast added 35 terabits per second of peak capacity to regional networks. And the company added 1,700 100-gigabit links to the core network, compared to 500 in the same months a year earlier.

The company's software, called Comcast Octave, helps manage traffic complexity, working behind the scenes where customers don't notice it. The AI platform was developed by Comcast engineers in Philadelphia. It checks 4,000-plus telemetry data points (such as external network noise, power levels, and other technical issues that can add up to a big impact on performance) on more than 50 million modems across the network every 20 minutes. While invisible, the AI and machine learning tech has played a valuable role over the past several months.

COVID-19 was a very unique experience for us, said Nafshi. When you're building networks, you never build for the situation where everyone gets locked up in their rooms in their homes and suddenly they jump online. Now, that's the new normal. The challenge we are presented with is how to enable our customers to shelter in place and work and be entertained.

Octave is programmed to detect when modems aren't using all the bandwidth available to them as efficiently as possible. Then it automatically adjusts them, delivering substantial increases in speed and capacity. Octave is a new technology, so when COVID-19 hit, Comcast had only rolled it out to part of the network.
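
Comcast has not published Octave's internals, so the following Python sketch is purely illustrative of the behavior described here: poll telemetry for each modem on a cycle, nudge its configuration when it is healthy but underusing its available bandwidth, and back it off when conditions degrade. The field names, thresholds, and the fetch_telemetry/apply_profile helpers are all hypothetical.

```python
from statistics import mean

def fetch_telemetry(modem_id: str) -> dict:
    """Hypothetical stand-in for the roughly 20-minute polling cycle described above."""
    raise NotImplementedError

def apply_profile(modem_id: str, profile: str) -> None:
    """Hypothetical stand-in for remotely adjusting a modem's configuration."""
    raise NotImplementedError

def check_modem(modem_id: str) -> None:
    t = fetch_telemetry(modem_id)            # e.g. noise, power levels, error counts
    noise_ok = t["snr_db"] > 30              # assumed thresholds, for illustration only
    errors_ok = mean(t["codeword_error_rates"]) < 0.01
    using_capacity = t["throughput_mbps"] / t["provisioned_mbps"] > 0.8

    if noise_ok and errors_ok and not using_capacity:
        # Modem is healthy but not exploiting the bandwidth available to it:
        # switch it to a more aggressive profile.
        apply_profile(modem_id, "higher-order-modulation")
    elif not (noise_ok and errors_ok):
        # Degrading conditions: fall back to a more robust profile before
        # the customer ever notices a problem.
        apply_profile(modem_id, "robust-fallback")
```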

To meet the sudden demand, a team of about 25 Octave engineers worked seven-day weeks to reduce the deployment process from months to weeks. As a result, customers experienced a nearly 36% increase in capacity just as they were using more bandwidth than ever before for working, streaming, gaming, and videoconferencing.

We've had a fair amount of experience already looking at data patterns and acting on them, Nafshi said. We had an interactive platform deployed that we were leaning on. We looked at the data and network conditions and decided what knobs we needed to turn on our infrastructure in order to really optimize how packets get delivered to the home.

Comcast took the data it had collected and put it into algorithmic solutions to predict where interference could disrupt networks or trouble points might appear.

We have to turn the knobs so that we optimize delivery to your house, which would not be the same as the delivery to my home, Nafshi said. We provide you with much more reliable service by detecting the patterns that lead up to breakage and then have the network self-heal based on those patterns. We're making that completely transparent to the customer. The network can self-heal autonomously in a self-feedback loop. It's a seamless platform for the customer.

Above: The Comcast Technology Center in Philadelphia.

Image Credit: Comcast

Before introducing Comcast Octave, the company also deployed its Smart Network Platform. Developed by Comcast engineers, this suite of software tools automates core network functions. As a result of this investment, Comcast was able to dramatically cut down the number of outages customers experience and their duration. The outages are now lasting a matter of minutes sometimes, compared to hours before, said Noam Raffaelli, senior vice president of network and communications engineering at Comcast Xfinity, in an interview with VentureBeat.

We are trying to benefit from innovation on software to basically drive our outcomes and our operational key performance indicators (KPIs) down so things like outage minutes or minutes to repair go down, said Raffaelli. We look at data across our network and use data science to understand trends and do correlations between events we see on the network. We have telemetry and automation, so we can operate the equipment without the manual interference of our engineers. We mitigate issues before there is any degradation in the networks.

On top of that, the equipment is more secure and more automated, Raffaelli said. Comcast has also been able to figure out how to build redundancies into the network so it can hold up in the case of accidents, such as a backhoe operator cutting a fiber-optic cable.

This gives us an unprecedented real-time view of our network and unprecedented insights into what the customer experience is, Raffaelli said. We've had a double-digit improvement in outage minutes and repair. We are building redundant links across the network.

A tool called NetIQ uses machine learning to scan the core network continuously, making thousands of measurements every hour. Before NetIQ, Comcast would often find out about a service-impacting issue like a fiber cut when it started seeing service degradation or getting customer calls.

With NetIQ in place, Comcast can see an outage instantly. The company has reduced the average amount of time it takes to detect a potentially service-impacting issue on the core network from 90 minutes to less than five minutes, which has paid off during COVID-19.

I witnessed some of this firsthand, as I'm a Comcast subscriber. In four months, I've had only one outage. I logged into my service account via the phone and got a message saying my area was experiencing an outage that was expected to last for 90 minutes. After that, the network was fixed, and I have stayed on it since.

Above: Comcast manages its network from the CTC in Philadelphia.

Image Credit: Comcast

Gamers are among the hardest internet users to please, as they want to download a new game as soon as it's available. They also want low latency, or no interaction delays, which is important in multiplayer shooters like Call of Duty: Warzone, where you don't want confusion over who pulled a trigger first.

We are laser-focused on latency across our network. It's an extremely important metric that we track very closely across the entire network, Hofmeyr said. We feel very bullish and very excited about what we are able to deliver from a business perspective. I don't believe that we have any negative impact on gaming from a latency perspective.

He added, Gaming is driving two things for us. One is the game downloads are just becoming bigger and bigger. It is very common today for a game download to be multiple gigabytes. And when games are released, you see massive expansion and growth in terms of downloads. On the latency side, we continuously invest. We are looking at AI. We are looking at software and tools to help improve it over time.

Game companies invest in low-latency game servers and improving the connections between specific gamers who are in the same match or the same region so latency doesnt affect them as much. But infrastructure companies like Comcast can also improve latency.

Content delivery networks are an integral part of making video delivery more efficient. Comcast video is delivered through the company's own CDNs, which position videos throughout the network so they can be delivered over as short a distance as possible to the viewer. The company constantly monitors peaks in traffic and designs the network for those peaks. Having a lot of people playing a game or watching a video at the same time establishes new peaks. But the 1,700 100-gigabit links allow the company to deal with those peaks by helping each region handle spikes in specific parts of the network.

Above: Inside Comcasts CTC in Philadelphia.

Image Credit: Comcast

While it's still early in the process, Comcast is moving to a virtualized, cloud-based network architecture so it can manage accelerating demand and deliver faster, more reliable service. Virtualization means taking functions that were once performed by large, purpose-built pieces of hardware (hardware that required manual upgrades to deliver innovation) and moving them into the cloud.

Transitioning into web-based software is helping us self-heal much faster and build our capabilities faster, Nafshi said. If there is a failure point, you fail at a container level rather than an appliance level, and that greatly reduces the time to repair and mitigate.

By doing this, Comcast will reduce the innovation cycles on those functions from years down to months. One example of this is the virtual CMTS initiative. (A CMTS is a large piece of hardware that serves an entire neighborhood, delivering traffic between the core network and homes.) Increasingly, Comcast has been making those devices virtual by transitioning their functions into software that runs in data centers.

This not only allows Comcast to innovate faster, it also provides two key benefits for customers. First, it allows the firm to introduce much smaller failure points into the system, grouping customers into smaller groups so if one part of the network environment experiences an issue, it affects far fewer people. Second, the virtual architecture lets Comcast leverage other AI tools to have far greater visibility into the health of the network and to self-heal issues without human intervention.

Upload speeds increased somewhat during COVID-19, but not nearly as much as downloading did. Uploads are driven by things such as livestreamers, who share their video across a network of fans. In the future, Comcast is promising symmetrical download and upload speeds at 10 gigabits a second. It hasnt said when that will happen, but Cable Labs, the research arm of the cable industry, is working on the technology.

It's something that is very much in development, Hofmeyr said. It's going to be remarkable. We can deploy on top of existing infrastructure by leveraging AI software and the evolving DOCSIS protocol.

Read the original post:

Comcast credits AI software for handling the pandemic internet traffic crush - VentureBeat

What Does An AI Chip Look Like? – SemiEngineering

Depending upon your point of reference, artificial intelligence will be the next big thing or it will play a major role in all of the next big things.

This explains the frenzy of activity in this sector over the past 18 months. Big companies are paying billions of dollars to acquire startup companies, and even more for R&D. In addition, governments around the globe are pouring additional billions into universities and research houses. A global race is underway to create the best architectures and systems to handle the huge volumes of data that need to be processed to make AI work.

Market projections are rising accordingly. Annual AI revenues are predicted to reach $36.8 billion by 2025, according to Tractica. The research house says it has identified 27 different industry segments and 191 use cases for AI so far.

Fig. 1. AI revenue growth projection. Source: Tractica

But dig deeper and it quickly becomes apparent there is no single best way to tackle AI. In fact, there isn't even a consistent definition of what AI is or of the data types that will need to be analyzed.

There are three problems that need to be addressed here, said Raik Brinkmann, president and CEO of OneSpin Solutions. The first is that you need to deal with a huge amount of data. The second is to build an interconnect for parallel processing. And the third is power, which is a direct result of the amount of data that you have to move around. So you really need to move from a von Neumann architecture to a data flow architecture. But what exactly does that look like?

So far there are few answers, which is why the first chips in this market include various combinations of off-the-shelf CPUs, GPUs, FPGAs and DSPs. While new designs are under development by companies such as Intel, Google, Nvidia, Qualcomm and IBM, its not clear whose approach will win. It appears that at least one CPU always will be required to control these systems, but as streaming data is parallelized, co-processors of various types will be required.

Much of the processing in AI involves matrix multiplication and addition. Large numbers of GPUs working in parallel offer an inexpensive approach, but the penalty is higher power. FPGAs with built-in DSP blocks and local memory are more energy efficient, but they generally are more expensive. This also is a segment where software and hardware really need to be co-developed, but much of the software is far behind the hardware.
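
To see why matrix multiplication dominates, consider that a single fully connected neural-network layer is just a matrix product plus a bias and an activation. The NumPy sketch below uses arbitrary illustrative sizes; even this one small layer amounts to tens of millions of multiply-accumulate operations per batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fully connected layer: y = activation(x @ W + b).
# For a batch of 64 inputs with 1,024 features feeding 512 units,
# this single line is already ~33 million multiply-accumulate operations.
x = rng.standard_normal((64, 1024))
W = rng.standard_normal((1024, 512))
b = np.zeros(512)

y = np.maximum(x @ W + b, 0.0)   # ReLU activation

print(y.shape)                   # (64, 512)
```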

There is an enormous amount of activity in research and educational institutions right now, said Wally Rhines, chairman and CEO of Mentor Graphics. There is a new processor development race. There are also standard GPUs being used for deep learning, and at the same time there are a whole bunch of people doing work with CPUs. The goal is to make neural networks behave more like the human brain, which will stimulate a whole new wave of design.

Vision processing has received most of the attention when it comes to AI, largely because Tesla has introduced self-driving capabilities nearly 15 years before the expected rollout of autonomous vehicles. That has opened a huge market for this technology, and for chip and overall system architectures needed to process data collected by image sensors, radar and LiDAR. But many economists and consulting firms are looking beyond this market to how AI will affect overall productivity. A recent report from Accenture predicts that AI will more than double GDP for some countries (see Fig. 2 below). While that is expected to cause significant disruption in jobs, the overall revenue improvement is too big to ignore.

Fig. 2: AI's projected impact. Source: Accenture

Aart de Geus, chairman and co-CEO of Synopsys, points to three waves of electronics: computation and networking, mobility, and digital intelligence. In the latter category, the focus shifts from the technology itself to what it can do for people.

You'll see processors with neural networking IP for facial recognition and vision processing in automobiles, said de Geus. Machine learning is the other side of this. There is a massive push for more capabilities, and the state of the art is doing this faster. This will drive development to 7nm and 5nm and beyond.

Current approaches

Vision processing in self-driving dominates much of the current research in AI, but the technology also has a growing role in drones and robotics.

For AI applications in imaging, the computational complexity is high, said Robert Blake, president and CEO of Achronix. With wireless, the mathematics is well understood. With image processing, it's like the Wild West. It's a very varied workload. It will take 5 to 10 years before that market shakes out, but there certainly will be a big role for programmable logic because of the need for variable-precision arithmetic that can be done in a highly parallel fashion.

FPGAs are very good at matrix multiplication. On top of that, programmability adds some necessary flexibility and future-proofing into designs, because at this point it is not clear where the so-called intelligence will reside in a design. Some of the data used to make decisions will be processed locally, some will be processed in data centers. But the percentage of each could change for each implementation.

That has a big impact on AI chip and software design. While the big picture for AI hasn't changed much (most of what is labeled AI is closer to machine learning than true AI), the understanding of how to build these systems has changed significantly.

With cars, what people are doing is taking existing stuff and putting it together, said Kurt Shuler, vice president of marketing at Arteris. For a really efficient embedded system to be able to learn, though, it needs a highly efficient hardware system. There are a few different approaches being used for that. If you look at vision processing, what you're doing is trying to figure out what it is that a device is seeing and how you infer from that. That could include data from vision sensors, LiDAR and radar, and then you apply specialized algorithms. A lot of what is going on here is trying to mimic what's going on in the brain using deep and convolutional neural networks.

Where this differs from true artificial intelligence is that the current state of the art is being able to detect and avoid objects, while true artificial intelligence would be able to add a level of reasoning, such as how to get through a throng of people crossing a street, or whether a child chasing a ball is likely to run into the street. In the former, judgments are based on input from a variety of sensors, massive data crunching and pre-programmed behavior. In the latter, machines would be able to make value judgments, such as weighing the many possible consequences of swerving to avoid the child, and which is the best choice.

Sensor fusion is an idea that comes out of aircraft in the 1990s, said Shuler. You get it into a common data format where a machine can crunch it. If you're in the military, you're worried about someone shooting at you. In a car, it's about someone pushing a stroller in front of you. All of these systems need extremely high bandwidth, and all of them have to have safety built into them. And on top of that, you have to protect the data because security is becoming a bigger and bigger issue. So what you need is both computational efficiency and programming efficiency.

This is what is missing in many of the designs today because so much of the development is built with off-the-shelf parts.

If you optimize the network, optimize the problem, minimize the number of bits and utilize hardware customized for a convolutional neural network, you can achieve a 2X to 3X order of magnitude improvement in power reduction, said Samer Hijazi, senior architect at Cadence and director of the company's Deep Learning Group. The efficiency comes from software algorithms and hardware IP.

Google is attempting to alter that formula. The company has developed Tensor processing units (TPUs), which are ASICs created specifically for machine learning. And in an effort to speed up AI development, the company in 2015 turned its TensorFlow software into open source.

Fig. 3: Googles TPU board. Source: Google.

Others have their own platforms. But none of these is expected to be the final product. This is an evolution, and no one is quite sure how AI will evolve over the next decade. Thats partly due to the fact that use cases are still being discovered for this technology. And what works in one area, such as vision processing, is not necessarily good for another application, such as determining whether an odor is dangerous or benign, or possibly a combination of both.

We're shooting in the dark, said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. We know how to do machine learning and AI, but how they actually work and converge is unknown at this point. The current approach is to have lots of compute power and different kinds of compute engines (CPUs, and DSPs for neural networking types of applications), and you need to make sure it works. But that's just the first generation of AI. The focus is on compute power and heterogeneity.

That is expected to change, however, as the problems being solved become more targeted. Just as with the early versions of IoT devices, no one quite knew how various markets would evolve so systems companies threw in everything and rushed products to market using existing chip technology. In the case of smart watches, the result was a battery that only lasted several hours between charges. As new chips are developed for those specific applications, power and performance are balanced through a combination of more targeted functionality, more intelligent distribution of how processing is parsed between a local device and the cloud, and a better understanding of where the bottlenecks are in a design.

The challenge is to find the bottlenecks and constraints you didn't know about, said Bill Neifert, director of models technology at ARM. But depending on the workload, the processor may interact differently with the software, which is almost inherently a parallel application. So if you're looking at a workload like financial modeling or weather mapping, the way each of those stresses the underlying system is different. And you can only understand that by probing inside.

He noted that the problems being solved on the software side need to be looked at from a higher level of abstraction, because that makes them easier to constrain and fix. That's one key piece of the puzzle. As AI makes inroads into more markets, all of this technology will need to evolve to achieve the same kinds of efficiencies that the tech industry in general, and the semiconductor industry in particular, have demonstrated in the past.

Right now we find architectures are struggling if they only handle one type of computing well, said Mohandass. But the downside with heterogeneity is that the whole divide and conquer approach falls apart. As a result, the solution typically involves over-provisioning or under-provisioning.

New approaches

As more use cases are established for AI beyond autonomous vehicles, adoption will expand.

This is why Intel bought Nervana last August. Nervana develops 2.5D deep learning chips that utilize a high-performance processor core, moving data across an interposer to high-bandwidth memory. The stated goal is a 100X reduction in time to train a deep learning model as compared with GPU-based solutions.

Fig. 4: Nervana AI chip. Source: Nervana

These are going to look a lot like high-performance computing chips, which are basically 2.5D chips and fan-out wafer-level packaging, said Mike Gianfagna, vice president of marketing at eSilicon. You will need massive throughput and ultra-high-bandwidth memory. We've seen some companies looking at this, but not dozens yet. It's still a little early. And when you're talking about implementing machine learning and adaptive algorithms, and how you integrate those with sensors and the information stream, this is extremely complex. If you look at a car, you're streaming data from multiple disparate sources and adding adaptive algorithms for collision avoidance.

He said there are two challenges to solve with these devices. One is reliability and certification. The other is security.

With AI, reliability needs to be considered at a system level, which includes both hardware and software. ARM's acquisition of Allinea in December provided one reference point. Another comes out of Stanford University, where researchers are trying to quantify the impact of trimming computations from software. They have discovered that massive cutting, or pruning, doesn't significantly impact the end product. The University of California at Berkeley has been developing a similar approach based upon computing that is less than 100% accurate.

Coarse-grain pruning doesn't hurt accuracy compared with fine-grain pruning, said Song Han, a Ph.D. candidate at Stanford University who is researching energy-efficient deep learning. Han said that a sparse matrix developed at Stanford required 10X less computation, had an 8X smaller memory footprint, and used 120X less energy than DRAM. Applied to what Stanford is calling an Efficient Speech Recognition Engine, he said that compression led to accelerated inference. (Those findings were presented at Cadence's recent Embedded Neural Network Summit.)
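
The exact recipes are in the Stanford papers; the NumPy sketch below only illustrates the distinction Han draws, under invented sizes: fine-grain pruning zeroes the individually smallest weights, while coarse-grain pruning removes whole rows (for example, entire neurons or channels), which keeps the surviving structure dense and therefore friendlier to hardware.

```python
import numpy as np

def fine_grain_prune(W: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the individually smallest weights until `sparsity` of them are gone."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) < threshold, 0.0, W)

def coarse_grain_prune(W: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero entire rows (e.g. whole neurons) with the smallest L2 norms."""
    norms = np.linalg.norm(W, axis=1)
    threshold = np.quantile(norms, sparsity)
    W = W.copy()
    W[norms < threshold, :] = 0.0
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
for name, pruned in [("fine", fine_grain_prune(W, 0.9)),
                     ("coarse", coarse_grain_prune(W, 0.9))]:
    print(name, "fraction of zeros:", np.mean(pruned == 0.0))
```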

Quantum computing adds yet another option for AI systems. Leti CEO Marie Semeria said quantum computing is one of the future directions for her group, particularly for artificial intelligence applications. And Dario Gil, vice president of science and solutions at IBM Research, explained that using classical computing, there is a one in four chance of guessing which of four cards is red if the other three are blue. Using a quantum computer and entangling of superimposed qubits, by reversing the entanglement the system will provide a correct answer every time.

Fig. 5: Quantum processor. Source: IBM.

Conclusions

AI is not one thing, and consequently there is no single system that works everywhere optimally. But there are some general requirements for AI systems, as shown in the chart below.

Fig. 6: AI basics. Source: OneSpin

And AI does have applications across many markets, all of which will require extensive refinement, expensive tooling, and an ecosystem of support. After years of relying on shrinking devices to improve power, performance and cost, entire market segments are rethinking how they will approach new markets. This is a big win for architects and it adds huge creative options for design teams, but it also will spur massive development along the way, from tools and IP vendors all the way to packaging and process development. Its like hitting the restart button for the tech industry, and it should prove good for business for the entire ecosystem for years to come.

Related Stories
What Does AI Really Mean? eSilicon's chairman looks at technology advances, its limitations, and the social implications of artificial intelligence, and how it will change our world.
Neural Net Computing Explodes: Deep-pocket companies begin customizing this approach for specific applications, and spend huge amounts of money to acquire startups.
Plugging Holes In Machine Learning, Part 2: Short- and long-term solutions to make sure machines behave as expected.
Wearable AI System Can Detect A Conversation Tone (MIT): An artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person's speech patterns and vitals.

Read more:

What Does An AI Chip Look Like? - SemiEngineering

How a poker-playing AI could help prevent your next bout of the flu – ExtremeTech

You'd be forgiven for finding little exceptional about the latest defeat of an arsenal of poker champions by the computer algorithm Libratus in Pittsburgh last week. After all, in the last decade or two, computers have made a habit of crushing board game heroes. And at first blush, this appears to be just another iteration of that all-too-familiar story. Peel back a layer, though, and the most recent AI victory is as disturbing as it is compelling. Let's explore the compelling side of the equation before digging into the disturbing implications of the Libratus victory.

By now, many of us are familiar with the idea of AI helping out in healthcare. For the last year or so, IBM has been bludgeoning us with TV commercials about its Jeopardy-winning Watson platform, now being put to use to help oncologists diagnose and treat cancer. And while I wish to take nothing away from that achievement, Watson is a question-answering system with no capacity for strategic thinking. The latter topic belongs to a class of situations more germane to the field of game theory. Game theory is usually tucked under the sub-genre of economics, for it deals with how entities make strategic decisions in the pursuit of self-interest. It's also the discipline from which the AI poker-playing algorithm Libratus gets its smarts.

What does this have to do with health care and the flu? Think of disease as a game between strategic entities. Picture a virus as one player, a player with a certain set of attack and defense strategies. When the virus encounters your body, a game ensues, in which your body defends with its own strategies and hopefully prevails. This game has been going on a long time, with humans having only a marginal ability to control the outcome. Our body's natural defenses have been developed over evolutionary time, and thus have a limited ability to make on-the-fly adaptations.

But what if we could recruit computers to be our allies in this game against viruses? And what if the same reasoning ability that allowed Libratus to prevail over the best poker minds in the world could tackle how to defeat a virus or a bacterial infection? This is in fact the subject of a compelling research paper by Tuomas Sandholm, the designer of the Libratus algorithm. In it, he explains at length how an AI algorithm could be used for drug design and disease prevention.
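
Libratus itself is built on counterfactual regret minimization, which is far more elaborate than anything that fits in a news story. As a hedged illustration of the underlying idea, the Python sketch below runs plain regret matching on rock-paper-scissors, a stand-in for any attacker-versus-defender game; both players' average strategies drift toward the equilibrium mix, which is the kind of robust strategy such algorithms compute.

```python
import numpy as np

# Payoff to player 1 in rock-paper-scissors (rows: player 1's action, cols: player 2's).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def regret_matching(iterations: int = 20000) -> np.ndarray:
    """Return player 1's average strategy after self-play with regret matching."""
    n = PAYOFF.shape[0]
    regrets = [np.zeros(n), np.zeros(n)]
    strategy_sums = [np.zeros(n), np.zeros(n)]
    rng = np.random.default_rng(0)

    def strategy(r: np.ndarray) -> np.ndarray:
        positive = np.maximum(r, 0.0)
        total = positive.sum()
        return positive / total if total > 0 else np.full(n, 1.0 / n)

    for _ in range(iterations):
        s1, s2 = strategy(regrets[0]), strategy(regrets[1])
        strategy_sums[0] += s1
        strategy_sums[1] += s2
        a1, a2 = rng.choice(n, p=s1), rng.choice(n, p=s2)
        # Regret of an action = what it would have earned minus what the chosen action earned.
        regrets[0] += PAYOFF[:, a2] - PAYOFF[a1, a2]
        # Player 2's payoff is the negative of player 1's in this zero-sum game.
        regrets[1] += PAYOFF[a1, a2] - PAYOFF[a1, :]
    return strategy_sums[0] / iterations

print(regret_matching())  # drifts toward [1/3, 1/3, 1/3], the equilibrium mix
```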

With only the health of the entire human race at stake, it's hard to imagine a rationale that would discourage us from making use of such a strategic superpower. Now for the disturbing part of the story, and the so-called fable of the sparrows recounted by Nick Bostrom in his singular work Superintelligence: Paths, Dangers, Strategies. In the preface to the book, he tells of a group of sparrows who recruit a baby owl to help defend them against other predators, not realizing the owl might one day grow up and devour them all. In Libratus, an algorithm that is in essence a universal strategic game-playing machine, and that is likely capable of besting humankind in any number of real-world strategic games, we may have finally met our owl. And while the end of the story between ourselves and Libratus has yet to be determined, prudence would surely advise that we tread carefully.

View original post here:

How a poker-playing AI could help prevent your next bout of the flu - ExtremeTech

Dartmouth professor working on AI cancer cure | Education – The Union Leader

It's a big claim, Dartmouth College Professor Gene Santos Jr. admits, but he thinks his artificial intelligence tool can help doctors come up with a cancer cure.

We're trying to build this fundamental fabric to build that playbook together, so that it makes sense, and so you can start mixing existing playbooks, Santos said.

Santos and his team of Dartmouth engineering colleagues, along with Joseph Gormley, Director of Advanced Systems Development at Tufts Clinical and Translational Science Institute and his colleagues, as well as industry partner IOMICS, are working on a $34 million National Institutes of Health program to develop the artificial intelligence tool to bring together all known cancer research.

The plan is to develop an AI-based system that analyzes patients' clinical and genomic data and the relationships between biochemical pathways that drive health and disease, Santos said.

We're trying to find new connections that people have not seen, Santos said. We believe this system will generate new insights, accelerating the work of the biomedical researcher.

The research is already out there, and it is already being collected into knowledge databases. Santos and his team are working on developing the tool called the Pathway Hypothesis Knowledgebase, or PHK, which will analyze the data and come up with treatment plans.

Santos said the data and research available isn't always complete, and some of it is inconsistent.

Data is noisy, and data can be inconsistent, Santos said.

Different terms are used to describe the same subject from hospital to hospital, and not all hospitals and researchers use a universal set of measurements. The PHK will account for the inconsistencies and contradictions in the data, helping doctors see through the research and find the cures, Santos said.
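
The PHK itself is unpublished, so the Python snippet below is only a guess at the flavor of cleanup Santos describes: mapping synonymous terms to one canonical concept and converting measurements to a common unit before any reasoning happens. The term map and conversion factors are invented examples, not the project's actual vocabulary.

```python
# Hypothetical synonym map: many source terms, one canonical concept.
TERM_MAP = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood sugar": "hyperglycemia",
}

# Hypothetical unit conversions to a single canonical unit (mg/dL for glucose).
UNIT_TO_MG_DL = {"mg/dl": 1.0, "mmol/l": 18.0}

def normalize_term(term: str) -> str:
    key = term.strip().lower()
    return TERM_MAP.get(key, key)

def normalize_glucose(value: float, unit: str) -> float:
    return value * UNIT_TO_MG_DL[unit.strip().lower()]

# Two hospitals reporting the same finding differently end up comparable:
print(normalize_term("Heart Attack"), normalize_glucose(7.0, "mmol/L"))   # 126.0 mg/dL
print(normalize_term("MI"), normalize_glucose(126.0, "mg/dL"))            # 126.0 mg/dL
```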

With the PHK, doctors could treat a patient using historical data of other patients with similar symptoms and genomic profiles, according to Santos. It could also be used to determine additional uses for approved drugs already on the market, and could quickly determine treatments to new diseases, such as COVID-19.

Santos hopes to have PHK in the hands of personal physicians in the next decade, but he thinks the tool will start to bear fruit for researchers in the next three to five years.

We will impact how we treat cancer and a multitude of complex multi-faceted diseases, said Santos.

We're closer than we think. I think we can get there, Santos said.

The researchers presented a completed prototype in March and were notified in June that they had been selected to continue their research. In the coming years, the team hopes to use the prototype with additional analytical, reasoning and learning tools that are being developed by other groups to build the Biomedical Data Translator to fully implement the system for use by researchers, according to Santos.

View post:

Dartmouth professor working on AI cancer cure | Education - The Union Leader