Daily Archives: May 9, 2017

AI Is the Future of Cybersecurity, for Better and for Worse – Harvard Business Review

Posted: May 9, 2017 at 3:31 pm

Executive Summary

In the near future, as Artificial Intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyber-attacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope to defend against AI-enabled hacking is by using AI. But this is also very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join in the cyberwars. Business leaders would be well advised to familiarize themselves with the state-of-the-art in AI safety and security research. Armed with more knowledge, they can then rationally consider how the addition of AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and other possible dangers.

My research is at the intersection of AI and cybersecurity. In particular, I am researching how we can protect AI systems from bad actors, as well as how we can protect people from failed or malevolent AI. This work falls into a larger framework of AI safety: attempts to create AI that is exceedingly capable but also safe and beneficial.

A lot has been written about problems that might arise with the arrival of true AI, either as a direct impact of such inventions or because of a programmer's error. However, intentional malice in design and AI hacking have not been addressed to a sufficient degree in the scientific literature. It's fair to say that when it comes to dangers from a purposefully unethical intelligence, anything is possible. According to Bostrom's orthogonality thesis, an AI system can potentially have any combination of intelligence and goals. Such goals can be introduced either through the initial design or through hacking, or introduced later, in the case of off-the-shelf software ("just add your own goals"). Consequently, depending on whose bidding the system is doing (governments, corporations, sociopaths, dictators, military industrial complexes, terrorists, etc.), it may attempt to inflict damage that's unprecedented in the history of humankind or that's perhaps inspired by previous events.

Even today, AI can be used to defend and to attack cyber infrastructure, as well as to increase the attack surface that hackers can target, that is, the number of ways for hackers to get into a system. In the future, as AIs increase in capability, I anticipate that they will first reach and then overtake humans in all domains of performance, as we have already seen with games like chess and Go and are now seeing with important human tasks such as investing and driving. It's important for business leaders to understand how that future situation will differ from our current concerns and what to do about it.

If one of today's cybersecurity systems fails, the damage can be unpleasant, but is tolerable in most cases: Someone loses money or privacy. But for human-level AI (or above), the consequences could be catastrophic. A single failure of a superintelligent AI (SAI) system could cause an existential risk event, an event that has the potential to damage human well-being on a global scale. The risks are real, as evidenced by the fact that some of the world's greatest minds in technology and physics, including Stephen Hawking, Bill Gates, and Elon Musk, have expressed concerns about the potential for AI to evolve to a point where humans could no longer control it.

When one of today's cybersecurity systems fails, you typically get another chance to get it right, or at least to do better next time. But with an SAI safety system, failure or success is a binary situation: Either you have a safe, controlled SAI or you don't. The goal of cybersecurity in general is to reduce the number of successful attacks on a system; the goal of SAI safety, in contrast, is to make sure no attacks succeed in bypassing the safety mechanisms in place. The rise of brain-computer interfaces, in particular, will create a dream target for human and AI-enabled hackers. And brain-computer interfaces are not so futuristic; they're already being used in medical devices and gaming, for example. If successful, attacks on brain-computer interfaces would compromise not only critical information such as social security numbers or bank account numbers but also our deepest dreams, preferences, and secrets. There is the potential to create unprecedented new dangers for personal privacy, free speech, equal opportunity, and any number of human rights.

Business leaders are advised to familiarize themselves with the cutting edge of AI safety and security research, which at the moment is sadly similar to the state of cybersecurity in the 1990s and to our current situation with the lack of security for the internet of things. Armed with more knowledge, leaders can rationally consider how the addition of AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and possible dangers. Hiring a dedicated AI safety expert may be an important next step, as most cybersecurity experts are not trained in anticipating or preventing attacks against intelligent systems. I am hopeful that ongoing research will bring additional solutions for safely incorporating AI into the marketplace.

More:

AI Is the Future of Cybersecurity, for Better and for Worse - Harvard Business Review

Rich professionals could be replaced by AI, shrieks Gartner – The Register

Posted: at 3:31 pm

An AI lawyer weighs up a particularly tricky contract law dispute while pondering how to kill Arnie Schwarzenegger

Rise of the Machines: Ball-gazers* at Gartner reckon robots could replace doctors, lawyers and IT workers in the next five years. Panic, all ye faithful.

"The economics of AI and machine learning will lead to many tasks performed by professionals today becoming low-cost utilities," said Stephen Prentice, Gartner Fellow and veep.

"AI's effects on different industries will force the organisation to adjust its business strategy," he continued presumably talking about others rather than his outfit of mystic mages. "Many competitive, high-margin industries will become more like utilities as AI turns complex work into a metered service that the enterprise pays for, like electricity."

Inevitably, the semi-mythical beast known as the CIO must prepare for this, apparently by devising Soviet-style five-year plans that "achieve the right balance of AI and human skills".

Prentice intoned: "The CIO should commission the enterprise architecture team to identify which IT roles will become utilities and create a timeline for when these changes become possible."

We are told that machine learning means an expensively trained lawyer could easily be replaced by an AI system capable of learning, which can then be cheaply cloned across law firms looking to create an army of electronic Rumpoles of the Bailey.

Lawyers appear particularly worried that AI and/or robots might replace them, though AI advocates are keen to insist that it will displace them sideways rather than resulting in layoffs. Feisty lawyerly blog Legal Cheek spotted a study earlier this year which reckoned that adoption of AI by law firms would be slow and that it would mainly be focused on "drudgery" such as reviewing documents for disclosure purposes in commercial litigation.

*We are assured that Gartner's balls are crystal, not hairy.

Follow this link:

Rich professionals could be replaced by AI, shrieks Gartner - The Register

How Facebook’s AI Ambitions Will Boost NVIDIA – Motley Fool

Posted: at 3:31 pm

Facebook (NASDAQ:FB) has been doubling down on artificial intelligence (AI) to process the large amount of content users post to its platform, trying to make sense of all of that data to make communication easier. For instance, AI helps the social media specialist classify live videos in real time, while also helping in speech and text translations.

One of the ways to take advantage of Facebook's increasing AI adoption is through NVIDIA (NASDAQ:NVDA), as its graphics processing units (GPUs) are playing a mission-critical role in the fast processing of huge data sets.

Back in March, Facebook announced that it is using NVIDIA's GPUs to power its next-generation GPU server -- Big Basin -- so it can train bigger machine learning models for faster processing of photos, text, and videos. NVIDIA supplied eight of its Tesla P100 GPU accelerators for the server, along with its high-speed NVLink technology that enables ultra-fast communication between the GPUs by removing any connection-related bottlenecks.

NVIDIA's Tesla GPUs and the NVLink interconnect technology are allowing Facebook to train 30% larger AI models, thanks to a 33% jump in high-bandwidth memory compared to the previous-generation Big Sur server. As it turns out, Big Basin can perform 100% faster than Big Sur in certain scenarios, processing more complex models in a shorter time frame.

Image source: NVIDIA.

What's more, NVIDIA and Facebook have now taken their AI relationship further with Caffe2, a scalable deep learning framework that gives developers more power in training and iterating AI models. Caffe2 connects eight of Facebook's Big Basin servers, giving users access to 64 NVIDIA Tesla GPU accelerators at once and allowing them to train AI models seven times faster on what amounts to a supercomputer.
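To make the idea of multi-GPU training a little more concrete, here is a minimal sketch of data-parallel training in Python. It is not Facebook's Caffe2 code; it uses PyTorch's DataParallel wrapper as a stand-in, and the tiny model, synthetic batch, and hyperparameters are purely illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# A deliberately tiny image classifier; the real Facebook workloads are far larger.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TinyClassifier().to(device)

# Split every batch across all visible GPUs (e.g. the eight accelerators in a
# Big Basin-class server); gradients are combined over the GPU interconnect.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

optimizer = optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(256, 3, 64, 64, device=device)      # synthetic batch
labels = torch.randint(0, 1000, (256,), device=device)   # synthetic labels

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

The gains the article describes come from this same idea at much larger scale: each GPU works on a slice of the batch, and a fast interconnect such as NVLink keeps the exchange of gradients from becoming the bottleneck.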

Facebook will need more high-performance servers going forward thanks to booming mobile data traffic and a huge user base. The social media specialist has 1.23 billion daily active users who post 300 million photos a day and 510,000 comments every minute. What's more, the company is betting big on video, and its "Live" service has seen a 400% surge in streaming since launch.

Facebook's growth is not going to stop anytime soon as its emerging markets user base is growing at a terrific pace. Research firm eMarketer forecasts that countries such as India, Indonesia, Mexico, and the Philippines will become its fastest-growing markets until 2020, leading to a spurt in content posted onto the platform, especially due to growing smartphone penetration.

Facebook, therefore, will need more capable servers to tackle the growing data volume and complexity. This is good news for NVIDIA's data center business, which houses the Tesla GPU unit. The Tesla GPUs are aimed at accelerating high-performance computing and hyperscale data center workloads -- allowing them to crunch huge amounts of data at a fast pace -- so Facebook is going to need more of them as its workload grows.

As Facebook and others start using AI to train their analytics models, NVIDIA will find a bigger market for its GPU accelerators. Markets and Markets forecasts that the AI chipset market will grow at over 60% a year until 2022, hitting a size of $16 billion. GPU accelerators could make up a big part of this market thanks to the crucial role they play in the AI space.

This should supercharge NVIDIA's data center business, which is already reaping the benefits of growing data center workloads. In fact, the Tesla GPUs are being used by cloud service providers such as Amazon Web Services, Google Cloud, and Microsoft Azure, and Facebook will further boost the segment's growth thanks to its growing AI bets.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Harsh Chauhan has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Amazon, Facebook, and Nvidia. The Motley Fool has a disclosure policy.

Follow this link:

How Facebook's AI Ambitions Will Boost NVIDIA - Motley Fool

NVIDIA’s AI may keep watch over smart cities of the future – Engadget

Posted: at 3:31 pm

According to NVIDIA, there are already hundreds of millions of surveillance cameras around the globe, with the number expected to rise to the 1 billion mark by 2020. Human beings have a hard time sifting through the flood of moving images, so the majority of the footage simply gets stored on hard drives for later viewing. NVIDIA thinks that deep learning AI can handle video analytics far more accurately than humans or conventional real-time computer monitoring. The company has partnered with more than 50 companies that make security cameras, including Hikvision. "The benefit of GPU deep learning is that data can be analyzed quickly and accurately to drive deeper insights," said Shiliang Pu, president at the Hikvision Research Institute in China.
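As a rough illustration of what "deep learning video analytics" involves, the sketch below runs a publicly available object detector over sampled frames of a video file. It is only an assumption-laden stand-in for the proprietary systems NVIDIA and Hikvision build; the file name, sampling rate, and confidence threshold are hypothetical.

```python
import cv2                      # OpenCV, for decoding video frames
import torch
import torchvision

# Pretrained COCO detector as a stand-in for a production analytics model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

capture = cv2.VideoCapture("camera_feed.mp4")    # hypothetical camera recording
frame_idx = 0

with torch.no_grad():
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % 30:                       # sample ~1 frame per second at 30 fps
            continue

        # Convert the BGR uint8 frame to the RGB float tensor the model expects.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

        detections = model([tensor])[0]
        is_person = (detections["labels"] == 1) & (detections["scores"] > 0.8)
        print(f"frame {frame_idx}: {int(is_person.sum())} people detected")

capture.release()
```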

A city with cloud-connected, AI-powered surveillance systems in place could find missing persons, notify residents of nearby emergencies, alert police to crimes in progress or even send out traffic congestion warnings. It could also track and monitor our behavior, both legal and otherwise, along with gathering personal data for advertisers. Tomorrow can be both exciting and scary at the same time. Whether the city of the future keeps us safe, keeps us in line, or does something in between will depend on how we implement emerging technology like this now.

Go here to see the original:

NVIDIA's AI may keep watch over smart cities of the future - Engadget

Nvidia aims to train 100,000 developers in deep learning, AI technologies – ZDNet

Posted: at 3:31 pm

Nvidia said it plans to train 100,000 developers through its Deep Learning Institute.

For Nvidia, the Deep Learning Institute, an effort to train developers in machine learning and artificial intelligence, is a way to create a well of expertise that can ultimately lead to more sales of GPUs.

Underpinning that bet is IDC's estimate that 80 percent of all applications will have an AI component by 2020.

Nvidia's Deep Learning Institute launched a year ago and has held training events at academic institutions, companies and government agencies. So far, Nvidia's efforts have trained more than 10,000 developers who use Amazon Web Services (AWS) EC2 P2 GPU instances.

Greg Estes, vice president of developer programs at Nvidia, acknowledged that training 100,000 developers in 2017 is ambitious, but added that there is strong demand and expanded content can broaden the audience.

In an effort to train 100,000 developers in the next year, Nvidia has stepped up its offerings with the following:

Estes told journalists at Nvidia GTC it made sense for the company to partner with larger companies.

"They are going to help us expand our reach ... because these companies are much bigger than we are, and they have a lot of worldwide reach," he said.

"I think most people would agree that we are at the very leading edge of artificial intelligence and deep learning -- so if we take our knowledge and expertise there, and we work with these other companies, they can help bring that out into the community -- it's a win for everybody."

In the coming year, Nvidia is also planning to certify engineer competence.

"Today when you go through and you take these learning courses, we give you a certificate that you have attended the course, but we don't have the testing at the end," he said. "That is on our roadmap, and we plan to do that this year."

See more here:

Nvidia aims to train 100,000 developers in deep learning, AI technologies - ZDNet

Baidu is Using AI to Improve Its Products — and Its Products to Improve Its AI – Madison.com

Posted: at 3:31 pm

Baidu, Inc. (NASDAQ: BIDU) is the largest online search engine in China, and since its entry into the realm of artificial intelligence (AI), the company has been integrating its AI know-how into nearly every facet of its business. What investors may not know is that this process produces a virtuous cycle that continues to feed itself.

In the case of Baidu, it has been engaged in an area of AI known as deep learning. Algorithms and software models are used to develop artificial neural networks that mimic the human brain's ability to learn. Vast amounts of data are required to train the system, and Baidu's online search engine provides a vast depository of information on which to draw. Once trained, these AI systems are then used to process data at much faster rates than their human counterparts can and are skilled at detecting patterns. One key aspect of deep learning is that these systems become more useful the more they are used.

Baidu is in a virtuous cycle with its AI. Image source: Baidu.

The ability to detect patterns can be used in a wide array of areas, and these systems are particularly skilled at tasks such as image recognition, making more precise online search recommendations, and more accurately predicting traffic conditions for users of its Maps service.

The company has also applied its AI acumen to better estimating delivery times for its Baidu Delivery service and making customized restaurant suggestions for its recommendation platform, Nuomi. It uses AI to recommend content to the millions of users of its iQiyi video streaming service and to provide more relevant content for its news feed.

The process of improving products with AI is a two-way street. When it makes recommendations, the AI system receives feedback from these diners and drivers and streamers, which allows the system to make more reliable recommendations in the future. The system improves with each interaction.
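A toy sketch makes that feedback loop easier to picture. The logistic-regression scorer below is only an assumed stand-in for Baidu's far larger deep learning models, and the restaurant features are invented, but it shows how each recorded interaction nudges the system toward better recommendations.

```python
import numpy as np

class OnlineRecommender:
    """Toy recommender that learns from every interaction.

    A logistic-regression scorer stands in for the deep models a company like
    Baidu actually uses; the point is only that each piece of feedback nudges
    the weights, so later recommendations reflect earlier reactions.
    """

    def __init__(self, num_features: int, learning_rate: float = 0.05):
        self.weights = np.zeros(num_features)
        self.lr = learning_rate

    def score(self, item_features: np.ndarray) -> float:
        """Predicted probability that the user will like this item."""
        return 1.0 / (1.0 + np.exp(-item_features @ self.weights))

    def record_feedback(self, item_features: np.ndarray, liked: bool) -> None:
        """Single stochastic-gradient step on one observed interaction."""
        error = float(liked) - self.score(item_features)
        self.weights += self.lr * error * item_features


# Hypothetical features for a restaurant: [is_spicy, is_cheap, is_nearby]
model = OnlineRecommender(num_features=3)
spicy_cheap = np.array([1.0, 1.0, 0.0])

print("before feedback:", round(model.score(spicy_cheap), 3))
for _ in range(20):            # the diner keeps accepting spicy, cheap picks
    model.record_feedback(spicy_cheap, liked=True)
print("after feedback: ", round(model.score(spicy_cheap), 3))
```

Run the example and the predicted probability for a spicy, cheap restaurant climbs away from 0.5 as positive feedback accumulates, which is the virtuous cycle in miniature.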

Baidu is using its AI system to develop self-driving car technology in China. It recently announced the acquisition of xPerception, a start-up in the field of computer vision. The start-up focuses on object recognition and depth perception technology that can be used in autonomous vehicles as well as drones.

This virtuous cycle is the secret sauce of Baidu's AI system. By using AI, it improves its products and recommendations. By integrating the feedback into the AI system, it improves the relevance of future recommendations. This holds true across the plethora of ways that Baidu is using its AI.

Baidu also announced that it would open-source its platform for autonomous driving, in a move that was said to be inspired by Google's open sourcing of its Android platform. The Alphabet Inc. (NASDAQ: GOOGL) (NASDAQ: GOOG) division came to dominate the smartphone market with its Android operating system, by making it available to all comers, thereby maintaining its supremacy in mobile search.

This is a smart move, as the cumulative data that's acquired from self-driving cars will make the systems safer, and Baidu hopes to become the de facto leader of such data in its native China.

Baidu's strategy regarding AI has yet to bear fruit. While the company has been investing large sums to bolster its AI capability and has been seeing incremental improvements in a variety of areas, this has yet to produce any significant increase in revenue.

In the most recent quarter, the company marked its third consecutive quarter of declining earnings, though Baidu hopes to return to growth later this year. Revenue from the company's most recent quarter increased 6.8% over the prior-year quarter to $2.45 billion, while earnings of $258 million were down 10.6% year over year.

The area of artificial intelligence offers exciting possibilities, but investors would more likely be excited by a return to growth in revenue and earnings.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares) and Baidu. Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Baidu. The Motley Fool has a disclosure policy.

Visit link:

Baidu is Using AI to Improve Its Products -- and Its Products to Improve Its AI - Madison.com

Three key challenges that could derail your AI project – ZDNet

Posted: at 3:31 pm

Microsoft wants AI to help, not replace humans.

It's been abundantly clear for a while that in 2017, artificial intelligence (AI) is going to be front and center of vendor and enterprise interest. Not that AI is new - it's been around for decades as a computer science discipline. What's different now is that advances in technology have made it possible for companies ranging from search engine providers to camera and smartphone manufacturers to deliver AI-enabled products and services, many of which have become an integral part of many people's daily lives.

More than that, those same AI techniques and building blocks are increasingly available for enterprises to leverage in their own products and services without needing to bring on board AI experts, a breed that's rare and expensive.

Sentient systems capable of true cognition remain a dream for the future. But AI today can help organizations transform everything from operations to the customer experience. The winners will be those who not only understand the true potential of AI but are also keenly aware of what's needed to deploy a performant AI-based system that minimizes rather than creates risk and doesn't result in unflattering headlines.

These are the three key challenges all AI projects must tackle:

Martha Bennett is principal analyst at Forrester. Follow Martha on Twitter: @martha_bennett.

Here is the original post:

Three key challenges that could derail your AI project - ZDNet

Jeff Bezos and Elon Musk Have Vastly Different Views on Artificial Intelligence – Inc.com

Posted: at 3:31 pm

While artificial intelligence continues to become a part of everyday life for consumers, the technology has gathered a pretty impressive collection of critics and fear mongers.

Don't count Jeff Bezos among them. The Amazon founder and CEO paints a fairly rosy picture when it comes to A.I.--and he thinks there should be much more of it.

The comments came at a gala Friday put on by the Internet Association, a Washington, D.C.-based lobbyist group. During a fireside chat with the group's CEO, Michael Beckerman, Bezos said we are currently in the "golden age" of machine learning.

"We are solving problems with machine learning and artificial intelligence that were in the realm of science fiction for the last several decades," he said. "Natural language understanding, machine vision problems--it really is an amazing renaissance."

Bezos spoke of a future in which A.I. has application far beyond technology products. "Machine learning and A.I. is a horizontal enabling layer," he said. "It will empower and improve every business, every government organization, every philanthropy. Basically, there's no institution in the world that cannot be improved with machine learning."

That's in stark contrast to some tech leaders, most notably Elon Musk, who have warned about the dangers of the technology. Musk recently presented a futuristic scenario in which even the most benign forms of A.I. could have catastrophic effects on humanity.

"Let's say you create a self-improving A.I. to pick strawberries," Musk told Vanity Fair, "and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever."

While some organizations, like Google, have proposed developing a kill switch to shut down overly aggressive A.I., Musk doesn't believe this is plausible.

"I'm not sure I'd want to be the one holding the kill switch for some super-powered A.I.," he said, "because you'd be the first thing it kills."

Musk's school of thought is shared by Y Combinator president Sam Altman and venture capitalist Peter Thiel, with whom he co-founded OpenAI, a nonprofit meant to ensure A.I. is used for good. The three are part of a group of tech elites that have pledged $1 billion toward the organization.

Tim Berners-Lee, the inventor of the World Wide Web, recently laid out a scenario in which A.I. that's used in business settings eventually becomes so smart, it runs entire companies and financial institutions on its own--and thus controls entire economies. "You have survival of the fittest going on between these A.I. companies," he said, "until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair--and how do you describe to a computer what that means, anyway?"

Stephen Hawking has spoken out against A.I. too, saying that he fears it "could spell the end of the human race." Last year, Hawking opened a research center at Cambridge University, meant to nurture ideas for using A.I. to solve world problems--and for regulating its use.

Amazon has leaned more heavily into artificial intelligence in recent years. As Bezos pointed out during his talk, the company is continually improving its website's search feature and product recommendations using machine learning. More visibly, the Echo home assistant relies on A.I. and uses machine learning to improve its capabilities.

The company is also developing drones for delivering goods, which it hopes will someday fly autonomously. "Those things use a tremendous amount of machine learning, machine vision systems," Bezos said.

Read the original post:

Jeff Bezos and Elon Musk Have Vastly Different Views on Artificial Intelligence - Inc.com

Facebook created a faster, more accurate translation system using artificial intelligence – Popular Science

Posted: at 3:31 pm

Facebook's billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you're an English speaker confronted with German, or a French speaker seeing Spanish, you'll see a link that says "See Translation."

But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.

The scientists who developed the new system work at the social network's FAIR group, which stands for Facebook A.I. Research.

Neural networks are modeled after the human brain, says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.

But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.

"It doesn't go left to right," Auli says of their translator. "[It can] look at the data all at the same time." For example, a convolutional neural network translator can look at the first five words of a sentence while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
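For readers who want to see the difference, here is a minimal PyTorch sketch contrasting the two ways of reading a sentence. It is not FAIR's actual model (their full convolutional translation system is far deeper, with attention and a decoder); the vocabulary size, dimensions, and window width are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 10_000, 256, 512
sentence = torch.randint(0, vocab_size, (1, 20))   # one 20-token source sentence

embed = nn.Embedding(vocab_size, embed_dim)

# Convolutional encoder: each filter reads a window of 5 neighbouring word
# vectors, and every window position over the sentence is computed in parallel.
conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=5, padding=2)
conv_features = torch.relu(conv(embed(sentence).transpose(1, 2)))   # (1, 512, 20)

# Recurrent encoder: the GRU walks the sentence one token at a time, each step
# waiting on the hidden state produced by the previous one.
rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
rnn_outputs, _ = rnn(embed(sentence))                                # (1, 20, 512)
```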

Graham Neubig, an assistant professor at Carnegie Mellon University's Language Technologies Institute, researches natural language processing and machine translation. He says that this isn't the first time this kind of neural network has been used to translate text, but that this seems to be the best he's ever seen it executed with a convolutional neural network.

"What this Facebook paper has basically showed is that it's revisiting convolutional neural networks, but this time they've actually made it really work very well," he says.

Facebook isn't saying how it plans to integrate the new technology with its consumer-facing products yet; that's more the purview of a department there called the applied machine learning group. But in the meantime, they've released the tech publicly as open source, so other coders can benefit from it.

That's a point that pleases Neubig. "If it's fast and accurate," he says, "it'll be a great additional contribution to the field."

More:

Facebook created a faster, more accurate translation system using artificial intelligence - Popular Science

Is Artificial Intelligence the Key to Personalized Education? – Smithsonian

Posted: at 3:31 pm

For Joseph Qualls, it all started with video games.

That got him messing around with an AI program, and ultimately led to a PhD in electrical and computer engineering from the University of Memphis. Soon after, he started his own company, called RenderMatrix, which focused on using AI to help people make decisions.

Much of the company's work has been with the Defense Department, particularly during the wars in Iraq and Afghanistan, when the military was at the cutting edge in the use of sensors and in seeing how AI could be used to help train soldiers to function in a hostile, unfamiliar environment.

Qualls is now a clinical assistant professor and researcher at the University of Idaho's college of engineering, and he hasn't lost any of his fascination with the potential of AI to change many aspects of modern life. While the military has been the leading edge in applying AI, where machines learn by recognizing patterns, classifying data, and adjusting to mistakes they make, the corporate world is now pushing hard to catch up. The technology has made fewer inroads in education, but Qualls believes it's only a matter of time before AI becomes a big part of how children learn.

It's often seen as being a key component of the concept of personalized education, where each student follows a unique mini-curriculum based on his or her particular interests and abilities. AI, the thinking goes, can not only help children zero in on areas where they're most likely to succeed, but also will, based on data from thousands of other students, help teachers shape the most effective way for individual students to learn.

Smithsonian.com recently talked to Qualls about how AI could profoundly affect education, and also some of the big challenges it faces.

So, how do you see artificial intelligence affecting how kids learn?

People have already heard about personalized medicine. That's driven by AI. Well, the same sort of thing is going to happen with personalized education. I don't think you're going to see it as much at the university level. But I do see people starting to interact with AI when they're very young. It could be in the form of a teddy bear that begins to build a profile of you, and that profile can help guide how you learn throughout your life. From the profile, the AI could help build a better educational experience. That's really where I think this is going to go over the next 10 to 20 years.

You have a very young daughter. How would you foresee AI affecting her education?

It's interesting because people think of them as two completely different fields, but AI and psychology are inherently linked now. Where the AI comes in is that it will start to analyze the psychology of humans. And I'll throw a wrench in here. Psychology is also starting to analyze the psychology of AI. Most of the projects I work on now have a full-blown psychology team, and they're asking questions like 'Why did the AI make this decision?'

But getting back to my daughter. What AI would start doing is trying to figure out her psychology profile. It's not static; it will change over time. But as it sees how she's going to change, the AI could make predictions based on data from my daughter, but also from about 10,000 other girls her same age, with the same background. And it begins to look at things like 'Are you really an artist, or are you more mathematically inclined?'

It can be a very complex system. This is really pie-in-the-sky artificial intelligence. It's really about trying to understand who you are as an individual and how you change over time.

More and more AI-based systems will become available over the coming years, giving my daughter faster access to a far superior education than any we ever had. My daughter will be exposed to ideas faster, and at her personalized pace, always keeping her engaged and allowing her to indirectly influence her own education.

What concerns might you have about using AI to personalize education?

The biggest issue facing artificial intelligence right now is the question of 'Why did the AI make a decision?' AI can make mistakes. It can miss the bigger picture. In terms of a student, an AI may decide that a student does not have a mathematical aptitude and never begin exposing that student to higher math concepts. That could pigeonhole them into an area where they might not excel. Interestingly enough, this is a massive problem in traditional education. Students are left behind or are not happy with the outcome after university. Something was lost.

Personalized education will require many different disciplines working together to solve many issues like the one above. The problem we have now in research and academia is the lack of collaborative research concerning AI from multiple fields: science, engineering, medicine, the arts. Truly powerful AI will require all disciplines working together.

So, AI can make mistakes?

It can be wrong. We know humans make mistakes. We're not used to AI making mistakes.

We have a hard enough time telling people why the AI made a certain decision. Now we have to try to explain why AI made a mistake. You really get down to the guts of it. AI is just a probability statistics machine.

Say it tells me my child has a tendency to be very mathematically oriented, but she also shows an aptitude for drawing. Based on the data it has, the machine applies a weight to certain things about this person. And we really can't explain why it does what it does. That's why I'm always telling people that we have to build this system in a way that it doesn't box a person in.

If you go back to what we were doing for the military, we were trying to be able to analyze whether a person was a threat to a soldier out in the field. Say one person is carrying an AK-47 and another is carrying a rake. What's the difference in their risk?

That seems pretty simple. But you have to ask deeper questions. What's the likelihood of the guy carrying the rake becoming a terrorist? You have to start looking at family backgrounds, etc.

So, you still have to ask the question, 'What if the AI's wrong?' That's the biggest issue facing AI everywhere.

How big a challenge is that?

One of the great engineering challenges now is reverse engineering the human brain. You get in and then you see just how complex the brain is. As engineers, when we look at the mechanics of it, we start to realize that there is no AI system that even comes close to the human brain and what it can do.

We're looking at the human brain and asking why humans make the decisions they do to see if that can help us understand why AI makes a decision based on a probability matrix. And we're still no closer.

Actually, what drives reverse engineering of the brain and the personalization of AI is not research in academia; it's more the lawyers coming in and asking 'Why is the AI making these decisions?' because they don't want to get sued.

In the past year, on most of the projects I've worked on, we've had one or two lawyers, along with psychologists, on the team. More people are asking questions like 'What's the ethics behind that?' Another big question that gets asked is 'Who's liable?'

Does that concern you?

The greatest part of AI research now is that people are asking that question: 'Why?' Before, that question was relegated to the academic halls of computer science. Now, AI research is branching out to all domains and disciplines. This excites me greatly. The more people involved in AI research and development, the better chance we have at alleviating our concerns and, more importantly, our fears.

Getting back to personalized education. How does this affect teachers?

With education, what's going to happen is you're still going to have monitoring. You're going to have teachers who will be monitoring data. They'll become more like data scientists who understand the AI and can evaluate the data about how students are learning.

You're going to need someone who's an expert watching the data and watching the student. There will need to be a human in the loop for some time, maybe for at least 20 years. But I could be completely wrong. Technology moves so fast these days.

It really is a fascinating time in the AI world, and I think it's only going to accelerate. We've gone from programming machines to do things to letting the machines figure out what to do. That changes everything. I certainly understand the concerns that people have about AI. But when people push a lot of those fears, it tends to drive people away. You start to lose research opportunities.

It should be more about pushing a dialogue about how AI is going to change things. What are the issues? And, how are we going to push forward?

Link:

Is Artificial Intelligence the Key to Personalized Education? - Smithsonian
