Elon Musk Is Wrong Again. AI Isn't More Dangerous Than North Korea.

Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.

If you believe that a good idea can take over the world, and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk's mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he's wrong, and you shouldn't believe his apocalyptic warnings.

Here's the story Musk wants you to know but hasn't been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go. If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won't be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.

The resulting intelligence explosion would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That's why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn't be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain: that would be forever.

Musk's comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is extrapolating from the recent successes of machine learning to the eventual development of general intelligence. But machine learning is not as dangerous as it might look on the surface.

For example, when you see a machine perform a task at an apparently superhuman level, it is natural to be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. So when a machine can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that's not how these systems work.

In a nutshell, here's the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and they express it in the form of a piece of code called an objective function, a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that, they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.
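To make that recipe concrete, here is a minimal sketch in Python of those same steps on a toy problem: a hand-written objective function, a pile of labeled examples, and a generic optimizer that tunes a model against that one task. The task, data, and model are illustrative assumptions for this sketch, not a description of any particular system.

```python
# A minimal sketch of the recipe described above: fix a task, define an
# objective function, gather examples, tune the model against that one task.
import numpy as np

# Step 1: the task, chosen in advance -- classify points by which side of a
# line they fall on. The "examples of the desired behavior" are (x, label) pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))             # inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # desired outputs for THIS task

def objective(w, b):
    """The system's score on the chosen task: mean log-loss on the examples."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # model's predicted probabilities
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Step 2: tune the model's parameters against that objective with a generic
# optimizer (plain gradient descent on a logistic-regression model here).
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print(f"final loss: {objective(w, b):.3f}")  # near-perfect, on this task only
# A different task means new labels, a new objective, and retraining from
# scratch: nothing here transfers, which is the point made below.
```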

At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you will probably need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function, so current methodologies are not suited to creating a broadly intelligent machine.

Someday we may inhabit a world with intelligent machines. But we will develop alongside them, and we will have a billion decisions to make that shape how that world unfolds. We shouldn't let our fears prevent us from moving forward technologically.

Michael L. Littman is a professor of computer science at Brown University and co-director of Brown's Humanity Centered Robotics Initiative.
