Maybe We're Making It Too Easy For The Machines To Take Over

Posted: November 7, 2013 at 8:41 pm

Machines that can think for themselves, attached to a global brain, with the ability to self-replicate? Yeah, we're making that happen.

This article is part of ReadWrite Future Tech, an annual series in which we explore how technologies that will shape our lives in the years to come are grounded in the innovation and research of today.

We have seen the future, and it's starting to look a lot like Skynet.

That self-aware computer system, the one that tries to exterminate the human race in the Terminator movies (and one TV show), is a potent symbol of Frankensteinian hubris. It is mirrored in the Singularity, the idea that technological progress will soon hit exponential growth, leading to self-aware robots and artificial intelligence that seize control of their own destiny, rendering humans irrelevant if not extinct. (Unless people go transhuman first, although that's another article entirely.)

The Singularity may never happen. Artificial intelligence, long predicted but never realized, may be much harder to achieve than we think. An emerging computer consciousness might pass through a period of infancy, during which humanity might be able to take countermeasures of one sort or another. Self-aware robots might turn out to be benevolent, or even completely uninterested in humanity. It's impossible to predict.

Here, we'll just assume the worst comes to pass. And this scenario is based on technologies that we're feverishly developing today.

What if computer code could write itself? What if robots could think for themselves and continuously learn from their environment while being fed contextual information from a vast global network of data? What if the machines could build themselves and propagate, much in the same way that mammals give birth to new mammals?

Scientists are already researching computer chips and networks that act like the human brain. These chips could allow computers to learn and act on their own in ways that we never thought possible. I saw researchers demonstrate a simple robot with one of these chips that was given an order to stand up. It squirmed, it stumbled and it stood, having learned that behavior on its own.
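
To give a rough sense of the trial-and-error learning described here, below is a minimal sketch of a toy reinforcement-learning loop in Python. It is purely illustrative: the postures, actions, and rewards are invented for this example and bear no relation to the actual neuromorphic chip or robot in the demonstration.

import random

N_POSTURES = 5            # hypothetical postures: 0 = lying flat, 4 = standing
ACTIONS = [-1, +1]        # "relax" (slump back) or "push" (raise posture)
q = {(s, a): 0.0 for s in range(N_POSTURES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = 0                            # start lying down
    while state < N_POSTURES - 1:        # keep trying until "standing" is reached
        # Explore occasionally, otherwise exploit the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state = max(0, min(N_POSTURES - 1, state + action))
        reward = 1.0 if next_state == N_POSTURES - 1 else -0.01  # small cost per move

        # Standard Q-learning update toward reward plus discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After enough episodes, the greedy policy "pushes" from every posture: it stands up.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_POSTURES)})

The point of the sketch is only that nobody programs the "stand up" behavior directly; the system stumbles through attempts and keeps what gets rewarded, which is the unsettling quality the rest of this article worries about.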

We may look back one day and see this as the first step towards our doom. Matt Grob, executive vice president of Qualcomm Technologies, wondered whether it was ethical to turn the robot off after having imbued it with a certain degree of sentience.
