Artificial intelligence – wonderful and terrifying – will change life as we know it

Sunday June 04, 2017

"The year 2017 has arrived and we humans are still in charge. Whew!"

That reassuring proclamation came from a New Year's editorial in the Chicago Tribune.

If you haven't been paying attention to the news about artificial intelligence, and particularly its newest iteration called deep learning, then it's probably time you started. This technology is poised to completely revolutionize just about everything in our lives.

If it hasn't already.

Experts say Canadian workers could be in for some major upheaval over the next decade as increasingly intelligent software, robotics and artificial intelligence perform more sophisticated tasks in the economy. (CBC News)

Today, machines are able to "think" more like humans than most of us ever imagined, including the scientists who study them.

They are moving into our workplaces, homes, cars, hospitals and schools, and they are making decisions for us. Big ones.

Artificial intelligence has enormous potential for good. But its galloping development has also given rise to fears of massive economic dislocation, and even fears that these sentient computers might one day get so smart that we will no longer be able to control them.

To use an old-fashioned card-playing analogy, this is not a shuffle. It's a whole new deck, one with a mind of its own.

Sunday Edition contributor Ira Basen has been exploring the frontiers of this remarkable new technology. His documentary is called "Into the Deep: The Promise and Perils of Artificial Intelligence."

Ira Basen June 2, 2017

Remember HAL?

The HAL 9000 computer was the super-smart machine in charge of the spacecraft Discovery One in Stanley Kubrick's classic 1968 movie 2001: A Space Odyssey. For millions of moviegoers, it was their first look at a computer that could think and respond like a human, and it did not go well.

In one of the film's pivotal scenes, astronaut Dave Bowman tries to return from a mission outside the spacecraft, only to discover that HAL won't allow him back in.

"Open the pod bay doors, please, HAL," Dave, one of astronauts, demands several times.

"I'm sorry Dave, I'm afraid I can't do that," HAL finally replies. "I know that you and Frank were planning to disconnect me, and I'm afraid that's something that I can't allow to happen."

Dave was finally able to re-enter the spacecraft and disable HAL, but the image of a sentient computer going rogue and trying to destroy its creators has haunted many people's perceptions of artificial intelligence ever since.

For most of the past fifty years, those negative images haven't really mattered very much. Machines with the cognitive powers of HAL lay in the realm of science fiction. But not anymore. Today, artificial intelligence (AI) is the hottest thing going in the field of computer science.

Governments and industry are pouring billions of dollars into AI research. The most recent example is the Vector Institute, a new Toronto-based AI research lab announced with much fanfare in March and backed by about $170 million in funding from the Ontario and federal governments, and big tech companies like Google and Uber.

The Vector Institute will focus on a particular subset of AI called "deep learning." It was pioneered by U of T professor Geoffrey Hinton, who is now the Chief Scientific Advisor at the Institute. Hinton and other deep learning researchers have been able to essentially mimic the architecture of the human brain inside a computer. They created artificial neural networks that work in much the same way as the vast networks of neurons in our brains that, when triggered, allow us to think.
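To make that idea a little more concrete, here is a minimal sketch in Python, using only NumPy and invented weights and names, of what such an artificial neural network looks like in code. Each "neuron" simply adds up its weighted inputs and fires when the total is large enough, and a network is just layers of these neurons wired together. This is an illustrative toy, not the Vector Institute's or Hinton's actual software.

```python
import numpy as np

def fire(signal):
    # A smooth version of "the neuron triggers when its input is strong enough".
    return 1.0 / (1.0 + np.exp(-signal))

def tiny_network(pixels, w1, b1, w2, b2):
    # First layer: each artificial neuron sums its weighted inputs and fires.
    hidden = fire(pixels @ w1 + b1)
    # Second layer: neurons wired to the outputs of the first layer.
    return fire(hidden @ w2 + b2)

# Made-up example: a 4-"pixel" image fed through 3 hidden neurons to 1 output.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
print(tiny_network(rng.random(4), w1, b1, w2, b2))
```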

"Once your computer is pretending to be a neural net," Hinton explained in a recent interview in the Toronto office of Google Canada, where he is currently an Engineering Fellow, "you get it to be able to do a particular task by just showing it a whole lot of examples."

So if you want your computer to be able to identify a picture of a cat, you show it lots of pictures of cats. But it doesn't need to see every picture of a cat to be able to figure out what a cat looks like. This is not programming the way computers have traditionally been programmed. "What we can do," Hinton says, "is show it a lot of examples and have it just kind of get it. And that's a new way of getting computers to do things."
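As a rough illustration of "showing it a whole lot of examples," here is another small Python sketch, again with NumPy and an invented toy dataset standing in for real cat photos. A single artificial neuron is nudged, example by example, toward weights that reproduce the labels, so the rule for recognizing the pattern is learned from the examples rather than written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each row is a tiny 4-"pixel" image,
# and the label says whether it shows the pattern we care about (1) or not (0).
images = rng.random((200, 4))
labels = (images[:, 0] + images[:, 1] > 1.0).astype(float)

# One artificial neuron with learnable weights.
weights = np.zeros(4)
bias = 0.0
learning_rate = 0.5

# "Show it a lot of examples": nudge the weights a little, over and over,
# in whatever direction reduces the error on the labelled pictures.
for _ in range(1000):
    predictions = 1.0 / (1.0 + np.exp(-(images @ weights + bias)))
    error = predictions - labels
    weights -= learning_rate * images.T @ error / len(labels)
    bias -= learning_rate * error.mean()

# The rule was never typed in by a programmer; it was learned from examples.
new_image = rng.random(4)
print(1.0 / (1.0 + np.exp(-(new_image @ weights + bias))))
```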

For people haunted by memories of HAL, or of Skynet, the AI computer turned killing machine in the Terminator movies, the idea of computers being able to think for themselves, to "just kind of get it" in ways that even people like Geoffrey Hinton can't fully explain, is far from reassuring.

They worry about "superintelligence," the point at which computers become more intelligent than humans and we lose control of our creations. It's this fear that has people like Elon Musk, the man behind the Tesla electric car, declaring that the "biggest existential threat" to the planet today is artificial intelligence. "With artificial intelligence," he asserts, "we are summoning the demon."

SHODAN, the malevolent artificial intelligence from System Shock 2. (Irrational Games/Electronic Arts)

People who work in AI believe these fears of superintelligence are vastly overblown. They argue we are decades away from superintelligence, and we may, in fact, never get there. And even if we do, there's no reason to think that our machines will turn against us.

Yoshua Bengio of the University of Montreal, one of the world's leading deep learning researchers, believes we should avoid projecting our own psychology onto the machines we are building.

"Our psychology is really a defensive one," he argued in a recent interview. "We are afraid of the rest of the world, so we try to defend from potential attacks." But we don't have to build that same defensive psychology into our computers. HAL was a programming error, not an inevitable consequence of artificial intelligence.

"It's not like by default an intelligent machine also has a will to survive against anything else,"Bengio concludes. "This is something that would have to be put in. So long as we don't put that in, they will be as egoless as a toaster, even though it could be much, much smarter than us.

"So if we decide to build machines that have an ego and would kill rather than be killed, then, well, we'll suffer from our own stupidity. But we don't have to do that."

Humans suffering from our own stupidity? When has that ever happened?

Feeling better?

Click 'listen' above to hear Ira Basen's documentary on artificial intelligence.
