AI Doomsayer Says His Ideas Are Catching On

Philosopher Nick Bostrom says major tech companies are listening to his warnings about investing in AI safety research.

Nick Bostrom

Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research.

Many people working on AI remain skeptical of or even hostile to Bostrom's ideas. But since his book on the subject, Superintelligence, appeared last summer, some prominent technologists and scientists, including Elon Musk, Stephen Hawking, and Bill Gates, have echoed some of his concerns. Google is even assembling an ethics committee to oversee its artificial intelligence work.

Bostrom met last week with MIT Technology Review's San Francisco bureau chief, Tom Simonite, to discuss his effort to get artificial intelligence researchers to consider the dangers of their work (see "Our Fear of Artificial Intelligence").

How did you come to believe that artificial intelligence was a more pressing problem for the world than, say, nuclear holocaust or a major pandemic?

A lot of things could cause catastrophes, but relatively few could actually threaten the entire future of Earth-inhabiting intelligent life. I think artificial intelligence is one of the biggest, and it seems to be one where the efforts of a small number of people, or one extra unit of resources, might make a nontrivial difference. With nuclear war, a lot of big, powerful groups are already interested in that.

What about climate change, which is widely seen as the biggest threat facing humanity at the moment?

It's a very, very small existential risk. For it to be one, our current models would have to be wrong; even the worst scenarios [only] mean the climate in some parts of the world would be a bit more unfavorable. Then we would have to be incapable of remediating that through some geoengineering, which also looks unlikely.

Certain ethical theories imply that existential risk is just way more important. All things considered, existential risk mitigation should be much bigger than it is today. The world spends way more on developing new forms of lipstick than on existential risk.
