This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in calculations per second per $1,000, a number that continues to grow. If computing power maps to intelligence (a big if, some have argued), so far we've only built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."
That's how profoundly things could change. But we can't really predict what might happen next, because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations (feelings, even) that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.
Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.
But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: they'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: it'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.
Galaxies reduced to paper clips: that's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of the modern Prometheus whose creation, driven by its own motivations and desires, turns on its creator. (It's also The Terminator, WarGames (arguably), and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.
Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes (superhuman, even), and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.
Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.
Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.
She's focusing on "large-area effects": the unnoticed flaws in our systems that can do massive damage, damage that's often invisible until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."
Take the recent rise of so-called fake news. What caught many by surprise should have been completely predictable: when the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high in search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).
The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
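To make that dynamic concrete, here is a toy sketch, not any real platform's ranking code, with invented headlines and numbers: when the only thing a system measures and optimizes is clicks, anything the metric ignores, such as accuracy, simply drops out of the decision.

```python
# Toy illustration (not any real platform's code): if the measured objective
# is clicks, ranking by predicted click-through rate alone will promote
# sensational items over accurate ones, because accuracy isn't in the objective.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    predicted_ctr: float   # what the system measures and optimizes
    accuracy: float        # what nobody told the system to care about

FEED = [
    Article("Careful explainer with caveats",       predicted_ctr=0.02, accuracy=0.95),
    Article("Misleading but irresistible headline", predicted_ctr=0.11, accuracy=0.20),
    Article("Solid original reporting",             predicted_ctr=0.04, accuracy=0.90),
]

def rank_by_clicks(feed):
    # The "very simple application of A.I.": maximize the measured metric, full stop.
    return sorted(feed, key=lambda a: a.predicted_ctr, reverse=True)

if __name__ == "__main__":
    for article in rank_by_clicks(FEED):
        print(f"{article.predicted_ctr:.2f}  {article.title}")
    # The misleading headline tops the feed; its low accuracy never enters the
    # calculation. That is the unintended large-scale effect in miniature.
```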
In fact, fake news is a cousin to the paper clip example: the ultimate goal is not manufacturing paper clips but monetization, with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but monetization as the driving force led to deleterious side effects such as the proliferation of fake news.
In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.
The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was likely to "falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
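The mechanism is easy to demonstrate. Below is a minimal, purely illustrative simulation (the rates and the scoring rule are invented; this is not ProPublica's data or the actual risk software): two groups have the same underlying reoffense rate and differ only in how often the proxy question, a parent ever sent to jail, is answered yes. A score that leans on that proxy produces very different false-positive rates for the two groups, even though race never appears as an input.

```python
# Illustrative only: invented rates and a deliberately crude scoring rule,
# showing how a proxy feature can skew error rates between groups even when
# race itself is never used as an input.
import random

random.seed(0)

def simulate(parent_jailed_rate, n=100_000, reoffense_rate=0.3):
    """Return one group's false-positive rate: how often people who would NOT
    reoffend are still flagged high-risk when the score leans on the proxy."""
    false_positives = negatives = 0
    for _ in range(n):
        reoffends = random.random() < reoffense_rate          # same base rate for both groups
        parent_jailed = random.random() < parent_jailed_rate  # only the proxy differs by group
        flagged_high_risk = parent_jailed                      # crude score: the proxy drives the label
        if not reoffends:
            negatives += 1
            if flagged_high_risk:
                false_positives += 1
    return false_positives / negatives

# Echoing the DOJ figure cited above, one group is roughly 7.5x more likely to
# have an incarcerated parent; the absolute rates are invented for the demo.
fpr_a = simulate(parent_jailed_rate=0.075)
fpr_b = simulate(parent_jailed_rate=0.010)
print(f"False-positive rate, group A: {fpr_a:.1%}")  # roughly 7.5%
print(f"False-positive rate, group B: {fpr_b:.1%}")  # roughly 1.0%
```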
It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of the technology we use every day.
In 2015, Elon Musk donated $10 million to, as Wired put it, "keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's Year in Review app showing him pictures of his daughter, who'd died that year.
If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."
What's the worst that can happen? Vocativ is exploring the power of negative thinking with our look at worst-case scenarios in politics, privacy, reproductive rights, antibiotics, climate change, hacking, and more. Read more here.
More here: The Moment When Humans Lose Control Of AI - Vocativ