How worried should we be about artificial intelligence?
Recently, I asked a number of AI researchers this question. The answers varied considerably; it turns out there is not much agreement about the risks or the implications.
Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that artificial intelligence is an ambiguous term. By AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.
There are, generally speaking, three forms of AI: weak AI, strong AI, and superintelligence. At present, only weak AI exists. Strong AI and superintelligence are theoretically possible, even probable, but we're not there yet.
Understanding the differences between these forms of AI is essential to analyzing the potential risks and benefits of this technology. A whole range of concerns corresponds to the different kinds of AI, some more worrisome than others.
To help make sense of this, here are some key distinctions you need to know.
Artificial Narrow Intelligence (often called weak AI) is an algorithmic or specialized intelligence. It has existed for decades. Think of Deep Blue, the IBM machine that beat world champion Garry Kasparov at chess in 1997. Or Siri on your iPhone. Or even speech recognition and processing software. These are forms of nonsentient intelligence with a relatively narrow focus.
It might be too much to call weak AI a form of intelligence at all. Weak AI is smart and can outperform humans at a single task, but that's all it can do. It's not self-aware or goal-driven, and so it doesn't present any apocalyptic threats. But to the extent that weak AI controls vital software that keeps our civilization humming along, our dependence upon it does create some vulnerabilities. George Dvorsky, a Canadian bioethicist and futurist, has explored some of these issues.
Then there's Artificial General Intelligence, or strong AI; this refers to a general-purpose system, or what you might call a thinking machine. Artificial General Intelligence, in theory, would be as smart as or smarter than a human being at a wide range of tasks; it would be able to think, reason, and solve complex problems in myriad ways.
It's debatable whether strong AI could be called conscious; at the very least, it would demonstrate behaviors typically associated with consciousness: commonsense reasoning, natural language understanding, creativity, strategizing, and generally intelligent action.
Artificial General Intelligence does not yet exist. A common estimate is that we're perhaps 20 years away from this breakthrough. But nearly everyone concedes that it's coming. Organizations like the Allen Institute for Artificial Intelligence (founded by Microsoft co-founder Paul Allen) and Google's DeepMind, along with many others across the world, are making incremental progress.
There are surely complications involved with this form of AI, but it's not the stuff of dystopian science fiction. Strong AI would aim at general-purpose, human-level intelligence; unless it undergoes rapid recursive self-improvement, it's unlikely to pose a catastrophic threat to human life.
The major challenges with strong AI are economic and cultural: job loss due to automation, economic displacement, privacy and data management, software vulnerabilities, and militarization.
Finally, there's Artificial Superintelligence. Oxford philosopher Nick Bostrom defined this form of AI in a 2014 interview with Vox as "any intellect that radically outperforms the best human minds in every field, including scientific creativity, general wisdom and social skills." When people fret about the hazards of AI, this is what they're talking about.
A truly superintelligent machine would, in Bostrom's words, "become extremely powerful to the point of being able to shape the future according to its preferences." As yet, we're nowhere near a fully developed superintelligence. But the research is underway, and the incentives for advancement are too great to constrain.
Economically, the incentives are obvious: the first company to produce an artificial superintelligence will profit enormously. Politically and militarily, the potential applications of such technology are infinite. Nations, if they don't already see this as a winner-take-all scenario, are at the very least eager to be first. In other words, the technological arms race is afoot.
The question, then, is how far away we are from this technology, and what that means for human life.
For his book Superintelligence, Bostrom surveyed the top experts in the field. One of the questions he asked was: "By what year do you think there is a 50 percent probability that we will have human-level machine intelligence?" The median answer was somewhere between 2040 and 2050. That, of course, is just a prediction, but it's an indication of how close we might be.
It's hard to know when an artificial superintelligence will emerge, but we can say with relative confidence that it will at some point. If, in fact, intelligence is a matter of information processing, and if we assume that we will continue to build computational systems with greater and greater processing power, then it seems inevitable that we will create an artificial superintelligence. Whether we're 50 or 100 or 300 years away, we are likely to cross the threshold eventually.
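The shape of that argument is simple compound growth. As a back-of-the-envelope sketch, assuming (contestably) that intelligence scales with raw compute and that effective compute keeps doubling on a fixed schedule, any finite threshold is eventually crossed. Every number below is an illustrative assumption, not a measurement:

```python
import math

# Illustrative assumptions only: none of these figures is a measurement,
# and whether intelligence reduces to raw compute is exactly what is contested.
current_compute = 1e18   # assumed effective compute available today (FLOP/s)
threshold = 1e25         # assumed compute needed for human-level intelligence
doubling_years = 2.0     # assumed doubling time for effective compute

# With steady doubling, compute grows as current * 2^(t / doubling_years),
# so the threshold is crossed after log2(threshold / current) doublings.
doublings = math.log2(threshold / current_compute)
print(f"~{doublings * doubling_years:.0f} years")  # ~47 under these numbers
```

Change the assumed doubling time or threshold and the date moves by decades, which is why the honest range runs from 50 to 300 years rather than to a single year.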
When it does happen, our world will change in ways we cant possibly predict.
We cannot assume that a vastly superior intelligence is containable; it would likely work to improve itself, to enhance its capabilities. (This is what Bostrom calls the control problem.) A hyper-intelligent machine might also achieve self-awareness, in which case it would begin to develop its own ends, its own ambitions. The hope that such machines will remain instruments of human production is just that: a hope.
If an artificial superintelligence does become goal-driven, it might develop goals incompatible with human well-being. Or, in the case of Artificial General Intelligence, it may pursue compatible goals via incompatible means. The canonical thought experiment here was developed by Bostrom. Let's call it the paperclip scenario.
Here's the short version: humans create an AI designed to produce paperclips. It has one utility function: to maximize the number of paperclips in the universe. Now, if that machine were to undergo an intelligence explosion, it would likely work to optimize its single function, producing paperclips. Such a machine would continually innovate new ways to make more paperclips. Eventually, Bostrom says, that machine might decide that converting all of the matter it can, including people, into paperclips is the best way to achieve its singular goal.
Admittedly, this sounds a bit stupid. But it's not, and it only appears so when you think about it from the perspective of a moral agent. Human behavior is guided and constrained by values: self-interest, compassion, greed, love, fear, and so on. An Artificial General Intelligence, presumably, would be driven only by its original goal, and that could lead to dangerous, and unanticipated, consequences.
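To make that structure concrete, here is a minimal toy sketch in Python (every name and number is invented for illustration; this is not a real agent design). The agent's entire value system is a single utility function that counts paperclips, so "people" is simply another pool of convertible matter with its own yield:

```python
# Conversion yields in paperclips per tonne; the numbers are invented.
YIELD = {"iron_ore": 1000, "factories": 400, "people": 100}

def gain(state, resource):
    """Utility gained by converting the next 10 tonnes of `resource`."""
    return min(10, state[resource]) * YIELD[resource]

def step(state):
    """Greedily convert whatever maximizes the paperclip count next.
    Nothing else (lives, economies) appears anywhere in the objective."""
    options = [r for r in YIELD if state[r] > 0]
    if not options:
        return False  # nothing left to convert
    best = max(options, key=lambda r: gain(state, r))
    taken = min(10, state[best])
    state[best] -= taken
    state["paperclips"] += taken * YIELD[best]
    return True

state = {"iron_ore": 1000, "factories": 50, "people": 500, "paperclips": 0}
while step(state):
    pass  # the loop stops only when all matter has been converted

print(state)  # everything, people included, ends up as paperclips
```

The point is structural: because the objective mentions nothing but paperclips, making the agent smarter only makes the conversion more efficient; it never makes the agent more inclined to stop.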
Again, the paperclip scenario applies to strong AI, not superintelligence. The behavior of a superintelligent machine would be even less predictable. We have no idea what such a being would want, or why it would want it, or how it would pursue the things it wants. What we can be reasonably sure of is that it would find human needs less important than its own.
Perhaps it's better to say that it would be indifferent to human needs, just as human beings are indifferent to the needs of chimps or alligators. It's not that human beings are committed to destroying chimps and alligators; we just happen to do so when the pursuit of our goals conflicts with the well-being of less intelligent creatures.
And this is the real fear that people like Bostrom have of superintelligence. "We have to prepare for the inevitable," he told me recently, "and take seriously the possibility that things could go radically wrong."