A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure it is adequately constrained.
The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:
Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design: to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.
'Friendly' is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.
The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict. By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics" - principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators or from allowing humans to come to harm.
In modern times, as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:
Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'
Ryszard Michalski, a pioneer of machine learning, taught his Ph.D. students decades ago that any truly alien mind, including a machine mind, was unknowable and therefore dangerous to humans.
More recently, Eliezer Yudkowsky has called for the creation of friendly AI to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, because of the intrinsic nature of goal-driven systems, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.
Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.
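The idea behind "causal path entropy" can be made concrete. In Wissner-Gross and Freer's formulation (a sketch of their published proposal, not an endorsement of the friendliness claim), an intelligent agent is modeled as being pushed by a "causal entropic force" toward states from which the greatest diversity of future paths remains reachable:

    F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \big|_{X = X_0}

Here S_c(X, \tau) is the entropy of the possible paths available to the system over the planning horizon \tau, and T_c is a constant setting the strength of the drive. On Wissner-Gross's account, it is the length of the horizon \tau that determines whether such an agent's behavior looks friendly or unfriendly.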
Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.
Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, coherent extrapolated volition is people's choices and the actions people would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."
Rather than being designed directly by human programmers, a Friendly AI would be designed by a "seed AI" programmed first to study human nature and then to produce the AI that humanity, given sufficient time and insight, would want. The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism) as the ultimate criterion of "Friendliness" is an answer to the meta-ethical problem of defining an objective morality: extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
Ben Goertzel, an artificial general intelligence researcher, believes that friendly AI cannot be created with current human knowledge. Goertzel suggests humans may instead decide to create an "AI Nanny" with "mildly superhuman intelligence and surveillance powers", to protect the human race from existential risks like nanotechnology and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved.
Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.
Stefan Pernar argues along the lines of Meno's paradox that attempting to solve the FAI problem is either pointless or hopeless, depending on whether or not one assumes a universe that exhibits moral realism. If it does, a transhuman AI would independently reason its way into the proper goal system; if it does not, designing a friendly AI would be futile to begin with, since morals cannot be reasoned about.
James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security - something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.
John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI aren't necessarily clear, he suggests a model more like the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.
According to Gary Marcus, the annual amount of money being spent on developing machine morality is tiny.
Some critics believe that both human-level AI and superintelligence are unlikely, and that therefore friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.
Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful. Other critics question whether it is possible for an artificial intelligence to be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes."