It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try and make sure that doesn't happen.
They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.
Called the Asilomar AI Principles (after the beach in California where they were drawn up), the guidelines cover research issues, ethics and values, and longer-term issues - everything from how scientists should work with governments to how lethal weapons should be handled.
On that point: "An arms race in lethal autonomous weapons should be avoided," says principle 18. You can read the full list below.
"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years," write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.
For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.
Perhaps the most telling guideline is principle 23, entitled 'Common Good': "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation."
Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be "shared broadly" and "benefit all of humanity".
"To think AI merely automates human decisions is like thinking electricity is just a replacement for candles," conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.
"Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that's fixated mostly on efficiency and profit... shape AI."
Meanwhile the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.
All of which sounds very good to us - let's just hope the robots are listening.
The guidelines also rely on a certain amount of consensus about specific terms - such as what's beneficial to humankind and what isn't - but for the experts behind the list it's a question of getting something recorded at this early stage of AI research.
With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there's a definite need to have guidelines and limits in place that researchers can work to.
And then we also need to decide what rights super-smart robots have when they're living among us.
For now the guidelines should give us some helpful pointers for the future.
"No current AI system is going to 'go rogue' and be dangerous, and it's important that people know that," conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.
"At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change."
"So how seriously we take AI's opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts - without the press, industry or research hype that often accompanies advances - would be a good starting point."
The principles have been published by the Future Of Life Institute.
You can see them in full and add your support over on their site.
Research issues
1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?
3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and values
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term issues
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.
Read the original:
Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert