What if, at the dawn of the industrial revolution in 1817, we had known the dangers of global warming? We would have created institutions to study man's impact on the environment. We would have enshrined national laws and international treaties, agreeing to constrain harmful activities and to promote sound ones for the good of humanity. If we had been able to predict our future, the world as it exists 200 years later would have been very different.
In 2017, we are at the same critical juncture in the development of artificial intelligence. Except this time, we have the foresight to see the dangers on the horizon.
"AI is the rare case where I think we need to be proactive in regulation instead of reactive," Elon Musk recently cautioned at the US National Governors Association annual meeting. "AI is a fundamental existential risk for human civilization. But until people see robots going down the street killing people, they don't know how to react."
However, not all think the future is that dire, or that close. Mark Zuckerberg responded to Musk's dystopian statement in a Facebook Live post. "I think people who are naysayers and try to drum up these doomsday scenarios... I just, I don't understand it," he said while casually smoking brisket in his backyard. "It's really negative, and in some ways I actually think it is pretty irresponsible." (Musk snapped back on Twitter the next day: "I've talked to Mark about this. His understanding of the subject is limited.")
So, which of the two tech billionaires is right? Actually, both are.
Musk is correct that there are real dangers to AI's advances, but his apocalyptic predictions distract from the more mundane but immediate issues that the technology presents. Zuckerberg is correct to emphasize the enormous benefits of AI, but he goes too far in terms of complacency, focusing on the technology that exists now rather than what might exist in 10 or 20 years.
We need to regulate AI before it becomes a problem, not afterward. This isn't just about stopping shady corporations or governments building autonomous killer robots in secret underground laboratories: We also need a global governing body to answer all sorts of questions, such as who is responsible when AI causes harm, and whether AIs should be given certain rights, just as their human counterparts have.
We've made it work before: in space. The 1967 Outer Space Treaty is a piece of international law that restricts the ability of countries to colonize or weaponize celestial bodies. At the height of the Cold War, and shortly after the first space flight, the US and USSR realized an agreement was desirable given the shared existential risks of space exploration. Following negotiations over several years, the treaty was adopted by the UN before being ratified by governments worldwide.
The treaty was adopted as a precautionary measure, many years before we developed the technology to undertake the activities it covers, not as a reaction to solve a problem that already existed. AI governance needs to work the same way.
In the middle of the 20th century, science-fiction writer Isaac Asimov wrote his Three Laws of Robotics, later adding a fourth, "zeroth" law.
Asimov's fictional laws would arguably be a good basis for an AI-ethics treaty, but he started in the wrong place. We need to begin by asking not what the laws should be, but who should write them.
Some federal and private organizations are making early attempts to regulate AI more systematically. Google, Facebook, Amazon, IBM, and Microsoft recently announced they have formed the Orwellian-sounding Partnership on Artificial Intelligence to Benefit People and Society, whose goals include supporting best practices and creating an open platform for discussion. Its partners now include various NGOs and charities such as UNICEF, Human Rights Watch, and the ACLU. In September 2016, the US government released its first-ever guidance on self-driving cars. A few months later, the UK's Royal Society and British Academy, two of the world's oldest and most respected scientific organizations, published a report that called for the creation of a new national body in the UK to steward the evolution of AI governance.
These kinds of reports show there is a growing consensus in favor of oversight of AI, but there's still little agreement on how this should actually be implemented beyond academic whitepapers circulating in governmental inboxes.
In order to be successful, AI regulation needs to be international. If it's not, we will be left with a messy patchwork of different rules in different countries that will be complicated (and expensive) for AI designers to navigate. And without a legally binding global approach, some tech companies will try to operate their businesses from wherever the law is the least restrictive, just as they already do with tax havens.
The solution also needs to involve players from both the public and private sectors. Although the tech world's Partnership on Artificial Intelligence plans to invite academics, non-profits, and specialists in policy and ethics to the table, it would benefit from the involvement of elected governments, too. While the tech companies are answerable to their shareholders, governments are answerable to their citizens. The UK's Human Fertilization and Embryology Authority, for instance, brings together lawyers, philosophers, scientists, government, and industry players in order to set rules and guidelines for the fast-developing fields of fertility treatment, gene editing, and biological cloning.
Creating institutions and forming laws are only part of the answer: The other big issue is deciding who can and should enforce them.
For example, even if organizations and governments can agree which party should be liable if AI causes harm (the company, the coder, or the AI itself), what institution should hold the perpetrator to account, police the policy, deliver a verdict, and pass a sentence? Rather than create a new international police force for AI, a better solution is for countries to agree to regulate themselves under the same ethical banner.
The EU manages the tension between the need to set international standards and the desire of individual countries to set their own laws by issuing directives that are binding as to the result to be achieved, but leave room for national governments to choose how to get there. This can mean setting regulatory floors or ceilings: a maximum speed limit, for instance, under which member states can set any limit they choose.
Another solution is to write model laws for AI, where experts from around the world pool their talents in order to come up with a set of regulations that countries can then adopt in whole or in part. This is helpful to less-wealthy nations, as it saves them the cost of developing fresh legislation, while at the same time respecting their autonomy by not forcing them to adopt every provision.
* * *
The world needs a global treaty on AI, as well as other mechanisms for setting common laws and standards. We should be thinking less about how to survive a robot apocalypse and more about how to live alongside them, and that's going to require some rules that everyone plays by.
Elon Musk and Mark Zuckerberg are both wrong about AI and the robot apocalypse - Quartz