I will admit that I have been distracted from both popular discussion and the academic work on the risks of emergent superintelligence. However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area. Hopefully, despite my ignorance, this experience will offer something new or at least explain one approach in a new way.
The question about superintelligence I wish to address is the paperclip-universe problem. Suppose that an industrial program, tasked with maximizing the number of paperclips, is also equipped with a general intelligence program so that it can pursue this objective in the most creative ways, along with internet connectivity and text-processing facilities so that it can discover other mechanisms. There is then the possibility that the program does not take its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.
This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that doesn't admit some kind of loophole or paradox which, if pursued with mechanical single-mindedness, is either similarly narrowly destructive or self-defeating.
Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we've described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. Done well, we would have a system that couples its initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with the understanding that those goals are appropriate only to that initial list, and that finding new potential means requires developing a richer understanding of potential ends.
How can this work? It's easy to imagine such an algorithmic admission leading to paralysis, either from finding contradictory objectives that apparently admit no solution, or from an analysis paralysis that demands assurance that no goals remain undiscovered before proceeding. Alternatively, specified incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as the program single-mindedly consumes resources. Clearly, a satisfactory superintelligence would need to reason appropriately about the goal-discovery process.
There is a profession that has figured out a heuristic form of reasoning about goal-discovery processes: designers. Designers have coined the phrase "the fuzzy front end" for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch back and forth rapidly between candidate solutions and analysis of the potential impacts of those designs, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to surface all of the obvious ideas before undertaking a deeper analysis. Seasoned designers develop a sense of when stakeholders are holding back and need to be prompted, and when equivocating stakeholders should be encouraged to move on. Designers interleave a series of prototypes, experiential exercises, and pilot runs into their work, to make sure that interventions really behave the way their analysis seems to indicate.
These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. "Nonparametric" does not mean that there are no parameters, but rather that the parameters are not given in advance, and that inferring whether there are further parameters is part of the task. Suppose you were to move to a new town and ask around about the best restaurant. The first answer would certainly be new to you, but as you asked more people, you would start getting new answers more rarely. The likelihood of a given answer would also begin to converge. In some cases the answers will be concentrated on a few restaurants, and in other cases they will be more dispersed. Either way, once we have an idea of how concentrated the answers are, we might see that a particular stretch of not discovering new answers is just bad luck, and that we should pursue further inquiry.
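The restaurant story above can be simulated with one of the standard constructions in nonparametric Bayesian inference, the Chinese restaurant process. This is a minimal sketch, not part of the original essay: the concentration parameter alpha and the query count are illustrative choices, and the function name is mine.

```python
import random

def crp_sample(n_queries, alpha, rng=random.Random(0)):
    """Simulate asking n_queries townspeople for 'the best restaurant'
    under a Chinese restaurant process with concentration alpha.
    Returns a list of answer counts, one entry per distinct restaurant."""
    counts = []
    for i in range(n_queries):
        total = i  # answers gathered so far
        # probability the next answer names a restaurant we have not heard of
        p_new = alpha / (total + alpha)
        if rng.random() < p_new:
            counts.append(1)  # a brand-new answer
        else:
            # otherwise, an existing answer, chosen in proportion to its popularity
            r = rng.random() * total
            for k, c in enumerate(counts):
                r -= c
                if r < 0:
                    counts[k] += 1
                    break
    return counts

counts = crp_sample(200, alpha=2.0)
# New answers arrive quickly at first and then more rarely; the number of
# distinct answers grows only logarithmically in the number of queries.
print(len(counts), sorted(counts, reverse=True)[:5])
```

Larger values of alpha produce more dispersed answers; smaller values concentrate them on a few restaurants, which is exactly the concentration the essay says we can learn from the data.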
Asking why provides a list of critical features that can be used to direct different inquiries that fill out the picture. What's the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers, has the best value for money, is the tastiest, has the friendliest service? Designers discover aspects of their goals in an open-ended way that allows discovery to proceed in quick cycles of learning, taking on different aspects of the problem in turn. This behavior would work very well as an active-learning formulation of relational nonparametric inference.
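One simple way to make the active-learning idea concrete is to direct the next question at whichever facet of "best" is still most uncertain, measured by the entropy of the answers gathered so far. The facet names and tallies below are hypothetical, and entropy-greedy selection is just one of several reasonable acquisition rules:

```python
import math

def entropy(counts):
    """Shannon entropy (nats) of an empirical answer distribution."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c)

# Hypothetical tallies of answers gathered so far, one list per facet of "best":
facets = {
    "overall": [40, 3, 2],    # answers have largely converged
    "mexican": [5, 4, 4, 3],  # still dispersed: more to learn here
}

# Ask next about the facet whose answers are most dispersed.
next_query = max(facets, key=lambda f: entropy(facets[f]))
print(next_query)  # -> mexican
```

This captures the designer's quick cycles: each answer updates one facet's tally, which in turn redirects the next round of questioning.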
There comes a point at which information-gathering activities are less helpful than attending to the feedback from activities that more directly act on existing goals. This happens at a cost/risk equilibrium: the cost of further discovery activities balances the risk of intervening on incomplete information. In many circumstances the line between information gathering and direct intervention will be fuzzier than this suggests, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far.
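The cost/risk equilibrium can be sketched as a toy stopping rule, under assumptions not in the original essay: that the chance of uncovering an overlooked goal (p_new, estimable from a process like the restaurant simulation) and the downside of intervening without it can be expressed in the same cost units.

```python
def keep_exploring(p_new, risk_if_wrong, discovery_cost):
    """Toy stopping rule: continue discovery while the expected risk averted
    by finding an overlooked goal (p_new * risk_if_wrong) still exceeds the
    cost of one more elicitation exercise. All quantities are hypothetical
    and assumed to share one unit of cost."""
    return p_new * risk_if_wrong > discovery_cost

# Early on, new stakeholders and goals are likely, so exploration pays:
print(keep_exploring(p_new=0.5, risk_if_wrong=100.0, discovery_cost=5.0))   # True
# Once novelty has become rare, direct intervention is the better bet:
print(keep_exploring(p_new=0.01, risk_if_wrong=100.0, discovery_cost=5.0))  # False
```

Reversible experiments and pilots blur this boundary in practice, since they lower risk_if_wrong while still paying down discovery_cost.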
From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of the solution from the perspective of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or to whether the discovery process is at equilibrium. This mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than taking the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.
Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided without being supplemented by other ideas. To consider discovery analytically is not to discount the power of knowing about the unknown, but it does not intrinsically value non-contingent truths. In my next essay, I will take on this topic.
For a more detailed explanation, and an example of how to extend engineering design assessment to include nonparametric criteria, see "The Methodological Unboundedness of Limited Discovery Processes," FORMakademisk 7(4).
Go here to see the original:
The Nonparametric Intuition: Superintelligence and Design Methodology - Lifeboat Foundation (blog)