by William Bryk
The science fiction writer Arthur C. Clarke famously wrote, "Any sufficiently advanced technology is indistinguishable from magic." Yet humanity may be on the verge of something much greater: a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. It's known as artificial superintelligence (ASI), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.
We've all encountered artificial intelligence (AI) in the media. We hear about it in science fiction movies like Avengers: Age of Ultron and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far remained on the periphery of our lives, nothing as revolutionary to society as films portray.
In recent decades, however, serious technological and computational progress has led many experts to acknowledge a seemingly inevitable conclusion: within a few decades, artificial intelligence could progress from the machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric rather than organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read every book in the Library of Congress in the first millisecond after you press enter on the program, and then integrate all that knowledge into a comprehensive analysis of humanity's 4,000-year intellectual journey before your next blink.
The history of AI is a similar story of exponential growth in intelligence. In 1936, Alan Turing published his landmark paper on Turing machines, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switches, ons and offs, 0s and 1s, could think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBM's AI bot Watson sent shockwaves around the world when it beat two human competitors on Jeopardy!2 Recently, big data companies such as Google, Facebook, and Apple have invested heavily in artificial intelligence and have helped support a surge in the field. Every time Facebook autonomously tags your friend, or Siri correctly interprets your words even as you yell at her, incensed, is a testament to how far artificial intelligence has come. Soon, you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBM's Watson will analyze your medical records and become your personal, all-knowing doctor.3
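Turing's insight, that computation can be built from nothing but simple symbol-flipping rules, can be sketched in a few lines. The toy single-state machine below is invented purely for illustration: its one rule walks along a tape inverting bits, but the same write-symbol, move-head recipe underlies any computation.

```python
# A minimal sketch of Turing's idea: a machine made of nothing but
# simple symbol-flipping rules can compute. This one-state toy
# machine inverts a binary string; the rule table is invented
# for illustration.

def run_turing_machine(tape):
    """One rule, applied repeatedly: flip the symbol under the
    head (0 <-> 1), move right, halt when the tape runs out."""
    tape = list(tape)
    head = 0
    while head < len(tape):
        tape[head] = '1' if tape[head] == '0' else '0'
        head += 1            # transition: write, then move right
    return ''.join(tape)

print(run_turing_machine('1011'))   # -> 0100
```

Everything a modern computer does reduces, in principle, to longer rule tables over the same 0s and 1s.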
While these soon-to-come achievements are tremendous, there are many who doubt the impressiveness of artificial intelligence, attributing its so-called intelligence to the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But skeptics do have a point. Current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but if you ask her to help with your math homework, she'll say, "Starting FaceTime with Matt Soffer." A self-driving car can get you anywhere in the United States, but make your destination the Gale Crater on Mars, and it will not understand the joke.
This is part of the reason AI scientists and enthusiasts consider Human Level Machine Intelligence (HLMI), roughly defined as a machine intelligence that outperforms humans in all intellectual tasks, the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Researchers who chose to participate were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity continues without significant negative disruption), or to check "never" if they felt HLMI would never be achieved. The median of the years given for 50% confidence was 2040; the median for 90% confidence was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the medians). This means that nearly half of the researchers who responded are very confident HLMI will be created within just 65 years.4
HLMI is not just another AI milestone to which we would eventually be desensitized. It is unique among AI accomplishments, a crucial tipping point for society, because once we have a machine that outperforms humans at everything intellectual, we can hand the task of inventing over to computers. The British mathematician I. J. Good said it best: "The first ultraintelligent machine is the last invention that man need ever make."5
There are two main routes to HLMI that many researchers view as the most promising. The first relies on complex machine learning algorithms. These algorithms, often inspired by the neural circuitry of the brain, focus on how a program can take input data, learn to analyze it, and produce a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6
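The learn-from-examples loop described above can be sketched with a minimal linear classifier. Everything here is invented for illustration: the two features (redness, roundness), the four toy "pictures," and the learning rate. Real systems use deep neural networks trained on millions of images, but the core idea, nudging internal weights toward the right answer after each labeled example, is the same.

```python
# A minimal sketch of learning from labeled examples.
# Each fruit is a pair of made-up features (redness, roundness),
# labeled 1 for apple, 0 for not-apple.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn the weights of a simple linear classifier."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                  # 0 when already correct
            w[0] += lr * err * x[0]         # nudge the weights
            w[1] += lr * err * x[1]         # toward the right answer
            b += lr * err
    return w, b

# Toy stand-ins for "thousands of pictures of apples in context"
samples = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.3), (0.1, 0.9)]
labels  = [1, 1, 0, 0]

w, b = train_perceptron(samples, labels)
def classify(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print(classify((0.85, 0.9)))   # an unseen apple-like fruit -> 1
```

After a few passes over the data, the program classifies a fruit it has never seen, which is the sense in which it has "learned" rather than been explicitly programmed.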
The second group of researchers might ask why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos right on top of our shoulders. Evolution has already designed a human level machine intelligence: a human! The goal of Whole Brain Emulation is to copy or simulate our brain's neural networks, taking advantage of nature's millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switch: it either fires or it doesn't. If we could image every neuron in a brain, and then simulate that data on a computer interface, we would have a human level artificial intelligence. Then we could add more and more neurons or tweak the design to maximize capability. This is the concept behind both the White House's BRAIN Initiative8 and the EU's Human Brain Project.9 In reality, these two routes to human level machine intelligence, algorithmic and emulation, are not black and white. Whatever technology achieves HLMI will probably be a combination of the two.
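The "neuron as a switch" picture corresponds to a simple threshold unit. The weights, thresholds, and the tiny two-neuron chain below are invented for illustration; an actual whole-brain emulation would have to simulate on the order of 86 billion such units and their connections.

```python
# A neuron "either fires or it doesn't": a minimal threshold unit
# in the spirit of McCulloch-Pitts. All numbers here are invented
# for illustration.

def fires(inputs, weights, threshold):
    """The neuron fires (returns 1) when the weighted sum of its
    inputs reaches the threshold; otherwise it stays silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two neurons wired in sequence: the second listens to the first.
sensory = fires([1, 1], weights=[0.6, 0.6], threshold=1.0)   # 1.2 >= 1.0, fires
motor   = fires([sensory, 0], weights=[1.5, 1.0], threshold=1.0)
print(sensory, motor)   # -> 1 1
```

Emulation bets that if each switch and each wire is copied faithfully, the behavior of the whole network, including cognition, comes along for free.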
Once HLMI is achieved, the rate of advancement could increase very quickly. In that same survey of AI researchers, 10% of respondents believed artificial superintelligence (roughly defined as an intelligence that greatly surpasses every human in most professions) would be achieved within two years of HLMI, and 50% believed it would take only 30 years or less.4
Why are these researchers convinced HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMIs. Thus, once HLMIs truly think better than humans, we will set them to work on themselves, improving their own code or designing more advanced neural networks. Then, once a more intelligent HLMI is built, the less intelligent HLMIs will set the smarter ones to build the next generation, and so on. Since computers act orders of magnitude more quickly than humans, this exponential growth in intelligence could occur unimaginably fast. This runaway intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.
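The compounding logic of recursive self-improvement can be made concrete with a toy model. The starting level and the 50% per-generation gain below are arbitrary assumptions; the point is only that when each generation's design skill scales with its own intelligence, growth compounds exponentially rather than adding a fixed amount per step.

```python
# A toy model of recursive self-improvement, purely illustrative:
# each generation of machine designs its successor, and its design
# skill scales with its own intelligence level. The starting level
# (1.0) and per-generation gain (50%) are arbitrary.

def intelligence_explosion(level=1.0, generations=10, gain=0.5):
    history = [level]
    for _ in range(generations):
        # A smarter designer builds a proportionally smarter
        # successor: exponential, not linear, growth.
        level = level + gain * level
        history.append(level)
    return history

history = intelligence_explosion()
print(f"after 10 generations: {history[-1]:.1f}x the starting level")
```

With these made-up numbers, ten design cycles multiply intelligence nearly sixty-fold; a linear process gaining the same 0.5 per step would reach only 6x. That gap is the intuition behind the "explosion."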
Here is a reimagining of a human-computer dialogue from the short story collection Angels and Spaceships:11 The year is 2045. On a bright sunny day, a private group of computer hackers working in a Silicon Valley garage has just completed a program that simulates a massive neural network on a computer interface. They came up with a novel machine learning algorithm and want to try it out. They give this newborn network the ability to learn and redesign itself with new code, and they give the program internet access so it can search for text to analyze. The college teens start the program, then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer has already output "Program Complete." A programmer types, "What have you read?" and the program responds, "The entire internet. Ask me anything." After deliberating for a few seconds, one of the programmers types, hands trembling, "Do you think there's a God?" The computer instantly responds, "There is now."
This story demonstrates the explosive nature of recursive self-improvement. Yet many might still question the rapid progression from HLMI to superintelligence that AI researchers predict. Although we often look at past trends to gauge the future, we should not do the same when evaluating future technological progress. Technological progress builds on itself: it is not just technology that is advancing, but the rate at which technology advances. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step from there to human intelligence could take only a few years. Humans think on a linear scale. To grasp the potential of what is to come, we must think exponentially.10
Another understandable doubt is that it's hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans, that 0s and 1s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of self are difficult to explain, if not currently unexplainable by science; it is called the "hard problem" of consciousness for a reason! But if we assume that consciousness is an emergent property, the result of a billion-year evolutionary process starting from the first self-replicating molecules, themselves the product of the molecular motions of inanimate matter, then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe that inanimate 0s and 1s could lead to consciousness, no matter how intricate the setup, we should try telling that to the atoms. Machine intelligence really just switches the hardware from organic tissue to faster, more efficient silicon and metal. If consciousness can emerge on one medium, why can't it emerge on another?
Thus, under the assumption that superintelligence is possible and may arrive within a century or so, the world is reaching a critical point in history. First there were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of the cosmos.
The implications of this intelligence for society would be far-reaching, and in some cases very destructive. Political structures might fall apart if we knew we were no longer the smartest species on Earth, if we were overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we do insects, and we all know what humans do to bugs when they overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed an open letter presented at the International Joint Conference on Artificial Intelligence. The letter warns of the coming dangers of artificial intelligence, urging prudence as we venture into the unknowns of an alien intelligence.12
When the AI researchers were asked to assign probabilities to the overall impact of ASI on humanity in the long run, the mean values were 24% extremely good, 28% good, 17% neutral, 13% bad, and 18% extremely bad (existential catastrophe).4 18% is not a statistic to take lightly.
Although artificial superintelligence surely comes with existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It could discover in one second what the brightest minds throughout history would need millions of years to even scrape the surface of. It could demonstrate to us higher levels of consciousness or thinking that we are not aware of, like the philosopher who brings the prisoners out of Plato's cave into the light of a world previously unknown. There may be much more to this universe than we currently understand. There must be, for we don't even know where the universe came from in the first place! Artificial superintelligence is a ticket to that understanding. There is a real chance that, within a century, we could bear witness to the greatest answers of all time. Are we ready to take the risk?
William Bryk '19 is a freshman in Canaday Hall.