A beginner's guide to the AI apocalypse: Artificial stupidity – The Next Web

Welcome to the latest article in TNW's guide to the AI apocalypse. In this series we'll examine some of the most popular doomsday scenarios prognosticated by modern AI experts.

In this edition we're going to flip the script and talk about something that might just save us from being destroyed by our robot overlords on September 23, 2029 (random date, but if it actually happens your mind is going to be blown), and that is: artificial stupidity.

But first, a few words about humans.

You won't find any comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity has surely been the biggest threat to humans throughout history.

Luckily, we're still the smartest species on the planet, so we've managed to remain in charge for a long time despite our shortcomings. Unfortunately, a new challenger has entered the arena in the form of AI. And despite its relative infancy, artificial intelligence isn't as far from challenging our status as the apex intellects as you might think.

The experts will tell you that we're really far away from human-level AI (HLAI). But maybe that's because nobody's quite sure what the benchmark for that would be. What should a human be able to do? Can you play the guitar? I can. Can you play the piano? I can't.

Sure, you can argue that a human-level AI should be able to learn to play the guitar or the piano, just as a human can; many people play both. But the point is that measuring human ability isn't a cut-and-dried endeavor.

Computer scientist Roman Yampolskiy, of the University of Louisville, recently published a paper discussing this exact concept. He writes:

Imagine that tomorrow a prominent technology company announces that they have successfully created an Artificial Intelligence (AI) and offers for you to test it out.

You decide to start by testing the developed AI for some very basic abilities such as multiplying 317 by 913 and memorizing your phone number. To your surprise, the system fails on both tasks.

When you question the system's creators, you are told that their AI is human-level artificial intelligence (HLAI), and as most people cannot perform those tasks, neither can their AI. In fact, you are told, many people can't even compute 13 x 17, or remember the name of a person they just met, or recognize their coworker outside of the office, or name what they had for breakfast last Tuesday.

The list of such limitations is quite significant and is the subject of study in the field of Artificial Stupidity.

Trying to define what HLAI should and shouldn't be able to do is just as difficult as trying to define the same for an 18-year-old human. Change a tire? Run a business? Win at Jeopardy?

This line of reasoning usually swings the conversation to narrow intelligence versus general intelligence. But here we run into a problem as well. General AI is, hypothetically, a machine capable of learning any function in any domain that a human can. That means a single GAI should be capable of replacing any human in the entire world given proper training.

Humans don't work that way, however. There's no general human intelligence: the combined potential of human abilities is not achievable by any one individual. If we build a machine capable of replacing any of us, it stands to reason that it will.

And that's cause for concern. We don't consider which ants are the most talented when we wreck an anthill to build a softball field, so why should our intellectual superiors?

The good news is that most serious AI experts don't think GAI will happen anytime soon, so the most we'll have to deal with is whatever fuzzy definition of HLAI the person or company claiming it comes up with. Much like Google decided it had achieved quantum supremacy by coming up with an arbitrary (and disputed) benchmark, it'll surprise nobody in the industry if, for example, the AI crew at Facebook determines that a specific translation algorithm they've invented meets their self-imposed criteria for HLAI (or something like that). Maybe it'll be Amazon or OpenAI.

The bad news is that you also won't find many reputable scientists willing to rule GAI out. And that means we could be a eureka! or two away from someone like Ian Goodfellow oopsing up an algorithm that ties general intelligence to hardware. And when that happens, we could be looking at Bostrom's Paperclip Maximizer in full effect. In other words: the robots won't kill us out of spite; they'll just forget we exist and transform the world and its habitats to suit their needs, just as we did.

That's one theory, anyway. And, as with any potential extinction scenario, it's important to have a plan to stop it. Since we can't know exactly what's going to happen once a superintelligent artificial being emerges, we should probably just start hard-coding artificial stupidity into the mix.

The right dose of unwavering limitations (think Asimov's Laws of Robotics, but more specific to the number of parameters or the amount of compute a given model can use, and to the level of network integration allowed between disparate systems) could spell the difference between our existence and extinction.
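As a toy illustration of the idea (every name and limit here is hypothetical, invented for this sketch, not an actual safety mechanism or anyone's real implementation), such a hard-coded cap could be as blunt as a guard that refuses to run any model exceeding a fixed parameter budget or linking to more than a set number of other systems:

```python
# Toy sketch of an "artificial stupidity" guard. The caps below are
# arbitrary, hypothetical numbers chosen purely for illustration.

MAX_PARAMETERS = 10_000_000   # hard cap on model size (hypothetical)
MAX_LINKED_SYSTEMS = 1        # cap on network integration (hypothetical)

class CapabilityCapExceeded(Exception):
    """Raised when a model would exceed its hard-coded limits."""

def check_model(num_parameters: int, linked_systems: int) -> bool:
    """Return True if the model stays within its 'stupidity' budget."""
    if num_parameters > MAX_PARAMETERS:
        raise CapabilityCapExceeded(
            f"{num_parameters:,} parameters exceeds cap of {MAX_PARAMETERS:,}")
    if linked_systems > MAX_LINKED_SYSTEMS:
        raise CapabilityCapExceeded(
            f"{linked_systems} linked systems exceeds cap of {MAX_LINKED_SYSTEMS}")
    return True
```

In practice, of course, the hard part isn't writing the check; it's making the limits genuinely unwavering against a system with an incentive to route around them.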

So, rather than attempting to program advanced AI with a philosophical view on the sanctity of human life and what constitutes the greater good, we should just hamstring them with artificial stupidity from the start.

Published July 17, 2020 19:55 UTC
