Have We Reached Peak AI Hysteria? – Niskanen Center (blog)

July 21, 2017 by Ryan Hagemann

At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism about the future of technology and innovation. From solar power to our place among the stars, humanity's future looks pretty bright, according to Musk. But he was particularly dour on one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.

Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It's not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he's called for such stringent controls on the technology. And he's not alone.

In the preface to his book Superintelligence, Nick Bostrom contends that developing AI is "quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face." Even Stephen Hawking has jumped on the panic wagon.

These concerns aren't uniquely held by innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.

All of this suggests that we are in the midst of a full-blown AI techno-panic. Fear of mass unemployment from automation and public safety concerns over autonomous vehicles have only exacerbated the growing tensions between man and machine.

Luckily, if history is any guide, the height of this hysteria means we're probably on the cusp of a period of deflating dread. New emerging technologies often stoke frenzied fears over worst-case scenarios, at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing out of panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.

Of course, detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there's something the government could do to minimize the costs while maximizing the benefits of AI, then policymakers should be all over that. So what's the solution?

Gov. Doug Ducey (R-AZ) asked that very question: "You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations, beyond 'slow down,' should be enacted. Typically, policymakers don't get in front of entrepreneurs or innovators." Musk's response? First, government needs to gain insight by standing up an agency to make sure the situation is understood. Then put in place regulations to protect public safety. That's it. Well, not quite.

The government has, in fact, already taken a stab at the question of whether such an approach would be an ideal treatment of this technology. Last year, the Obama administration's Office of Science and Technology Policy released a report on the future of AI, derived from hundreds of comments from industry, civil society, technical experts, academics, and researchers.

While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign, and it certainly didn't call for preemptive bans and regulatory approvals for AI. In fact, it concluded that it was very unlikely that machines will exhibit "broadly-applicable intelligence comparable to or exceeding that of humans" in the next 20 years.

In short, put off those end-of-the-world parties, because AI isn't going to snuff out civilization any time soon. Instead, embracing preemptive regulations could just smother domestic innovation in this field.

Despite Musk's claims, preemptive regulation wouldn't stop AI development; firms would simply outsource research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by those artificial barriers than most technologies, especially in an era of cloud computing and diminishing costs to computer processing speeds, to say nothing of the rise of quantum computing.

Musk's solution to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren't going to chart a better course to the future, any more than preemptive regulations for Google would have paved the way to our current age of information abundance.

Musk of all people should know the future is always rife with uncertainty; after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the public good demanded even more stringent regulations for commercial space launch or autopilot features? That's unlikely, and, notwithstanding Musk's apprehensions, the same is probably true for AI.
