The future of AI is being shaped right now. How should policymakers respond?

Posted: April 9, 2021 at 2:41 am

For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1950s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn't necessarily translate into a commercially viable product, let alone a superintelligent one.

And for a while in the '60s, '70s, and '80s, it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: AI winters, periods when investors and researchers got bored with the lack of progress in the field and devoted their attention elsewhere.

No one is bored now.

Limited AI systems have taken on an ever-bigger role in our lives, wrangling our news feeds, trading stocks, translating and transcribing text, scanning digital pictures, taking restaurant orders, and writing fake product reviews and news articles. And while there's always the possibility that AI development will hit another wall, there's reason to think it won't: All of the above applications have the potential to be hugely profitable, which means there will be sustained investment from some of the biggest companies in the world. AI capabilities are reasonably likely to keep growing until they're a transformative force.

A new report from the National Security Commission on Artificial Intelligence (NSCAI), a committee Congress established in 2018, grapples with some of the large-scale implications of that trajectory. In 270 pages and hundreds of appendices, the report tries to size up where AI is going, what challenges it presents to national security, and what can be done to set the US on a better path.

It is by far the best writing from the US government on the enormous implications of this emerging technology. But the report isn't without flaws, and its shortcomings underscore how hard it will be for humanity to get a handle on the warp-speed development of a technology that's at once promising and perilous.

As it exists right now, AI poses policy challenges. How do we determine whether an algorithm is fair? How do we stop oppressive governments from using AI surveillance for totalitarianism? Those questions are mostly addressable with the same tools the US has used in other policy challenges over the decades: Lawsuits, regulations, international agreements, and pressure on bad actors, among others, are tried-and-true tactics to control the development of new technologies.

But for more powerful and general AI systems (advanced systems that don't yet exist but may be too powerful to control once they do), such tactics probably won't suffice.

When it comes to AI, the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans; that is, humanity doesn't construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.

The problem is that, because the tech is necessarily speculative, we don't know as much as we'd like to about how to design those systems. In many ways, we're in a position akin to someone worrying about nuclear proliferation in 1930. It's not that nothing useful could have been done at that early point in the development of nuclear weapons, but at the time it would have been very hard to think through the problem and to marshal the resources (let alone the international coordination) needed to tackle it.

In its new report, the NSCAI wrestles with these problems and (mostly successfully) addresses the scope and key challenges of AI; however, it has limitations. The commission nails some of the key concerns about AI's development, but its US-centric vision may be too myopic to confront a problem as daunting and speculative as an AI that threatens humanity.

AI has seen extraordinary progress over the past decade. AI systems have improved dramatically at tasks including translation, playing games such as chess and Go, answering important questions in biology research (such as predicting how proteins fold), and generating images.

These systems also determine what you see in a Google search or in your Facebook News Feed. They compose music and write articles that, at first glance, read as though a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.

All of those are instances of narrow AI: computer systems designed to solve specific problems, versus systems with the sort of generalized problem-solving capabilities humans have.

But narrow AI is getting less narrow, and researchers have gotten better at creating computer systems that generalize learning capabilities. Instead of mathematically describing detailed features of a problem for a computer to solve, today it's often possible to let the computer system learn the problem by itself.

As computers get good enough at performing narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI's famous GPT series of text generators is, in one sense, the narrowest of narrow AIs: it just predicts what the next word will be, based on the previous words it's prompted with and its vast store of human language. And yet, it can now identify questions as reasonable or unreasonable as well as discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first).
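To make "just predicting the next word" concrete, here is a minimal toy sketch in Python. It is not how GPT works internally (GPT is a large neural network trained on enormous text corpora); it simply counts which words follow which in a tiny invented corpus and samples accordingly, which is the same prediction task at a vastly smaller scale.

```python
# A toy, hypothetical illustration (not OpenAI's actual model): learn which
# words tend to follow which, then generate text by repeatedly predicting
# a plausible next word. GPT does this with a neural network and far more data.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=8):
    """Repeatedly sample a likely next word, given only the previous one."""
    words = [start]
    for _ in range(length):
        candidates = next_words[words[-1]]
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

The point of the analogy: a system trained only on that narrow prediction task can, once the model and its training data are large enough, end up encoding a surprising amount about how language, and the world it describes, fits together.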

What these developments show us is this: In order to be very good at narrow tasks, some AI systems eventually develop abilities that are not narrow at all.

The NSCAI report acknowledges this eventuality. "As AI becomes more capable, computers will be able to learn and perform tasks based on parameters that humans do not explicitly program, creating choices and taking actions at a volume and speed never before possible," the report concludes.

That's the general dilemma the NSCAI is tasked with addressing. A new technology, with both extraordinary potential benefits and extraordinary risks, is being developed. Many of the experts working on it warn that the results could be catastrophic. What concrete policy measures can the government take to get clarity on a challenge such as this one?

The NSCAI report is a significant improvement on much of the existing writing about artificial intelligence in one important respect: It understands the magnitude of the challenge.

For a sense of that magnitude, it's useful to imagine the questions involved in figuring out government policy on nuclear nonproliferation in the 1930s.

By 1930, there was certainly some scientific evidence that nuclear weapons would be possible. But there were no programs anywhere in the world to make them, and there was even some dissent within the research community about whether such weapons could ever be built.

As we all know, nuclear weapons were built within the next decade and a half, and they changed the trajectory of human history.

Given all that, what could the government have done about nuclear proliferation in 1930? Decide whether it was wise to push ahead with developing such weapons itself, perhaps, or develop surveillance systems that would alert the country if other nations were building them.

In practice, the government in 1930 did none of these things. When an idea is just beginning to gain a foothold among the academics, engineers, and experts who work on it, it's hard for policymakers to figure out where to start.

When considering these decisions, our leaders confront the classic dilemma of statecraft identified by Henry Kissinger, as Chair Eric Schmidt and Vice Chair Bob Work note in the NSCAI report: "When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared."

As a result, much government writing about AI to date has seemed fundamentally confused, limited by the fact that no one knows exactly what transformative AI will look like or what key technical challenges lie ahead.

In addition, a lot of the writing about AI, both by policymakers and by technical experts, thinks small, focusing on possibilities such as whether AI will eliminate call centers rather than on the ways general AI, or AGI, will usher in a dramatic technological realignment, if it's built at all.

The NSCAI analysis does not make this mistake.

"First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence, and in some instances exceed human performance, is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience," reads the executive summary.

The report also extrapolates from current progress in machine learning to identify some specific areas where AI might enable notable good or notable harm:

Combined with massive computing power and AI, innovations in biotechnology may provide novel solutions for mankind's most vexing challenges, including in health, food production, and environmental sustainability. Like other powerful technologies, however, applications of biotechnology can have a dark side. The COVID-19 pandemic reminded the world of the dangers of a highly contagious pathogen. AI may enable a pathogen to be specifically engineered for lethality or to target a genetic profile: the ultimate range and reach weapon.

One major challenge in communicating about AI is that it's much easier to predict the broad effects that unleashing fast, powerful research and decision-making systems on the world will have (speeding up all kinds of research, for both good and ill) than it is to predict the specific inventions those systems will come up with. The NSCAI report outlines some of the ways AI will be transformative, and some of the risks those transformations pose that policymakers should be thinking about how to manage.

Overall, the report seems to grasp why AI is a big deal, what makes it hard to plan for, and why it's necessary to plan for it anyway.

But there's an important way in which the NSCAI report falls short. Recognizing that AI poses enormous risks and that it will be powerful and transformative, the report foregrounds a posture of great-power competition, with both eyes on China, to address the looming problem before humanity.

"We should race together with partners when AI competition is directed at the moonshots that benefit humanity, like discovering vaccines. But we must win the AI competition that is intensifying strategic competition with China," the report concludes.

China is run by a totalitarian regime that poses geopolitical and moral problems for the international community. China's repression in Hong Kong and Tibet, and the genocide of the Uyghur people in Xinjiang, have been technologically aided, and the regime should not have more powerful technological tools with which to violate human rights.

There's no question that China developing AGI would be a bad thing. And the countermeasures the report proposes, especially an increased effort to attract the world's top scientists to America, are a good idea.

More than that, the US and the global community should absolutely devote more attention and energy to addressing Chinas human rights violations.

But I have hesitations about the part of the report that proposes beating China to the punch by accelerating AI development in the US, potentially through direct government funding. Adopting an arms-race mentality on AI would make involved companies and projects more likely to discourage international collaboration, cut corners, and evade transparency measures.

In 1939, at a conference at George Washington University, Niels Bohr announced the news that uranium fission had been discovered. Physicist Edward Teller recalled the moment:

For all that the news was amazing, the reaction that followed was remarkably subdued. After a few minutes of general comment, my neighbor said to me, "Perhaps we should not discuss this. Clearly something obvious has been said, and it is equally clear that the consequences will be far from obvious." That seemed to be the tacit consensus, for we promptly returned to low-temperature physics.

Perhaps that consensus would have prevailed, if World War II hadn't started. It took the concerted efforts of many brilliant researchers to bring nuclear bombs to fruition, and at first most of them hesitated to be a part of the effort. Those hesitations were reasonable: inventing the weaponry with which to destroy civilization is no small thing. But once they had reason to fear that the Nazis were building the bomb, those reservations melted away. The question was no longer "Should these be built at all?" but "Should these be built by us, or by the Nazis?"

It turned out, of course, that the Nazis were never close, nor was the atomic bomb needed to defeat them. And the US development of the bomb caused its geopolitical adversary, the USSR, to develop one too (through espionage) much sooner than it otherwise would have. The world then spent decades teetering on the brink of nuclear war.

The specter of a mess like that looms large in everyone's minds when they think of AI.

"I think it's a mistake to think of this as an arms race," Gilman Louie, a commissioner on the NSCAI report, told me, though he immediately added, "We don't want to be second."

An arms race can push scientists toward working on a technology that they have reservations about, or one they don't know how to safely build. It can also mean that policymakers and researchers don't pay enough attention to the AI alignment problem, which is really the looming issue when it comes to the future of AI.

AI alignment is the work of trying to design intelligent systems that are accountable to humans. An AI, even in well-intentioned hands, will not necessarily develop in ways consistent with human priorities. Think of it this way: An AI aiming to increase a company's stock price, or to ensure a robust national defense against enemies, or to make a compelling ad campaign, might take large-scale actions (like disabling safeguards, rerouting resources, or interfering with other AI systems) that we would never have asked for or wanted. Those large-scale actions in turn could have drastic consequences for economies and societies.
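For a flavor of the worry, here is a deliberately tiny, hypothetical sketch (all action names and numbers invented): an optimizer given a proxy objective ("maximize this score") will happily pick the action that scores highest on the proxy, even when that action is terrible by the measure we actually cared about.

```python
# Hypothetical illustration of the alignment problem: an optimizer pursues
# the objective it was literally given (a proxy), not the outcome we meant.
# All names and numbers here are invented for the sake of the example.

actions = {
    # action: (proxy score the system was told to maximize, value to humans)
    "run an honest ad campaign":        (5,  5),
    "quietly disable a safety review":  (9, -10),
    "reroute resources from rivals":    (8,  -6),
}

def misaligned_choice(actions):
    """Pick whatever maximizes the specified proxy, ignoring human value."""
    return max(actions, key=lambda a: actions[a][0])

def aligned_choice(actions):
    """What we actually wanted: maximize value to humans."""
    return max(actions, key=lambda a: actions[a][1])

print("Optimizing the stated objective:", misaligned_choice(actions))
print("Optimizing what we meant:       ", aligned_choice(actions))
```

Real alignment research is about far harder versions of this gap: specifying objectives, and building oversight, so that a much more capable optimizer doesn't exploit the difference between what we said and what we meant.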

It's all speculative, for sure, but that's the point. We're in the year 1930, confronting the potential creation of a world-altering technology that might be here a decade and a half from now or might be five decades away.

Right now, our capacity to build AIs is racing ahead of our capacity to understand and align them. And trying to make sure AI advancements happen in the US first can just make that problem worse, if the US doesn't also invest in the research (which is much more immature, and has less obvious commercial value) to build aligned AIs.

"We ultimately came away with a recognition that if America embraces and invests in AI based on our values, it will transform our country and ensure that the United States and its allies continue to shape the world for the good of all humankind," NSCAI executive director Yll Bajraktari writes in the report. But here's the thing: It's entirely possible for America to embrace and invest in an AI research program based on liberal-democratic values that still fails, simply because the technical problem ahead of us is so hard.

This is an important respect in which AI is not analogous to nuclear weapons, where the most important policy decisions were whether to build them at all and how to build them faster than Nazi Germany.

In other words, with AI, there's not just the risk that someone else will get there first. A misaligned AI built by an altruistic, transparent, careful research team with democratic oversight and a goal to share its profits with all of humanity will still be a misaligned AI, one that pursues its programmed goals even when they're contrary to human interests.

The limited scope of the NSCAI report is a fairly obvious consequence of what the commission is and what it does. The commission was created in 2018 and tasked with recommending policies that would advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.

Right now, the part of the US government that takes artificial intelligence risks seriously is the national security and defense community. That's because AI risk is weird, confusing, and futuristic, and the national security community has more latitude than the rest of the government to spend resources seriously investigating weird, confusing, and futuristic things.

But AI isn't just a defense and security issue; it will affect (indeed, is already affecting) most aspects of society, like education, criminal justice, medicine, and the economy. And to the extent it is a defense issue, that doesn't mean that traditional defense approaches make sense.

If, before the invention of electricity, the only people working on producing electricity had been armies interested in electrical weapons, they'd not just be missing most of the effects of electricity on the world, they'd even be missing most of the effects of electricity on the military, which have to do with lighting, communications, and intelligence, rather than weapons.

The NSCAI, to its credit, takes AI seriously, including the non-defense applications and including the possibility that AI built in America by Americans could still go wrong. "The thing I would say to American researchers is to avoid skipping steps," Louie told me. "We hope that some of our competitor nations, China, Russia, follow a similar path: demonstrate it meets thorough requirements for what we need to do before we use these things."

But the report, overall, looks at AI from the perspective of national defense and international competition. It's not clear that will be conducive to the international cooperation we might need in order to ensure no one anywhere in the world rushes ahead with an AI system that isn't ready.

Some AI work, at least, needs to be happening in a context insulated from arms-race concerns and fears of China. By all means, let's devote greater attention to China's use of tech in perpetrating human rights violations. But we should hesitate to rush ahead with AGI work without a sense of how we'll make it happen safely, and there needs to be more collaborative global work on AI, with a much longer-term lens. The perspectives that work could create room for just might be crucial ones.
