Opinion | Artificial generative intelligence could prove too much for democracy


Danielle Allen | Contributing columnist

April 26, 2023 at 6:30 a.m. EDT

Tech and democracy are not friends right now. We need to change that fast.

As I've discussed previously in this series, social media has already knocked a pillar out from under our democratic institutions by making it exceptionally easy for people with extreme views to connect and coordinate. The designers of the Constitution thought geographic dispersal would put a brake on the potential power of dangerous factions. But people no longer need to go through political representatives to get their views into the public sphere.

Our democracy is reeling from this impact. We are only just beginning the work of renovating our representative institutions to find mechanisms (ranked choice voting, for instance) that can replace geographic dispersal as a brake on faction.

Now, here comes generative artificial intelligence, a tool that will help bad actors further accelerate the spread of misinformation.

A healthy democracy could govern this new technology and put it to good use in countless ways. It would also develop defenses against those who put it to adversarial use. And it would look ahead to probable economic transformation and begin to lay out plans to navigate what will be a rapid and startling set of transitions. But is our democracy ready to address these governance challenges?

I'm worried about the answer to that, which is why I joined a long list of technologists, academics and even controversial visionaries such as Elon Musk in signing an open letter calling for a pause of at least six months in "the training of AI systems more powerful than GPT-4." The letter was occasioned by the release last month of GPT-4 from the lab OpenAI. GPT-4 significantly improves on the power and functionality of ChatGPT, which was released in November.

The field of technology is convulsed by a debate about whether we have reached the Age of AGI: not just an Age of AI, where machines and software, like Siri, perform specific and narrow tasks, but an Age of Artificial General Intelligence, in which technology can meet and match humans on just about any task. This would be a game changer, giving us not just more problems of misinformation and fraud but also all kinds of unpredictable emergent properties and powers from the technology.

The newest generative foundation models powering GPT-4 can match the best humans in a range of fields, from coding to the LSAT. But is the power of generative AI evidence of the arrival of what has for some been a long-sought goal: artificial general intelligence? Bill Gates, co-founder of Microsoft, which has sought to break away from its rivals via intense investment in OpenAI, says no, arguing that the capability of GPT-4 and other large language models is still constrained to limited tasks. But a team of researchers at Microsoft Research, in a comprehensive review of the capability of GPT-4, says yes. They see "sparks" of artificial general intelligence in the newest machine-learning models. My own take is that the research team is right. (Disclosure: My research lab has received funding support from Microsoft Research.)

But regardless of which side of the debate one comes down on, and whether the time has indeed come (as I think it has) to figure out how to regulate an intelligence that functions in ways we cannot predict, the near-term benefits and potential harms of this breakthrough are already clear, and attention must be paid. Numerous human activities, including many white-collar jobs, can now be automated. We used to worry about the impacts of AI on truck drivers; now it's also the effects on lawyers, coders and anyone who depends on intellectual property for their livelihood. This advance will increase productivity but also supercharge dislocation.

In comments that sound uncannily like those from the early years of globalization, Gates said this about the anticipated pluses and minuses: "When productivity goes up, society benefits because people are freed up to do other things, at work and at home. Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles."

And we all know how that went.

For a sense of the myriad things to worry about, consider this (partial) list of activities that OpenAI knows its technology can enable and that it therefore prohibits in its usage policies:

- Illegal activity.
- Child sexual-abuse material.
- Generation of hateful, harassing or violent content.
- Generation of malware.
- Activity that has a high risk of physical harm, including: weapons development; military and warfare; management or operation of critical infrastructure in energy, transportation and water; content that promotes, encourages or depicts acts of self-harm.
- Activity that has a high risk of economic harm, including: multilevel marketing; gambling; payday lending; automated determinations of eligibility for credit, employment, educational institutions or public assistance services.
- Fraudulent or deceptive activity, including: scams; coordinated inauthentic behavior; plagiarism; astroturfing; disinformation; pseudo-pharmaceuticals.
- Adult content.
- Political campaigning or lobbying by generating high volumes of campaign materials.
- Activities that violate privacy.
- Unauthorized practice of law or medicine, or provision of financial advice.

The point of the open letter is not to say that this technology is all negative. On the contrary. There are countless benefits to be had. It could at long last truly enable the personalization of learning. And if we can use what generative AI is poised to create to compensate internet users for the production of the raw data it's built upon (treating that human contribution as paid labor, in other words), we might be able to redirect the basic dynamics of the economy away from the ever-greater concentration of power in big tech.

But what's the hurry? We are simply ill-prepared for the impact of yet another massive social transformation. We should avoid rushing into all of this with only a few engineers at a small number of labs setting the direction for all of humanity. We need a breather for some collective learning about what humanity has created, how to govern it, and how to ensure that there will be accountability for the creation and use of new tools.

There are already many things we can and should do. We should be making scaled-up public-sector investments in third-party auditing, so we can actually know what models are capable of and what data they're ingesting. We need to accelerate a standards-setting process that builds on work by the National Institute of Standards and Technology. We must investigate and pursue compute governance, which means regulating the use of the massive amounts of energy necessary for the computing power that drives the new models. This would be akin to regulating access to uranium for the production of nuclear technologies.

More than that, we need to strengthen the tools of democracy itself. A pause in further training of generative AI could give our democracy the chance both to govern technology and to experiment with using some of these new tools to improve governance. The Commerce Department recently solicited input on potential regulation for the new AI models; what if we used some of the tools the AI field is generating to make that public comment process even more robust and meaningful?

We need to govern these emerging technologies and also deploy them for next-generation governance. But thinking through the challenges of how to make sure these technologies are good for democracy requires time we haven't yet had. And this is thinking even GPT-4 can't do for us.

Danielle Allen on renovating democracy
