Bill Foster, a particle physicist-turned-congressman, on why he’s worried about artificial general intelligence

Posted: February 24, 2024 at 12:01 pm

Congress is just starting to ramp up its efforts to regulate artificial intelligence, but one member says he first encountered the technology in the 1990s, when he used neural networks to study physics. Now, Rep. Bill Foster, D-Ill., is returning to AI as a member of the new bipartisan task force on artificial intelligence, led by Reps. Ted Lieu, D-Calif., and Jay Obernolte, R-Calif., which was announced by House leadership earlier this week.

In a chat with FedScoop, the congressman outlined his concerns with artificial intelligence. The threat of deepfakes, he warned, can't necessarily be solved with detection and may require some kind of digital authentication platform. At the same time, Foster said he's also worried that the setup of committees and the varying levels of expertise within Congress aren't well situated to deal with the technology.

"There are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true," he told FedScoop. "It's much harder for the average member of Congress to push back on claims about AI. That's the difference. We're not as well defended against statements that may or may not be factual from lobbying organizations."

Compared to some other members of Congress, Foster appears particularly concerned about artificial general intelligence, a theoretical form of AI that, some argue, could end up rivaling human abilities. This technology doesn't exist yet, but some executives, including OpenAI CEO Sam Altman, have warned that this type of AI could raise massive safety issues. In particular, Foster argues that there will be a survival advantage to algorithmic systems that are opaque and deceptive.

(Critics, meanwhile, argue that discussion of AGI has distracted from opportunities to address the risks of AI systems that already exist today, like bias issues raised by facial recognition software.)

Foster's comments come in the nascent days of the AI task force, but they help elucidate how varied perspectives on artificial intelligence are, even within the Democratic Party. Unlike other areas, the technology is still relatively new to Congress, and positions on how to rein in AI, along with potential partisan divides, are still forming.

Editor's note: The transcript has been edited for clarity and length.

FedScoop: With this new AI task force, to what extent do you think you're going to be focusing on chips and hardware, given both the recent chips legislation and OpenAI CEO Sam Altman's calls for more focus on chip infrastructure?

Rep. Bill Foster: It's an interesting tradeoff. I doubt that this committee is going to be in a position to micromanage the [integrated circuit] industry. I first met Sam Altman about six years ago when I visited OpenAI [to talk about] universal basic income, which is one of the things that a lot of people point to when discussing the disruption to the labor market that [AI] is likely to cause.

When I started making noise about this inside the caucus, people expected the first jobs to fall would be factory assembly line workers, long-haul truck drivers, taxi drivers. That's taken longer than people guessed right then. But the other thing that's happened that's surprised people is how quickly the creative arts have come under assault from AI. There's a lot of nervousness among teachers about what exactly are the careers of the future that we're actually training people for.

I think one of the most important responses, something that the government can actually deliver and even deliver this session of Congress, is to provide people some way of defending themselves against deepfakes. There are two approaches to this. The first is to try to imagine that you can detect fraudulent media and to develop software to detect deepfake material. I'm not optimistic that that's going to work. It's going to be a cat-and-mouse game forever. Another approach is to provide citizens with a means of proving they are who they say they are online and that they are not a deepfake.

FS: An authentication service?

BF: A mobile ID. A digital driver's license or a secure digital identity. This is a way for someone to use their cell phone and their government-provided credential, like a passport or Real ID-compliant driver's license, and associate the two. [This could] take advantage of your cell phone's ability through AI to recognize its owner and also the modern cell phone's ability to be used like a security dongle. It has what's called a secure enclave, or a secure compute facility, that allows it to hold private encryption keys that make the device essentially a unique device in the world, one that can be associated with a unique person and their credential.
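To make the "security dongle" idea concrete, the pattern Foster is gesturing at is a standard challenge-response proof of key possession: a key pair is generated on the device, the public key is enrolled alongside the credential, and later the device signs a fresh challenge to prove it is the same enrolled device. The sketch below simulates that in software with Python's cryptography package; the enrollment and verification names are illustrative assumptions, not any real mobile-ID API, and a real secure enclave would keep the private key in hardware rather than in program memory.

```python
# Minimal sketch of device-bound identity proof (illustrative only).
# Assumption: the phone's secure enclave holds a private key that was
# enrolled alongside a government credential; here the "enclave" is
# simulated in software with the `cryptography` package.
import os

from cryptography.hazmat.primitives.asymmetric.ec import (
    ECDSA,
    SECP256R1,
    generate_private_key,
)
from cryptography.hazmat.primitives.hashes import SHA256

# Enrollment: the key pair is created on the device; only the public key
# (bound to the credential) is handed to the verifying service.
device_key = generate_private_key(SECP256R1())      # never leaves the "enclave"
registered_public_key = device_key.public_key()     # stored by the verifier

# Verification: the service sends a fresh random challenge, the device signs
# it, and the service checks the signature against the enrolled public key.
challenge = os.urandom(32)
signature = device_key.sign(challenge, ECDSA(SHA256()))

registered_public_key.verify(signature, challenge, ECDSA(SHA256()))  # raises if invalid
print("Device proved possession of the enrolled key")
```

The design point is that possession of the private key, not any shared secret or password, is what ties the online session to the enrolled person-plus-device pair.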

FS: How optimistic are you that this new AI task force is actually going to produce legislation?

BF: One reason I'm optimistic is the Republicans' choice of a chair: Jay Obernolte. He's another guy who keeps up the effort to maintain his technical currency. He and I can geek out about the actual state of the art, which is rather rare in the U.S. Congress. One of the missions, certainly for this task force, will be to try to educate members about at least the capabilities of AI.

FS: How worried are you that companies might try to influence what legislation is crafted to sort of benefit their own finances?

BF: I served on the Financial Services Committee for all my time in Congress, so I'm very familiar with industry trying to influence policy. It would shock me if that didn't happen. One of the dangers here is that there are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true. It's much harder for the average member of Congress to push back on claims about AI. That's the difference. We're not as well defended against statements that may or may not be factual from lobbying organizations.

FS: To what extent should the government itself be trying to build its own models or creating data sources for training those models?

BF: There is a real role for the national labs in curating datasets. This is already done at Argonne National Lab and others. For example, with datasets where privacy is a concern, like electronic medical records, where you really need to analyze them but you need a gatekeeper on privacy, that's something where a national laboratory that deals with very high-security data has the right culture to protect. Even when they're not developing the algorithms, they can allow third parties to come in and apply those algorithms to the datasets and give them the results without turning over all the private information.
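The gatekeeper arrangement Foster describes is essentially a trusted enclave for data: outside researchers submit the analysis, the data holder runs it inside its own boundary, and only the results come back out. A minimal sketch of that flow is below, assuming invented record fields and a single illustrative safeguard (a minimum cohort size); a real facility would layer on review of the submitted code, disclosure controls, and audit logging.

```python
# Conceptual sketch of a data "gatekeeper": third-party analysis code runs
# inside the data holder's boundary and only aggregate results leave it.
# The record fields and the minimum-cohort rule are illustrative assumptions.
from statistics import mean

PRIVATE_RECORDS = [  # stand-in for protected data such as medical records
    {"age": 54, "a1c": 6.1},
    {"age": 61, "a1c": 7.4},
    {"age": 47, "a1c": 5.8},
]


def run_gated_analysis(analysis_fn, min_cohort=3):
    """Run a submitted analysis on the private data and release only its result.

    The raw records never leave this function; requiring a minimum cohort size
    is one simple safeguard against releasing individual-level values.
    """
    if len(PRIVATE_RECORDS) < min_cohort:
        raise ValueError("cohort too small to release an aggregate")
    return analysis_fn(PRIVATE_RECORDS)


# A third party submits an aggregate query and receives only the summary value.
average_a1c = run_gated_analysis(lambda rows: mean(r["a1c"] for r in rows))
print(f"average A1C: {average_a1c:.2f}")
```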

FS: You've proposed legislation related to technology modernization and Congress. To what extent are members exposed to ChatGPT and similar technologies?

BF: The first response is to have Congress organize itself in a way that reflects today's economy. Information technology just passed financial services as a fraction of the economy. That puts it pretty much on a par with, for example, health care, which is also a little under 20%. If you look at the structure of Congress, it looks like a snapshot of our economy 100 years ago.

The AI disruption might be an opportunity for Congress to organize itself to match the modern economy. That's one of the big issues, I'd say. Obviously, that's the work of a decade at least. There's going to be a number of economic responses to the disruption of the workforce. I think the thing we just have to understand and appreciate [is] that we're all in this together. It used to be, 10 or 15 years ago, that people would say, "those poor long-haul truck drivers or taxi drivers or factory workers that lose their jobs." But no, it's everybody. With that realization, it will be easier to get a consensus that we've got to expand the safety net for those who have seen their skills and everything that defines their identity and their economic productivity put at risk from AI.

FS: How worried are you about artificial general intelligence?

BF: Over the last five years, I've become much more worried than I previously was. And the reason for that is there's this analogy between the evolution of AI algorithms and the evolution of living organisms. And if you look at living organisms and the strategies that have evolved, many of them are deceptive.

This happens in the natural kingdom. It will also happen, and it's already happening, in the evolution of artificial intelligence. If you imagine there are two AI algorithms, one of them completely transparent, so you understand how it thinks, [and] the other one a black box, then you ask yourself: which of those is more likely to be shut down and the research on it abandoned? The answer is that it is the transparent one that is more likely to be shut down, because you will see it, you will understand that [it has] evil thought processes and stop working on it. There will be a survival advantage to being opaque.

You are already seeing in some of these large language models behavior that looks like deceptive behavior. Certainly to the extent that it just models what's on the internet, there will be lots of deceptive behavior, documented on the internet, for it to model and to try out in its behavior. It will be a huge survival advantage for AI algorithms to be deceptive. It's similar to the whole scandal with Volkswagen and the smog emission software. When you have opaque algorithms, the companies might not even know that their algorithm is behaving this way, because they will put it under observation, they will test it. The difficulty is that [they're going to] start knowing they're under observation and then behave very nicely, and they'll do everything that you wish they would. Then, when it's out in the wild, they will just try to be as profitable as they can for their company. Those are the algorithms that will survive and displace other algorithms.
