The problem with Big Tech’s voluntary AI safety commitments

Posted: July 29, 2023 at 8:46 pm

The European Union might be making strides toward regulating artificial intelligence (with passage of the AI Act expected by the end of the year), but the US government has largely failed to keep pace with the global push to put guardrails around the technology.

The White House, which said it would continue to take executive action and pursue bipartisan legislation, introduced an interim measure last week in the form of voluntary commitments for safe, secure, and transparent development and use of AI technology.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to prioritize research on societal risks posed by AI systems and incent third-party discovery and reporting of issues and vulnerabilities, among other things.

But according to academic experts, the agreements fall far short.

“The elephant in the room here is that the United States continues to push forward with voluntary measures, whereas the European Union will pass the most comprehensive piece of AI legislation that we’ve seen to date,” Brandie Nonnecke, founding director of UC Berkeley’s CITRIS Policy Lab, told Tech Brew.

“[These companies] want to be there in helping to essentially develop the test by which they will be graded,” Nonnecke said. That, combined with cuts to trust and safety teams in recent months, is cause for skepticism, she added.

Emily Bender, a University of Washington professor who specializes in computational linguistics and natural language processing, said the vagueness of the commitments could be a reflection of what the companies were willing to agree to (the agreements’ voluntary nature at work).

“We really shouldn’t have the government compromising with companies,” she said. “The government should act in the public interest and regulate.”


Bender also voiced concerns about the measures’ approach to potential future risks, pointing to commitments to give significant attention to “the effects of system interaction and tool use” and “the capacity for models to make copies of themselves or self-replicate.”

“And that to me doesn’t sound like grounded thinking about actual risks,” she added. “I suspect that one of the through threads here is this AI hype train of believing that the large language models are a step toward what gets called artificial general intelligence, which humanity needs to be protected from because of this weird fantasy world that it becomes sentient or autonomous and takes over,” Bender said. “I don’t see Nvidia and IBM playing that game so much, so that might be part of why they’re not there.”

Both Bender and Nonnecke pointed to the Federal Trade Commission, which opened an investigation into OpenAI in July, as an effective regulatory player in the absence of federal AI legislation. But neither expects much to come from the voluntary commitments.

“I could imagine that the White House was interested in coming to the table because they might feel stymied by the split Congress, and so they can’t directly do that much in terms of regulation,” Bender said. “They want to look like they’re doing something, but there’s no teeth here. This is not regulation. The title is ‘Ensuring Safe, Secure, and Trustworthy AI,’ and I don’t think it does any of that.”
