Asking Big Tech to police AI is like turning to 'oil companies to solve climate change,' AI researcher says

Posted: April 18, 2024 at 3:39 pm

Managing artificial intelligence's impact on society isn't the responsibility of private companies, noted the founder of a nonprofit AI research lab. Instead, it's elected governments that should regulate the sector adequately to keep people safe.

Speaking at the Fortune Brainstorm AI London conference on Monday, Connor Leahy, cofounder of EleutherAI, said that responsibility for how transformational technologies affect the public shouldn't be placed on the tech industry.

Companies shouldn't even have to answer society-wide questions about AI, Leahy told Fortune's Ellie Austin.

He explained: "This might be controversial, but it's not the responsibility of oil companies to solve climate change." Instead, he said, it is the role of governments to stop oil companies from causing climate change, or at least to make them pay to clean it up after they've caused the mess.

Rather than guardrails coming from within the industry, they should cascade down from the government level, at least when it comes to society-wide issues, he added.

The boss of EleutherAI, which launched in 2020 and operates primarily through an open Discord server, said responsibility does lie with businesses, however, when it comes to setting expectations of how much AI can do.

At present the tech is "super unreliable," he continued, adding that it does not have "a human level [of] reliability."

Some of the most prominent voices in the tech industry agree with Leahy, with even disruptors in the sector calling on governments to provide safety nets.

Sam Altman, boss of ChatGPT maker OpenAI, told a Senate Judiciary subcommittee in May 2023 that the regulation of AI is essential.

He came out in favor of appropriate safety requirements for AI software, including internal and external testing prior to release, and also urged some kind of licensing and registration regime for AI systems beyond a certain capability threshold.

However, the fired-and-rehired billionaire CEO also called for a governance framework flexible enough to adapt to new technological developments, and said that regulation should balance incentivizing safety with ensuring that people are able to access the technology's benefits.

Likewise, Tesla CEO Elon Musk, who is using AI for everything from the large language model Grok to the humanoid robot Optimus and autonomous driving, said that regulation will be annoying but necessary.

In a conversation with British Prime Minister Rishi Sunak at the U.K. AI Safety Summit, Musk said: "I think we've learned over the years that having a referee is a good thing."

CEOs pining for regulation that will work across multiple markets may have had their wishes granted in recent weeks.

Earlier this month, the U.S. and U.K. governments signed a memorandum of understanding committing to a shared approach to AI safety testing and guidance.

The two governments will work closely with each other and will encourage other nations to join their approach.

U.S. Commerce Secretary Gina Raimondo said at the time: "Our partnership makes clear that we aren't running away from these concerns; we're running at them.

"By working together, we are furthering the long-lasting special relationship between the U.S. and U.K. and laying the groundwork to ensure that we're keeping AI safe both now and in the future."
