A quick guide to the most important AI law you've never heard of – MIT Technology Review

Posted: May 15, 2022 at 10:12 pm

What about outside the EU?

The GDPR, the EU's data protection regulation, is the bloc's most famous tech export, and it has been copied everywhere from California to India.

The approach the EU has taken, which targets the riskiest uses of AI, is one that most developed countries agree on. If Europe can create a coherent way to regulate the technology, it could serve as a template for other countries hoping to do the same.

In complying with the EU AI Act, US companies will also end up raising their standards of transparency and accountability for American consumers, says Marc Rotenberg, who heads the Center for AI and Digital Policy, a nonprofit that tracks AI policy.

The bill is also being watched closely by the Biden administration. The US is home to some of the world's biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who is leading the White House's AI effort, have welcomed Europe's effort to regulate AI.

"This is a sharp contrast to how the US viewed the development of GDPR, which at the time people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it," says Rotenberg.

Despite some inevitable caution, the US has good reasons to welcome the legislation. It's extremely anxious about China's growing influence in tech. The official American stance is that retaining Western dominance in tech is a matter of whether democratic values prevail. It wants to keep the EU, a like-minded ally, close.

Some of the bill's requirements are technically impossible to comply with at present. The first draft requires that data sets be free of errors and that humans be able to fully understand how AI systems work. But the data sets used to train AI systems are vast, and having a human verify that they are entirely error-free would take thousands of hours of work, if such verification were even possible. And today's neural networks are so complex that even their creators don't fully understand how they arrive at their conclusions.

Tech companies are also deeply uncomfortable with requirements to give external auditors or regulators access to their source code and algorithms in order to enforce the law.
