Tech experts agree it's time to regulate artificial intelligence – if only it were that simple

AI2 CEO Oren Etzioni speaks at the Technology Alliance's AI Policy Matters Summit. (GeekWire Photo / Monica Nickelsburg)

Artificial intelligence is here, it's just the beginning, and it's time to start thinking about how to regulate it.

Those were the takeaways from the Technology Alliance's AI Policy Matters Summit, a Seattle event that convened experts and government officials for a conversation about artificial intelligence. Many of those experts agreed that the government should start establishing guardrails to defend against malicious or negligent uses of artificial intelligence. But determining what shape those regulations should take is no easy feat.

"It's not even clear what the difference is between AI and software," said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, on stage at the event. "Where does something cease to be a software program and become an AI program? Google, is that an AI program? It uses a lot of AI in it. Or is Google software? How about Netflix recommendations? Should we regulate that? These are very tricky topics."

Regulations written now will also have to be nimble enough to keep up with the evolving technology, according to Heather Redman, co-founder of the venture capital firm Flying Fish Ventures.

"We've got a 30-40 year technology arc here and we're probably in year five, so we can't do a regulation that is going to fix it today," she said during the event. "We have to make it better and go to the next level next year and the next level the year after that."

With those challenges in mind, Etzioni and Redman recommend regulations that are tied to specific use cases of artificial intelligence, rather than broad rules for the technology. Laws should be targeted to areas like AI-enabled weapons and autonomous vehicles, they said.

"My suggestion was to identify particular applications and regulate those using existing regulatory regimes and agencies," Etzioni said. "That both allows us to move faster and also be more targeted in our application of regulations, using a scalpel rather than a sledgehammer."

He believes the rules should include a mandatory kill switch on all AI programs and requirements that AI notify users when they are not interacting with a human. Etzioni also stressed the importance of humans taking responsibility for autonomous systems, though it isn't clear whether the manufacturer or user of the technology will be liable.

"Let's say my car ran somebody over," he said. "I shouldn't be able to say, 'my dog ate my homework. Hey, I didn't do it, it was my AI car. It's an autonomous vehicle.' We have to take responsibility for our technology. We have to be liable for it."

Redman also sees the coming tide of AI regulation as a business opportunity for startups seeking to break into the industry. Her venture capital firm is inundated with startups pitching an "AI and ML first" approach, but Redman said there are two other related fields, or "stacks" as she describes them, that companies should be exploring.

"If you talk to somebody on Wall Street, they don't care what tech stack they're running their trading on; they're looking at new evolutions in law and policy as big opportunities to build new businesses or things that will kill existing businesses," she said.

"From a startup perspective, if you're not thinking about the law and policy stack as much as you're thinking about the tech stack, you're making a mistake," Redman added.

But progress toward a regulatory framework has been slow at the local and federal level. In the last legislative session, Washington state almost became one of the first to regulate facial recognition, the controversial technology that is pushing the artificial intelligence debate forward. But the bill died in the state House. Lawmakers plan to introduce data privacy and facial recognition bills again next session.

Redman said she's disappointed Washington state wasn't a first-mover on AI regulation because the state is home to two of the tech giants consumers trust most with their data: Amazon and Microsoft. Amazon is in the political hot seat along with many of its tech industry peers, but the Seattle tech giant has not been implicated in the types of data privacy scandals plaguing Facebook.

"We are the home of trusted tech," Redman said, "and we need to lead on the regulatory frameworks for tech."
