Why Hawaii Should Take The Lead On Regulating Artificial … – Honolulu Civil Beat

Posted: August 10, 2023 at 7:25 pm

A new state office of AI Safety and Regulation could take a risk-based approach to regulating various AI products.

Not a day passes without a major news headline on the great strides being made on artificial intelligence and warnings from industry insiders, academics and activists about the potentially very serious risks from AI.

A 2023 survey of AI experts found that 36% fear that AI development may result in a nuclear-level catastrophe. Almost 28,000 people have signed an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a public policy lawyer and also a researcher in consciousness (I have a part-time position at UC Santa Barbara's META Lab), I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

Why are we all so concerned? In short: AI development is going way too fast and it's not being regulated.

The key issue is the profoundly rapid improvement in the new crop of advanced chatbots, or what are technically called large language models, such as ChatGPT, Bard, Claude 2 and many others coming down the pike.

The pace of improvement in these AIs is truly impressive. This rapid acceleration promises to soon result in artificial general intelligence, which is defined as AI that is as good as or better than humans at almost anything a human can do.

When AGI arrives, possibly in the near future but possibly in a decade or more, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google's AlphaZero AI learned in 2017 how to play chess better than even the very best human or other AI chess players, just nine hours after it was first turned on. It achieved this feat by playing itself millions of times over.

In testing, GPT-4 performed better than 90% of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10% for the previous GPT-3.5 version, which was trained on a smaller data set. Its developers found similar improvements on dozens of other standardized tests.

Most of these tests are tests of reasoning, not of regurgitated knowledge. Reasoning is perhaps the hallmark of general intelligence, so even today's AIs are showing significant signs of general intelligence.

This pace of change is why AI researcher Geoffrey Hinton, who spent many years at Google before departing, told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation "crucial." But Congress has done almost nothing on AI since then, and the White House recently issued a letter applauding a purely voluntary approach adopted by the major AI development companies like Google and OpenAI.

A voluntary approach on regulating AI safety is like asking oil companies to voluntarily ensure their products keep us safe from climate change.

With the AI explosion underway now, and with artificial general intelligence perhaps very close, we may have just one chance to get it right in terms of regulating AI to ensure it is safe.

I'm working with Hawaii state legislators to create a new Office of AI Safety and Regulation because the threat is so immediate that it requires significant and rapid action. Congress is working on AI safety issues, but it seems that Congress is simply incapable of acting rapidly enough given the scale of this threat.

The new office would follow the precautionary principle, placing the burden on AI developers to demonstrate that their products are safe before they are allowed to be used in Hawaii. The current approach by regulators is to allow AI companies to simply release their products to the public, where they're being adopted at record speed, with literally no proof of safety.


The new Hawaii office of AI Safety and Regulation would then take a risk-based approach to regulating various AI products. This means that the office staff, with public input, would assess the potential dangers of each AI product type and would impose regulations based on the potential risk. So less risky products would be subject to lighter regulation and more risky AI products would face more burdensome regulation.

My hope is that this approach will help keep Hawaii safe from the more extreme dangers posed by AI, which another recent open letter, signed by hundreds of AI industry leaders and academics, warned should be considered as dangerous as nuclear war or pandemics.

Hawaii can and should lead the way on a state-level approach to regulating these dangers. We can't afford to wait for Congress to act, and it is all but certain that anything Congress adopts will be far too little, too late.
