{"id":1116932,"date":"2023-08-10T19:25:11","date_gmt":"2023-08-10T23:25:11","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/why-hawaii-should-take-the-lead-on-regulating-artificial-honolulu-civil-beat\/"},"modified":"2023-08-10T19:25:11","modified_gmt":"2023-08-10T23:25:11","slug":"why-hawaii-should-take-the-lead-on-regulating-artificial-honolulu-civil-beat","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/why-hawaii-should-take-the-lead-on-regulating-artificial-honolulu-civil-beat\/","title":{"rendered":"Why Hawaii Should Take The Lead On Regulating Artificial &#8230; &#8211; Honolulu Civil Beat"},"content":{"rendered":"<p><p>      A new state office of AI Safety and Regulation could take a      risk-based approach to regulating various AI products.    <\/p>\n<p>    Not a day passes without a    major news headline on the great strides being made on    artificial intelligence  and warnings from industry insiders,    academics and activists about the potentially very serious    risks from AI.  <\/p>\n<p>    A     2023survey of AI expertsfound that 36% fear    that AI development may result in a nuclear-level    catastrophe. Almost 28,000 people have signed on to     anopen letterwritten by the Future of Life    Institute, including Steve Wozniak, Elon Musk, the CEOs of    several AI companies and many other prominent technologists,    asking for a six-month pause or a moratorium on new advanced AI    development.  <\/p>\n<p>    As a public policy lawyer and also a researcher in    consciousness (I have a part-time position at UC Santa    Barbaras META Lab I share these strong concerns about the    rapid development of AI, and I am a co-signer of the Future of    Life open letter.  <\/p>\n<p>    Why are we all so concerned? In short: AI development is going    way too fast and its not being regulated.  
<\/p>\n<p>    The key issue is the profoundly rapid improvement in the new    crop of advanced chatbots, or what are technically called    large language models such as ChatGPT, Bard, Claude 2, and    many others coming down the pike.  <\/p>\n<p>    The pace of improvement in these AIs is truly impressive. This    rapidaccelerationpromises    to soon result in artificial general intelligence, which is    defined as AI that is as good or better on almost anything a    human can do.  <\/p>\n<p>    When AGI arrives, possibly in the near future but possibly in a    decade or more, AI will be able toimproveitself    with no human intervention. It will do this in the same way    that, for example, GooglesAlphaZeroAI    learned in 2017 how to play chess better than even the very    best human or other AI chess players in just nine hours from    when it was first turned on. It achieved this feat by playing    itself millions of times over.  <\/p>\n<p>    In testing GPT-4, it performed better than90%of    human test takers on the     Uniform Bar Exam, a standardized test used to certify    lawyers for practice in many states. That figure was up from    just 10% in the previous GPT-3.5 version, which was trained on    a smaller data set. They found similar improvements in dozens    of other standardized tests.  <\/p>\n<p>    Most of these tests are tests of reasoning, not of regurgitated    knowledge. Reasoning is perhaps the hallmark of general    intelligence so even todays AIs are showing significant signs    of general intelligence.  <\/p>\n<p>    This pace of change is why AI researcher Geoffrey Hinton,    formerly with Google for a number of    years,toldtheNew York Times: Look at how it    was five years ago and how it is now. Take the difference and    propagate it forwards. Thats scary.  <\/p>\n<p>    In a mid-May Senate hearing on the potential of AI, Sam Altman,    the head of OpenAI called regulation crucial. 
But Congress has done almost nothing on AI since then, and the White House recently issued a letter applauding a purely voluntary approach adopted by the major AI development companies like Google and OpenAI.<\/p>\n<p>A voluntary approach to regulating AI safety is like asking oil companies to voluntarily ensure their products keep us safe from climate change.<\/p>\n<p>With the AI explosion underway now, and with artificial general intelligence perhaps very close, we may have just one chance to get it right in terms of regulating AI to ensure it's safe.<\/p>\n<p>I'm working with Hawaii state legislators to create a new Office of AI Safety and Regulation because the threat is so immediate that it requires significant and rapid action. Congress is working on AI safety issues, but it seems that Congress is simply incapable of acting rapidly enough given the scale of this threat.<\/p>\n<p>The new office would follow the precautionary principle, placing the burden on AI developers to demonstrate that their products are safe before they are allowed to be used in Hawaii. The current approach by regulators is to allow AI companies to simply release their products to the public, where they're being adopted at record speed, with literally no proof of safety.<\/p>\n<p>The new Hawaii Office of AI Safety and Regulation would then take a risk-based approach to regulating various AI products. This means that the office staff, with public input, would assess the potential dangers of each AI product type and would impose regulations based on the potential risk. Less risky products would be subject to lighter regulation, and riskier AI products would face more burdensome regulation.  
<\/p>\n<p>    My hope is that this approach will help to keep Hawaii safe    from the more extreme dangers posed by AI  which another    recent open letter, signed by hundreds of AI industry leaders    and academics, warned should be considered as dangerous as    nuclear war or pandemics.  <\/p>\n<p>    Hawaii can and should lead the way on a state-level approach to    regulating these dangers. We cant afford to wait for Congress    to act and it is all but certain that anything Congress adopts    will be far too little and too late.  <\/p>\n<p>                   Sign                  Up                <\/p>\n<p>                  Sorry. That's an invalid e-mail.                <\/p>\n<p>                  Thanks! We'll send you a confirmation e-mail                  shortly.                <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read more:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.civilbeat.org\/2023\/08\/why-hawaii-should-take-the-lead-on-regulating-artificial-intelligence\/\" title=\"Why Hawaii Should Take The Lead On Regulating Artificial ... - Honolulu Civil Beat\">Why Hawaii Should Take The Lead On Regulating Artificial ... - Honolulu Civil Beat<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> A new state office of AI Safety and Regulation could take a risk-based approach to regulating various AI products. Not a day passes without a major news headline on the great strides being made on artificial intelligence and warnings from industry insiders, academics and activists about the potentially very serious risks from AI. 
A 2023 survey of AI experts found that 36% fear that AI development may result in a nuclear-level catastrophe. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/why-hawaii-should-take-the-lead-on-regulating-artificial-honolulu-civil-beat\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-1116932","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116932"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1116932"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116932\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1116932"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1116932"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1116932"}],"curies":[{"name":"wp","hre
f":"https:\/\/api.w.org\/{rel}","templated":true}]}}