{"id":1116637,"date":"2023-07-29T20:46:21","date_gmt":"2023-07-30T00:46:21","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/the-problem-with-big-techs-voluntary-ai-safety-commitments-emerging-tech-brew\/"},"modified":"2023-07-29T20:46:21","modified_gmt":"2023-07-30T00:46:21","slug":"the-problem-with-big-techs-voluntary-ai-safety-commitments-emerging-tech-brew","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/big-tech\/the-problem-with-big-techs-voluntary-ai-safety-commitments-emerging-tech-brew\/","title":{"rendered":"The problem with Big Tech&#8217;s voluntary AI safety commitments &#8211; Emerging Tech Brew"},"content":{"rendered":"<p>The European Union might be making strides toward regulating artificial intelligence (with passage of the AI Act expected by the end of the year), but the US government has largely failed to keep pace with the global push to put guardrails around the technology.<\/p>\n<p>The White House, which said it will continue to take executive action and pursue bipartisan legislation, introduced an interim measure last week in the form of voluntary commitments for safe, secure, and transparent development and use of AI technology.<\/p>\n<p>Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to prioritize research on societal risks posed by AI systems and incent third-party discovery and reporting of issues and vulnerabilities, among other things.<\/p>\n<p>But according to academic experts, the agreements fall far short. 
<\/p>\n<p>&#8220;The elephant in the room here is that the United States continues to push forward with voluntary measures, whereas the European Union will pass the most comprehensive piece of AI legislation that we've seen to date,&#8221; Brandie Nonnecke, founding director of UC Berkeley's CITRIS Policy Lab, told Tech Brew.<\/p>\n<p>&#8220;[These companies] want to be there in helping to essentially develop the test by which they will be graded,&#8221; Nonnecke said. That, combined with cuts to trust and safety teams in recent months, is cause for skepticism, she added.<\/p>\n<p>Emily Bender, a University of Washington professor who specializes in computational linguistics and natural language processing, said the vagueness of the commitments could be a reflection of what the companies were willing to agree to (the agreements' voluntary nature at work).<\/p>\n<p>&#8220;We really shouldn't have the government compromising with companies,&#8221; she said. &#8220;The government should act in the public interest and regulate.&#8221;<\/p>\n<p>Bender also voiced concerns about the measure's approach to potential future risks, pointing to commitments to give significant attention to &#8220;the effects of system interaction and tool use&#8221; and the capacity for models to make copies of themselves or self-replicate.<\/p>\n<p>&#8220;And that to me doesn't sound like grounded thinking about actual risks,&#8221; she added. &#8220;I suspect that one of the through threads here is this AI hype train of believing that the large language models are a step toward what gets called artificial general intelligence, which humanity needs to be protected from because of this weird fantasy world that it becomes sentient or autonomous and takes over,&#8221; Bender said. 
&#8220;I don't see Nvidia and IBM playing that game so much, so that might be part of why they're not there.&#8221;<\/p>\n<p>Both Bender and Nonnecke pointed to the Federal Trade Commission, which opened an investigation into OpenAI in July, as an effective regulatory player in the absence of federal AI legislation. But neither expects much to come from the voluntary commitments.<\/p>\n<p>&#8220;I could imagine that the White House was interested in coming to the table because they might feel stymied by the split Congress, and so they can't directly do that much in terms of regulation,&#8221; Bender said. &#8220;They want to look like they're doing something, but there's no teeth here. This is not regulation. The title is &#8216;Ensuring Safe, Secure, and Trustworthy AI,&#8217; and I don't think it does any of that.&#8221;<\/p>\n<p>See the rest here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.emergingtechbrew.com\/stories\/2023\/07\/25\/us-voluntary-ai-safety-commitment\" title=\"The problem with Big Tech's voluntary AI safety commitments - Emerging Tech Brew\">The problem with Big Tech's voluntary AI safety commitments - Emerging Tech Brew<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The European Union might be making strides toward regulating artificial intelligence (with passage of the AI Act expected by the end of the year), but the US government has largely failed to keep pace with the global push to put guardrails around the technology. The White House, which said it will continue to take executive action and pursue bipartisan legislation, introduced an interim measure last week in the form of voluntary commitments for safe, secure, and transparent development and use of AI technology. 
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to prioritize research on societal risks posed by AI systems and incent third-party discovery and reporting of issues and vulnerabilities, among other things <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/big-tech\/the-problem-with-big-techs-voluntary-ai-safety-commitments-emerging-tech-brew\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[450977],"tags":[],"class_list":["post-1116637","post","type-post","status-publish","format-standard","hentry","category-big-tech"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116637"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1116637"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116637\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1116637"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1116637"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheis
m-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1116637"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}