{"id":1125749,"date":"2024-06-06T08:48:46","date_gmt":"2024-06-06T12:48:46","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/the-dark-side-of-ai-financial-gains-lead-to-oversight-evasion-say-insiders-cmswire\/"},"modified":"2024-06-06T08:48:46","modified_gmt":"2024-06-06T12:48:46","slug":"the-dark-side-of-ai-financial-gains-lead-to-oversight-evasion-say-insiders-cmswire","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/the-dark-side-of-ai-financial-gains-lead-to-oversight-evasion-say-insiders-cmswire\/","title":{"rendered":"The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders &#8211; CMSWire"},"content":{"rendered":"<p>The Gist<\/p>\n<p>Leading artificial intelligence companies avoid effective oversight because of money and operate without sufficient accountability to government or other industry standards, former and current employees said in a letter published today.<\/p>\n<p>In other words, they get away with a lot, and that's not great news for a technology that comes with risks including human extinction.<\/p>\n<p>\"We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public,\" the group wrote in the letter titled \"A Right to Warn about Advanced Artificial Intelligence.\" \"However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.\"<\/p>\n<p>The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. It was also endorsed by AI powerhouses Yoshua Bengio, Geoffrey Hinton and Stuart Russell. 
<\/p>\n<p>While the group believes in the potential of AI technology to deliver unprecedented benefits to humanity, it says risks include:<\/p>\n<p>\"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,\" the group wrote. \"However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.\"<\/p>\n<p>The list of employees who shared their names (others were listed anonymously) includes: Jacob Hilton, formerly OpenAI; Daniel Kokotajlo, formerly OpenAI; Ramana Kumar, formerly Google DeepMind; Neel Nanda, currently Google DeepMind, formerly Anthropic; William Saunders, formerly OpenAI; Carroll Wainwright, formerly OpenAI; and Daniel Ziegler, formerly OpenAI.<\/p>\n<p>This isn't the first time Hilton has spoken publicly about his former company. And he was pretty vocal today on X as well.<\/p>\n<p>Kokotajlo, who worked on OpenAI's governance team, quit last month and was vocal about it in a public forum as well. He said he \"quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI (artificial general intelligence).\" Saunders, also on the governance team, departed along with Kokotajlo.<\/p>\n<p>Wainwright's time at OpenAI dates back at least to the debut of ChatGPT. Ziegler, according to his LinkedIn profile, was with OpenAI from 2018 to 2021.<\/p>\n<p>Related Article: Musk, Wozniak and Thousands of Others: 'Pause Giant AI Experiments'<\/p>\n<p>Leading AI companies won't give up critical information surrounding the development of AI technologies on their own, according to this group. 
Today, it's up to current and former employees, rather than governments, to hold them accountable to the public.<\/p>\n<p>\"Yet,\" the group wrote, \"broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.\"<\/p>\n<p>These employees fear various forms of retaliation, given the history of such cases across the industry.<\/p>\n<p>Related Article: OpenAI Names Sam Altman CEO 5 Days After It Fired Him<\/p>\n<p>Here's the gist of what this group calls on leading AI companies to do:<\/p>\n<p>AI companies should not:<\/p>\n<p>AI companies should:<\/p>\n<p>OpenAI had no public response to the group's letter. In its most recent tweet, it shared its post about deceptive uses of AI.<\/p>\n<p>\"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content,\" the company wrote May 30. \"That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.\"<\/p>\n<p>Have a tip to share with our editorial team? 
Drop us a line:<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.cmswire.com\/digital-experience\/employees-say-ai-companies-dodge-effective-oversight-threaten-humanity\/\" title=\"The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders - CMSWire\">The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders - CMSWire<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Gist Leading artificial intelligence companies avoid effective oversight because of money and operate without sufficient accountability to government or other industry standards, former and current employees said in a letter published today. In other words, they get away with a lot, and that's not great news for a technology that comes with risks including human extinction. \"We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public,\" the group wrote in the letter titled \"A Right to Warn about Advanced Artificial Intelligence.\" \"However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.\" The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/the-dark-side-of-ai-financial-gains-lead-to-oversight-evasion-say-insiders-cmswire\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1214666],"tags":[],"class_list":["post-1125749","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1125749"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1125749"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1125749\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1125749"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1125749"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1125749"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}