{"id":1125759,"date":"2024-06-06T08:48:52","date_gmt":"2024-06-06T12:48:52","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/employees-claim-openai-google-ignoring-risks-of-ai-and-should-give-them-right-to-warn-public-new-york-post\/"},"modified":"2024-06-06T08:48:52","modified_gmt":"2024-06-06T12:48:52","slug":"employees-claim-openai-google-ignoring-risks-of-ai-and-should-give-them-right-to-warn-public-new-york-post","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/employees-claim-openai-google-ignoring-risks-of-ai-and-should-give-them-right-to-warn-public-new-york-post\/","title":{"rendered":"Employees claim OpenAI, Google ignoring risks of AI  and should give them &#8216;right to warn&#8217; public &#8211; New York Post"},"content":{"rendered":"<p><p>    A group of AI whistleblowers claim tech giants like Google and    ChatGPT creator OpenAI are locked in a reckless race to develop        technology that could endanger humanity  and demanded a    right to warn the public in an open letter Tuesday.  <\/p>\n<p>    Signed by current and former employees of OpenAI, Google    DeepMind and Anthropic, the    open letter cautioned that AI companies have strong    financial incentives to avoid effective oversight and cited a    lack of federal rules on developing advanced AI.  <\/p>\n<p>    The workers point to potential risks including the spread of    misinformation, worsening inequality and even loss of control    of autonomous AI systems potentially resulting in human    extinction  especially as OpenAI and other firms pursue    so-called advanced general intelligence, with capacities on par    with or surpassing the human mind.  <\/p>\n<p>    Companies are racing to develop and deploy ever more powerful    artificial intelligence, disregarding the risks and impact of    AI, former OpenAI employee Daniel Kokotajlo, one of the    letters organizers, said in a statement. I decided to leave    OpenAI because I lost hope that they would act responsibly,    particularly as they pursue artificial general intelligence.  <\/p>\n<p>    They and others have bought into the move fast and break    things approach and that is the opposite of what is needed for    technology this powerful and this poorly understood, Kokotajlo    added.  <\/p>\n<p>    Kokotajlo, who joined OpenAI in 2022 as a researcher focused on    charting AI advancements before leaving in April, has placed    the probability that advanced AI will destroy or severely harm    humanity in the future at a whopping 70%, according    tothe New York    Times, which first reported on the letter.  <\/p>\n<p>    He believes theres a 50% chance that researchers will achieve    artificial general intelligence by 2027.  <\/p>\n<p>    The letter drew endorsements by two prominent experts known as    the Godfathers of AI  Geoffrey Hinton, who warned last year    that the     threat of rogue AI was more urgent to humanity than    climate change, and Canadian computer scientist Yoshua Bengio.    Famed British AI researcher Stuart Russell also backed the    letter.  <\/p>\n<p>    The letter asks AI giants to commit to four principles designed    to boost transparency and protect whistleblowers who speak out    publicly.  
Those include an agreement not to retaliate against employees who speak out about safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.

The AI firms are also asked to allow a "culture of open criticism" so long as no trade secrets are disclosed, and to pledge not to enter into or enforce non-disparagement or non-disclosure agreements.

As of Tuesday morning, the letter's signers include a total of 13 AI workers. Of that total, 11 are formerly or currently employed by OpenAI, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.

"There should be ways to share information about risks with independent experts, governments, and the public," said Saunders. "Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements."

Other signers included former Google DeepMind employee Ramana Kumar and current employee Neel Nanda, who formerly worked at Anthropic.

When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards were in place.

"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," OpenAI said in a statement.

"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world," the company added.

Google and Anthropic did not immediately return requests for comment.

The letter was published just days after revelations that OpenAI had dissolved its Superalignment safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that could lead to the disempowerment of humanity or even human extinction.

Two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company. Leike blasted the firm on his way out the door, claiming that safety had "taken a backseat to shiny products."

Elsewhere, former OpenAI board member Helen Toner, who was part of the group that briefly succeeded in ousting Sam Altman as the firm's CEO last year, alleged that he had repeatedly lied during her tenure.

Toner claimed that she and other board members did not learn about ChatGPT's launch in November 2022 from Altman, and instead found out about its debut on Twitter.

OpenAI has since established a new safety oversight committee that includes Altman as it begins training the new version of the AI model that powers ChatGPT.

The company pushed back on Toner's allegations, noting that an outside review had determined that safety concerns were not a factor in Altman's removal.
Source: https://nypost.com/2024/06/04/business/openai-google-ignoring-risks-in-race-for-advanced-ai-should-allow-right-to-warn-public-employees/