{"id":1125757,"date":"2024-06-06T08:48:50","date_gmt":"2024-06-06T12:48:50","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/what-arent-the-openai-whistleblowers-saying-platformer\/"},"modified":"2024-06-06T08:48:50","modified_gmt":"2024-06-06T12:48:50","slug":"what-arent-the-openai-whistleblowers-saying-platformer","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/what-arent-the-openai-whistleblowers-saying-platformer\/","title":{"rendered":"What aren&#8217;t the OpenAI whistleblowers saying? &#8211; Platformer"},"content":{"rendered":"<p><p>    Eleven current and former employees of OpenAI, along with two    more from Google DeepMind, posted an open    letter today stating that they are unable to voice    concerns about risks created by their employees due to    confidentiality agreements. Today lets talk about what they    said, what they left out, and why lately the AI safety    conversation feels like its going nowhere.  <\/p>\n<p>    Heres a dynamic weve seen play out a few times now at    companies including     Meta,     Google, and     Twitter. First, in a bid to address potential harms    created by their platforms, companies hire idealistic workers    and charge them with building safeguards into their systems.    For a while, the work of these teams gets prioritized. But over    time, executives enthusiasm wanes, commercial incentives take    over, and the team is gradually de-funded.  <\/p>\n<p>    When those roadblocks go up, some of the idealistic employees    will speak out, either to a reporter like me, or via the sort    of open letter that the AI workers published today. And the    company responds by reorganizing the team out of existence,    while putting out a statement saying that whatever that team    used to work on is now everyones responsibility.  <\/p>\n<p>    At Meta, this process gave us the whistleblower Frances Haugen.    On Googles AI ethics team, a slightly different version of the    story played out after the firing of researcher Timnit Gebru.    And in 2024, the story came to the AI industry.  <\/p>\n<p>    OpenAI arguably set itself up for this moment more than those    other tech giants. After all, it was established not as a    traditional for-profit enterprise, but as a nonprofit research    lab devoted to safely building an artificial general    intelligence.  <\/p>\n<p>    OpenAIs status as a relatively obscure nonprofit changed    forever in November 2022. Thats when     it released ChatGPT, a chatbot based on the latest    version of its large language model, which by some estimates    soon became     the fastest-growing consumer product in history.  <\/p>\n<p>    ChatGPT took a technology that had been exclusively the    province of nerds and put it in the hands of everyone from    elementary school children to state-backed foreign influence    operations. And OpenAI soon barely resembled the nonprofit that    was founded out of a fear that AI poses an existential risk to    humanity.  <\/p>\n<p>    This OpenAI placed a premium on speed. It pushed the frontier    forward     with tools like plugins, which connected ChatGPT to    the wider internet. It aggressively courted developers. Less    than a year after ChatGPTs release, the company a    for-profit subsidiary of its nonprofit parent  was valued at    $90 billion.  
That transformation, led by CEO Sam Altman, gave many in the company whiplash. And it was at the heart of the tensions that led the nonprofit board to fire Altman last year, for reasons related to governance.

The five-day interregnum between Altman's firing and his return marked a pivotal moment for the company. The board could have recommitted to its original vision of slow, cautious development of powerful AI systems. Or it could endorse the post-ChatGPT version of OpenAI, which closely resembled a traditional Silicon Valley venture-backed startup.

Almost immediately, it became clear that a vast majority of employees preferred working at a more traditional startup. Among other things, that startup's commercial prospects meant that their (unusual) equity in the company would be worth millions of dollars. The vast majority of OpenAI employees threatened to quit if Altman didn't return.

And so Altman returned. Most of the old board left. New, more business-minded board members replaced them. And that board has stood by Altman in the months that followed, even as questions mount about his complex business dealings and conflicts of interest.

Most employees seem content under the new regime; positions at OpenAI are still highly sought after. But like Meta and Google before it, OpenAI had its share of conscientious objectors. And increasingly, we're hearing what they think.

The latest wave began last month when OpenAI co-founder Ilya Sutskever, who initially backed Altman's firing and who had focused on AI safety efforts, quit the company. He was followed out the door by Jan Leike, who led the superalignment team, and a handful of other employees who worked on safety.

Then on Tuesday a new group of whistleblowers came forward to complain. Here's handsome podcaster Kevin Roose in the New York Times:

"They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.

"'OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,' said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers."

Anyone looking for jaw-dropping allegations from the whistleblowers will likely leave disappointed. Kokotajlo's sole specific complaint in the article is that some employees believed Microsoft had released a new version of GPT-4 in Bing without proper testing; Microsoft denies that this happened.

But the accompanying letter offers one possible explanation for why the charges feel so thin: employees are forbidden from saying more by various agreements they signed as a condition of working at the company. (The company has said it is removing some of the more onerous language from its agreements, after Vox reported on them last month.)

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," an OpenAI spokeswoman told the Times.
"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

The company also created a whistleblower hotline for employees to anonymously voice their concerns.

So how should we think about this letter?

I imagine that it will be a Rorschach test for whoever reads it, and what they see will depend on what they think of the AI safety movement in general.

For those who believe that AI poses existential risk, I imagine this letter will provide welcome evidence that at least some employees inside the big AI makers are taking those risks seriously. And for those who don't, I imagine it will provide more ammunition for the argument that the AI doomers are once again warning about dire outcomes without providing any compelling evidence for their beliefs.

As a journalist, I find myself naturally sympathetic to people inside companies who warn about problems that haven't happened yet. Journalism often serves a similar purpose, and every once in a while, it can help prevent those problems from occurring. (This can often make the reporter look foolish, since they spent all that time warning about a scenario that never unfolded, but that's a subject for another day.)

At the same time, there's no doubt that the AI safety argument has begun to feel a bit tedious over the past year, when the harms caused by large language models have been funnier than they have been terrifying. Last week, when OpenAI put out the first account of how its products are being used in covert influence operations, there simply wasn't much there to report.

We've seen plenty of problematic misuse of AI, particularly deepfakes in elections and in schools. (And of women in general.) And yet people who sign letters like the one released today fail to connect high-level hand-wringing about their companies to the products and policy decisions that their companies make. Instead, they speak through opaque open letters that have surprisingly little to say about what safe development might actually look like in practice.

For a more complete view of the problem, I preferred another (and much longer) piece of writing that came out Tuesday. Leopold Aschenbrenner, who worked on OpenAI's superalignment team and was reportedly fired for leaking in April, published a 165-page paper today laying out a path from GPT-4 to superintelligence, the dangers it poses, and the challenge of aligning that intelligence with human intentions.

We've heard a lot of this before, and the hypotheses remain as untestable (for now) as they always have. But I find it difficult to read the paper and not come away believing that AI companies ought to prioritize alignment research, and that current and former employees ought to be able to talk about the risks they are seeing.

"Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered," Aschenbrenner concludes. "As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty."
And if those who feel the weight of what is coming work for an AI company, it seems important that they be able to talk about what they're seeing now, and in the open.

For more good posts every day, follow Casey's Instagram stories.

Send us tips, comments, questions, and situational awareness: casey@platformer.news and zoe@platformer.news.

Excerpt from: What aren't the OpenAI whistleblowers saying? - Platformer (https://www.platformer.news/tuesday-newsletter/)