{"id":169391,"date":"2024-05-25T02:42:21","date_gmt":"2024-05-25T06:42:21","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/openai-departures-why-cant-former-employees-talk-but-the-new-chatgpt-release-can-vox-com\/"},"modified":"2024-08-18T12:48:28","modified_gmt":"2024-08-18T16:48:28","slug":"openai-departures-why-cant-former-employees-talk-but-the-new-chatgpt-release-can-vox-com","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/openai-departures-why-cant-former-employees-talk-but-the-new-chatgpt-release-can-vox-com.php","title":{"rendered":"OpenAI departures: Why can't former employees talk, but the new ChatGPT release can? &#8211; Vox.com"},"content":{"rendered":"<p><p>Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.<\/p>\n<p>On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.<\/p>\n<p>It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you've seen a certain 2013 Spike Jonze film. \"Her,\" tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.<\/p>\n<p>But the product release of ChatGPT 4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company's co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (who we put on the Future Perfect 50 list last year).<\/p>\n<p>The resignations didn't come as a total surprise. 
Sutskever had been involved in the boardroom revolt that led to Altman's temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman's return, but he's been mostly absent from the company since, even as other members of OpenAI's policy, alignment, and safety teams have departed.<\/p>\n<p>But what has really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying \"I'm confident that OpenAI will build AGI that is both safe and beneficial ... I am excited for what comes next.\"<\/p>\n<p>Leike ... didn't. His resignation message was simply: \"I resigned.\" After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.<\/p>\n<p>Questions arose immediately: Were they forced out? Is this delayed fallout of Altman's brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.<\/p>\n<p>It turns out there's a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.<\/p>\n<p>If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. 
One former employee, Daniel Kokotajlo, who posted that he quit OpenAI \"due to losing confidence that it would behave responsibly around the time of AGI,\" has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.<\/p>\n<p>While nondisclosure agreements aren't unusual in highly competitive Silicon Valley, putting an employee's already-vested equity at risk for declining or violating one is. For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.<\/p>\n<p>OpenAI did not respond to a request for comment in time for initial publication. After publication, an OpenAI spokesperson sent me this statement: \"We have never canceled any current or former employees' vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.\"<\/p>\n<p>Sources close to the company I spoke to told me that this represented a change in policy as they understood it. When I asked the OpenAI spokesperson if that statement represented a change, they replied, \"This statement reflects reality.\"<\/p>\n<p>On Saturday afternoon, a little more than a day after this article published, Altman acknowledged in a tweet that there had been a provision in the company's off-boarding documents about potential equity cancellation for departing employees, but said the company was in the process of changing that language.<\/p>\n<p>All of this is highly ironic for a company that initially advertised itself as OpenAI, that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner.
<\/p>\n<p>OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason why OpenAI has become so closed.<\/p>\n<p>OpenAI has spent a long time occupying an unusual position in tech and policy circles. Their releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.<\/p>\n<p>What sets OpenAI apart is the ambition of its mission: to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity. Many of its employees believe that this aim is within reach; that with perhaps one more decade (or even less), and a few trillion dollars, the company will succeed at developing AI systems that make most human labor obsolete.<\/p>\n<p>Which, as the company itself has long said, is as risky as it is exciting.<\/p>\n<p>\"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems,\" a recruitment page for Leike and Sutskever's team at OpenAI states. \"But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade.\"<\/p>\n<p>Naturally, if artificial superintelligence in our lifetimes is possible (and experts are divided), it would have enormous implications for humanity. 
OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives and bring AGI about for the benefit of all. And they've said they are willing to do that even if that requires slowing down development, missing out on profit opportunities, or allowing external oversight.<\/p>\n<p>\"We don't think that AGI should be just a Silicon Valley thing,\" OpenAI co-founder Greg Brockman told me in 2019, in the much calmer pre-ChatGPT days. \"We're talking about world-altering technology. And so how do you get the right representation and governance in there? This is actually a really important focus for us and something we really want broad input on.\"<\/p>\n<p>OpenAI's unique corporate structure (a capped-profit company ultimately controlled by a nonprofit) was supposed to increase accountability. \"No one person should be trusted here. I don't have super-voting shares. I don't want them,\" Altman assured Bloomberg's Emily Chang in 2023. \"The board can fire me. I think that's important.\" (As the board found out last November, it could fire Altman, but it couldn't make the move stick. After his firing, Altman made a deal to effectively take the company to Microsoft, before being ultimately reinstated with most of the board resigning.)<\/p>\n<p>But there was no stronger sign of OpenAI's commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed. 
When I said to Brockman in that 2019 interview, \"You guys are saying, 'We're going to build a general artificial intelligence,'\" Sutskever cut in. \"We're going to do everything that can be done in that direction while also making sure that we do it in a way that's safe,\" he told me.<\/p>\n<p>Their departure doesn't herald a change in OpenAI's mission of building artificial general intelligence; that remains the goal. But it almost certainly heralds a change in OpenAI's interest in safety work; the company hasn't announced who, if anyone, will lead the superalignment team.<\/p>\n<p>And it makes it clear that OpenAI's concern with external oversight and transparency couldn't have run all that deep. If you want external oversight and opportunities for the rest of the world to play a role in what you're doing, making former employees sign extremely restrictive NDAs doesn't exactly follow.<\/p>\n<p>This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?<\/p>\n<p>The company's leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world's input into how to do it justly and wisely.<\/p>\n<p>But when there's real money at stake, and there are astounding sums of real money at stake in the race to dominate AI, it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees, those who know the most about what's happening inside OpenAI, can't tell the rest of the world what's going on. 
<\/p>\n<p>The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It's hard to exercise accountability over a company whose former employees are restricted to saying \"I resigned.\"<\/p>\n<p>ChatGPT's new cute voice may be charming, but I'm not feeling especially enamored.<\/p>\n<p>Update, May 18, 7:30 pm ET: This story was published on May 17 and has been updated multiple times, most recently to include Sam Altman's response on social media.<\/p>\n<p>A version of this story originally appeared in the Future Perfect newsletter. Sign up here!<\/p>\n<p>Continued here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.vox.com\/future-perfect\/2024\/5\/17\/24158478\/openai-departures-sam-altman-employees-chatgpt-release\" title=\"OpenAI departures: Why can't former employees talk, but the new ChatGPT release can? 
- Vox.com\">OpenAI departures: Why can't former employees talk, but the new ChatGPT release can? - Vox.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/openai-departures-why-cant-former-employees-talk-but-the-new-chatgpt-release-can-vox-com.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234933],"tags":[],"class_list":["post-169391","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169391"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=169391"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169391\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=169391"}],"wp:term":[{"taxonomy":"catego
ry","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=169391"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=169391"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}