{"id":167681,"date":"2023-11-24T02:48:45","date_gmt":"2023-11-24T07:48:45","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/what-the-openai-drama-means-for-ai-progress-and-safety-nature-com\/"},"modified":"2024-08-18T12:46:59","modified_gmt":"2024-08-18T16:46:59","slug":"what-the-openai-drama-means-for-ai-progress-and-safety-nature-com","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/what-the-openai-drama-means-for-ai-progress-and-safety-nature-com.php","title":{"rendered":"What the OpenAI drama means for AI progress and safety &#8211; Nature.com"},"content":{"rendered":"<p><p>OpenAI fired its charismatic chief executive, Sam Altman, on 17 November, but has now reinstated him. Credit: Justin Sullivan\/Getty<\/p>\n<p>OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.<\/p>\n<p>The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.<\/p>\n<p>“The push to retain dominance is leading to toxic competition. It's a race to the bottom,” says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.<\/p>\n<p>Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft.
After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.<\/p>\n<p>The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was “not consistently candid in his communications with the board”, and later adding that the decision “had nothing to do with malfeasance or anything related to our financial, business, safety or security\/privacy practice”.<\/p>\n<p>But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to ensure that artificial general intelligence “benefits all of humanity”.<\/p>\n<p>OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. “In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims,” says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.<\/p>\n<p>Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to “superalignment”, a four-year project attempting to ensure that future superintelligences work for the good of humanity.
<\/p>\n<p>It's unclear whether Altman and Sutskever are at odds about the speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.<\/p>\n<p>With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.<\/p>\n<p>It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.<\/p>\n<p>OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that has emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.<\/p>\n<p>OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.<\/p>\n<p>The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead.
Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.<\/p>\n<p>West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance between these controlling giants.<\/p>\n<p>Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. “If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes,” he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)<\/p>\n<p>OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. “The jury is very much out on that front,” says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on a timescale of 30, 50 or maybe 100 years. “Right now, I think we'll probably get it in 5–20 years,” he says.<\/p>\n<p>The imminent dangers of AI are related to it being used as a tool by human bad actors: people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons [1]. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.
<\/p>\n<p>In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed, in line with OpenAI's superalignment mission, to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.<\/p>\n<p>The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.<\/p>\n<p>West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. “Regulators for a very long time have taken a very light touch with this market,” says West. “We need to start by enforcing the laws we have right now.”
<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Continued here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.nature.com\/articles\/d41586-023-03700-4\" title=\"What the OpenAI drama means for AI progress and safety - Nature.com\">What the OpenAI drama means for AI progress and safety - Nature.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> OpenAI fired its charismatic chief executive, Sam Altman, on 17 November, but has now reinstated him. Credit: Justin Sullivan\/Getty OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/what-the-openai-drama-means-for-ai-progress-and-safety-nature-com.php\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-167681","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/167681"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=167681"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/167681\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=167681"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=167681"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=167681"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}