{"id":1116768,"date":"2023-08-02T19:10:05","date_gmt":"2023-08-02T23:10:05","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/an-oppenheimer-moment-for-the-progenitors-of-ai-noema-noema-magazine\/"},"modified":"2023-08-02T19:10:05","modified_gmt":"2023-08-02T23:10:05","slug":"an-oppenheimer-moment-for-the-progenitors-of-ai-noema-noema-magazine","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/an-oppenheimer-moment-for-the-progenitors-of-ai-noema-noema-magazine\/","title":{"rendered":"An &#8216;Oppenheimer Moment&#8217; For The Progenitors Of AI &#8211; NOEMA &#8211; Noema Magazine"},"content":{"rendered":"<p>Nathan Gardels is the editor-in-chief of Noema Magazine.<\/p>\n<p>The movie director Christopher Nolan says he has spoken to AI scientists who are having an "Oppenheimer moment," fearing the destructive potential of their creation. "I'm telling the Oppenheimer story," he reflected on his biopic of the man, "because I think it's an important story, but also because it's absolutely a cautionary tale." Indeed, some are already comparing OpenAI's Sam Altman to the father of the atomic bomb.<\/p>\n<p>Oppenheimer was called the "American Prometheus" by his biographers because he hacked the secret of nuclear fire from the gods, splitting matter to release horrendous energy he then worried could incinerate civilization.<\/p>\n<p>Altman, too, wonders if he did "something really bad" by advancing generative AI with ChatGPT. He told a Senate hearing, "If this technology goes wrong, it can go quite wrong." Geoffrey Hinton, the so-called "godfather of AI," resigned from Google in May, saying part of him regretted his life's work of building machines that are smarter than humans. He warned that "it is hard to see how you can prevent the bad actors from using it for bad things." Others among his peers have spoken of the "risk of extinction from AI" that ranks with other existential threats such as nuclear war, climate change and pandemics.<\/p>\n<p>For Yuval Noah Harari, generative AI may be no less a shatterer of societies, or "destroyer of worlds" in the phrase Oppenheimer cited from the Bhagavad Gita, than the bomb. This time sapiens have become the gods, siring inorganic offspring that may one day displace their progenitors. In a conversation some years ago, Harari put it this way: "Human history began when men created gods. It will end when men become gods."<\/p>\n<p>As Harari and co-authors Tristan Harris and Aza Raskin explained in a recent essay, "In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.'s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers."<\/p>\n<p>They went on:<\/p>\n<p>"For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.<\/p>\n<p>"Soon we will finally come face to face with Descartes's demon, with Plato's cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away, or even realize it is there."<\/p>\n<p>This prospect of a nonhuman entity writing our narrative so alarms the Israeli historian and philosopher that he urgently advises that sapiens stop and think twice before we relinquish the mastery of our domain to the technology we empower.<\/p>\n<p>"The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it," he, Harris and Raskin warn. "If we wait for the chaos to ensue, it will be too late to remedy it."<\/p>\n<p>Writing in Noema, Google vice president Blaise Agüera y Arcas and colleagues from the Quebec AI Institute don't see the Hollywood scenario of a "Terminator" event, in which miscreant AI goes on a calamitous rampage, anywhere on the near horizon. They worry instead that focusing on an existential threat in the distant future distracts from mitigating the clear and present dangers of AI's disruption of society today.<\/p>\n<p>What worries them most is already at hand before AI becomes superintelligent: mass surveillance, disinformation and manipulation, military misuse of AI and the replacement of whole occupations on a widespread scale.<\/p>\n<p>For this group of scientists and technologists, "Extinction from a rogue AI is an extremely unlikely scenario that depends on dubious assumptions about the long-term evolution of life, intelligence, technology and society. It is also an unlikely scenario because of the many physical limits and constraints a superintelligent AI system would need to overcome before it could 'go rogue' in such a way. There are multiple natural checkpoints where researchers can help mitigate existential AI risk by addressing tangible and pressing challenges without explicitly making existential risk a global priority."<\/p>\n<p>As they see it, extinction "is induced in one of three ways: competition for resources, hunting and over-consumption or altering the climate or their ecological niche such that resulting environmental conditions lead to their demise. None of these three cases apply to AI as it stands."<\/p>\n<p>Above all, "for now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI's existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition."<\/p>\n<p>To assign an infinite cost to the unlikely outcome of extinction would be akin to turning all our technological prowess toward deflecting a one-in-a-million chance of a meteor strike on Earth as the planetary preoccupation. Simply, "existential risk from superintelligent AI does not warrant being a global priority, in line with climate change, nuclear war, and pandemic prevention."<\/p>\n<p>Any dangers, distant or near, that may emerge from competition between humans and budding superintelligence will only be exacerbated by rivalry among nation-states.<\/p>\n<p>This leads to one last thought on the analogy between Sam Altman and Oppenheimer, who in his later years was persecuted, isolated and denied official security clearance because the McCarthyist fever of the early Cold War cast him as a Communist fellow traveler. His crime: opposing the deployment of a hydrogen bomb and calling for working with other nations, including adversaries, to control the use of nuclear weapons.<\/p>\n<p>In a speech to AI scientists in Beijing in June, Altman similarly called for collaboration on how to govern the use of AI. "China has some of the best AI talents in the world," he said. "Controlling advanced AI systems requires the best minds from around the world. With the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher."<\/p>\n<p>One wonders, and worries, how long it will be before Altman's sense of universal scientific responsibility is sucked, like Oppenheimer's, into the maw of the present McCarthy-like anti-China hysteria in Washington. No doubt the fervent atmosphere in Beijing poses the mirror risk for any AI scientist with whom he might collaborate on behalf of the whole of humanity instead of for the dominance of one nation.<\/p>\n<p>At the top of the list of clear and present dangers posed by AI is how it might be weaponized in the U.S.-China conflict. As Harari warns, the time to reckon with such a threat is now, not when it is already realized and too late to roll back. Responsible players on both sides need to exercise the wisdom that can't be imparted to machines and cooperate to mitigate risks. For Altman to suffer the other Oppenheimer moment would bring existential risk ever closer.<\/p>\n<p>One welcome sign is that U.S. Secretary of State Antony Blinken and Commerce Secretary Gina Raimondo acknowledged this week that "no country or company can shape the future of AI alone. … [O]nly with the combined focus, ingenuity and cooperation of the international community will we be able to fully and safely harness the potential of AI."<\/p>\n<p>So far, however, the initiatives they propose, essential as they are, remain constrained by strategic rivalry and limited to the democratic world. The toughest challenge for both the U.S. and China is to engage each other directly to blunt an AI arms race before it spirals out of control.
<\/p>\n<p>Read more:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.noemamag.com\/an-oppenheimer-moment-for-the-progenitors-of-ai\" title=\"An 'Oppenheimer Moment' For The Progenitors Of AI - NOEMA - Noema Magazine\">An 'Oppenheimer Moment' For The Progenitors Of AI - NOEMA - Noema Magazine<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nathan Gardels is the editor-in-chief of Noema Magazine. The movie director Christopher Nolan says he has spoken to AI scientists who are having an "Oppenheimer moment," fearing the destructive potential of their creation. "I'm telling the Oppenheimer story," he reflected on his biopic of the man, "because I think it's an important story, but also because it's absolutely a cautionary tale." Indeed, some are already comparing OpenAI's Sam Altman to the father of the atomic bomb. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/an-oppenheimer-moment-for-the-progenitors-of-ai-noema-noema-magazine\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-1116768","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116768"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1116768"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1116768\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1116768"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1116768"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1116768"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}