{"id":1118195,"date":"2023-09-29T19:11:55","date_gmt":"2023-09-29T23:11:55","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/opinion-elon-musk-geoff-hinton-and-the-war-over-a-i-the-new-york-times\/"},"modified":"2023-09-29T19:11:55","modified_gmt":"2023-09-29T23:11:55","slug":"opinion-elon-musk-geoff-hinton-and-the-war-over-a-i-the-new-york-times","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/mars-colony\/opinion-elon-musk-geoff-hinton-and-the-war-over-a-i-the-new-york-times\/","title":{"rendered":"Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. &#8211; The New York Times"},"content":{"rendered":"<p><p>      There is no shortage of researchers and industry titans      willing to warn us about the potential destructive power of      artificial intelligence. Reading the headlines, one would      hope that the rapid gains in A.I. technology have also      brought forth a unifying realization of the risks  and the      steps we need to take to mitigate them.    <\/p>\n<p>      The reality, unfortunately, is quite different. Beneath      almost all of the testimony, the manifestoes, the blog posts      and the public declarations issued about A.I. are battles      among deeply divided factions. Some are concerned about      far-future risks that sound like science fiction. Some are      genuinely alarmed by the practical problems that chatbots and      deepfake video generators are creating right now. Some are      motivated by potential business revenue, others by national      security concerns.    <\/p>\n<p>      The result is a cacophony of coded language, contradictory      views and provocative policy demands that are undermining our      ability to grapple with a technology destined to drive the      future of politics, our economy and even our daily lives.    <\/p>\n<p>      These factions are in dialogue not only with the public but      also with one another. Sometimes, they trade letters, opinion      essays or social threads outlining their positions and      attacking others in public view. More often, they tout their      viewpoints without acknowledging alternatives, leaving the      impression that their enlightened perspective is the      inevitable lens through which to view A.I. But if lawmakers      and the public fail to recognize the subtext of their      arguments, they risk missing the real consequences of our      possible regulatory and cultural paths forward.    <\/p>\n<p>      To understand the fight and the impact it may have on our      shared future, look past the immediate claims and actions of      the players to the greater implications of their points of      view. When you do, youll realize this isnt really a debate      only about A.I. Its also a contest about control and power,      about how resources should be distributed and who should be      held accountable.    <\/p>\n<p>      Beneath this roiling discord is a true fight over the future      of society. Should we focus on avoiding the dystopia of mass      unemployment, a world where China is the dominant superpower      or a society where the worst prejudices of humanity are      embodied in opaque algorithms that control our lives? Should      we listen to wealthy futurists who discount the importance of      climate change because theyre already thinking ahead to      colonies on Mars? 
It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions. One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics. By decoding who is speaking and how A.I. is being described, we can explore where these groups differ and what drives their views.

The loudest perspective is a frightening, dystopian vision in which A.I. poses an existential risk to humankind, capable of wiping out all life on Earth. A.I., in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. A.I. could destroy humanity or pose a risk "on par with nukes." If we're not careful, it could kill everyone or enslave humanity. It's likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth's resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the A.I. safety people, and their ranks include the "Godfathers of A.I.," Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other A.I. tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-termism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable-sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-termism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies).
And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It's widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of A.I. safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from A.I. The technology historian David C. Brock calls these fears "wishful worries," that is, problems that it would be nice to have, in contrast to the actual agonies of the present.

More practically, many of the researchers in this group are proceeding full steam ahead in developing A.I., demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, that the dangers will not be sudden and that we will have time to change course. While we shouldn't dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns. Let's not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there's plenty already happening to cause concern: racist policing and legal systems that disproportionately arrest and punish people of color; sexist labor systems that rate feminine-coded résumés lower; superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential-risk narrative is a distressingly familiar vision of dystopia: a society in which humanity's worst instincts are encoded into and enforced by machines. The doomsayers think A.I. enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these A.I. ethics concerns, like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O'Neil, have been raising the alarm on inequities coded into A.I. for years. Although we don't have a census, it's noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: when Joy Buolamwini founded an organization to fight for equitable A.I., she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside, or even above, their self-interest.
They point to social media companies' failure to control hate speech, or to how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google's A.I. ethics team, was dismissed for pointing out the risks of developing ever-larger A.I. language models.

While doomsayers and reformers share the concern that A.I. must align with human interests, reformers tend to push back hard against the doomsayers' focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I.: misinformation, surveillance and inequity. Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.

This group's concerns are well documented and urgent, and far older than modern A.I. technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.

Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security. One version has a post-9/11 ring to it: a world where terrorists, criminals and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an A.I. arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI's Sam Altman and Meta's Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups. In the lobbying battles over Europe's trailblazing A.I. regulatory framework, U.S. megacompanies pleaded to exempt their general-purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations to noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, "The answer to our challenges is not to slow down technology but to accelerate it."

Any technology critical to national defense usually has an easier time avoiding oversight, regulation and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight.
Tech moguls like Google's former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in U.S. national security concerns.

The warriors' narrative seems to miss the fact that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.

As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up's business plan. Cosma Shalizi and Henry Farrell further argue that we've lived among shoggoths for centuries, tending to them as though they were our masters, as monopolistic platforms devour and exploit the totality of humanity's labor and ingenuity for their own interests. This dread applies as much to our future with A.I. as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.

By analogy to the health care sector, we need an A.I. "public option" to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century's key technology, while offering a platform for the ethical development and use of A.I.

Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness A.I. to accumulate much more or pursue extreme ideologies, let's think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

See more here: Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times (https://www.nytimes.com/2023/09/28/opinion/ai-safety-ethics-effective.html)