{"id":1124345,"date":"2024-04-27T12:09:18","date_gmt":"2024-04-27T16:09:18","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/scientists-inspired-the-right-guardrails-for-nuclear-energy-the-internet-and-dna-research-let-them-do-the-same-for-ai-fortune\/"},"modified":"2024-04-27T12:09:18","modified_gmt":"2024-04-27T16:09:18","slug":"scientists-inspired-the-right-guardrails-for-nuclear-energy-the-internet-and-dna-research-let-them-do-the-same-for-ai-fortune","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/transhuman-news-blog\/dna\/scientists-inspired-the-right-guardrails-for-nuclear-energy-the-internet-and-dna-research-let-them-do-the-same-for-ai-fortune\/","title":{"rendered":"Scientists inspired the right guardrails for nuclear energy, the internet, and DNA research. Let them do the same for AI &#8211; Fortune"},"content":{"rendered":"<p>In July 1957, 22 prominent scientists gathered quietly at a private lodge in Pugwash, a small town in Canada's Nova Scotia province. They had answered a call to action by Albert Einstein, inviting scientists to shape guardrails that would contain the danger of nuclear weapons. The Pugwash Conference earned a Nobel Peace Prize and, more importantly, laid the foundations for the nuclear non-proliferation treaties, which saved the world from the risk of annihilation.<\/p>\n<p>Today, governments and businesses are frantically searching for ways to limit the many feared perils of AI, especially those from Artificial General Intelligence (AGI), the next phase of AI evolution. AGI will perform a wide range of cognitive tasks with an efficiency and accuracy far superior to current AI systems. This next stage of AI, often referred to by Silicon Valley enthusiasts as God-like, is expected to surpass human intelligence and efficiency by a substantial margin. 
It is rumored that an internal report on the risks of AGI may be what ignited the recent board drama at OpenAI, the maker of ChatGPT. But while the race to build AGI is still in progress, we can be certain that whoever controls it will have enormous sway over society and the economy, potentially exerting influence on the lives of humans everywhere.<\/p>\n<p>In the past year, numerous uncoordinated efforts by governments and businesses to contain AI sprang up across the world, in the U.S., China, the EU, and the U.K. Businesses have been pleading with governments to regulate their AI creations, while knowing full well that governments will never succeed in regulating effectively at the speed of AI evolution. The EU recently completed a multi-year effort to deliver the AI Act. However, the shifts in generative AI capabilities mean that by the time it is enacted in 2025, the new AI Act may already be outdated.<\/p>\n<p>Governments are not equipped to outgallop fast-moving technologies with effective rules and policies, especially in the early, hyperfast stages of development. Moreover, AI technologies have a transnational, borderless reach, limiting the effectiveness of national and regional rule systems to govern them. As for businesses, they are in intense competition to dominate and profit from these technologies. In such a race, fueled by billions in investment, safety guardrails are inevitably a low priority for most businesses.<\/p>\n<p>Ironically, governments and businesses are in fact the two stakeholders most in need of guardrails to prevent them from misusing AI in surveillance, warfare, and other endeavors to influence or control the public.<\/p>\n<p>A careful analysis of how prior technologies and scientific innovations were tamed in the 20th century offers a clear answer to this dilemma. 
Guardrails were designed by scientists who knew their own creations and understood (better than most) how they might evolve.<\/p>\n<p>At Pugwash, influential scientists came together to develop strategies to mitigate the risks of nuclear weapons, significantly contributing to the formulation of arms control agreements and fostering international dialogue during the tense Cold War era.<\/p>\n<p>In February 1975, at the Asilomar Conference in California, it was again scientists who met and successfully established critical guidelines for the safe and ethical conduct of DNA research, thereby preventing potential biohazards. The Asilomar guidelines not only paved the way for responsible scientific inquiry but also informed regulatory policies worldwide. More recently, it was again the scientists and inventors of the Internet, led by Vint Cerf, who convened and shaped the framework of guardrails and protocols that made the Internet thrive globally.<\/p>\n<p>All these successful precedents are proof that we need businesses and governments to first make space and let AI scientists shape a framework of guardrails that contains the risks without limiting the many benefits of AI. Businesses can then implement such a framework voluntarily, and only when necessary should governments step in to enforce its implementation by enacting policies and laws based on the scientists' framework. This proven approach worked well for nuclear technology, DNA, and the Internet. It should be the blueprint for building safer AI.<\/p>\n<p>A Pugwash Conference for AI scientists is therefore urgently needed. The conference should include no more than two dozen scientists, in the mold of Geoffrey Hinton, who chose to quit Google in order to speak his mind on AI's promise and perils.<\/p>\n<p>As at Pugwash, the scientists should be chosen from all the key countries where advanced AI 
technologies are developing, in order to at least strive for a global consensus. Most importantly, the choice of participants at this seminal AI conference must reassure the public that the conferees are shielded from special interests, geopolitical pressures, and profit-centric motives.<\/p>\n<p>While hundreds of government leaders and business bosses will cozy up to discuss AI at multiple annual international events, thoughtful and independent AI scientists must urgently get together to make AI good for all.<\/p>\n<p>Fadi Chehadé is chairman, cofounder, and managing partner of Ethos Capital. He founded several software companies and was a fellow at Harvard and Oxford. From 2012 to 2016 he led ICANN, the technical institution that sets the global rules and policies for the internet's key resources.<\/p>\n<p>The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.<\/p>\n<p>See the rest here:<br \/>\n<a target=\"_blank\" href=\"https:\/\/fortune.com\/2024\/04\/26\/scientists-inspired-right-guardrails-nuclear-energy-internet-dna-research-ai-risk-humanity-tech-ethics\/\" title=\"Scientists inspired the right guardrails for nuclear energy, the internet, and DNA research. Let them do the same for AI - Fortune\" rel=\"noopener\">Scientists inspired the right guardrails for nuclear energy, the internet, and DNA research. Let them do the same for AI - Fortune<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In July 1957, 22 prominent scientists gathered quietly at a private lodge in Pugwash, a small town in Canada's Nova Scotia province. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/transhuman-news-blog\/dna\/scientists-inspired-the-right-guardrails-for-nuclear-energy-the-internet-and-dna-research-let-them-do-the-same-for-ai-fortune\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1124345","post","type-post","status-publish","format-standard","hentry","category-dna"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1124345"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1124345"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1124345\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1124345"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1124345"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1124345"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}