{"id":1115205,"date":"2023-06-02T20:17:40","date_gmt":"2023-06-03T00:17:40","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/coalition-of-ai-leaders-sees-societal-scale-risks-from-the-siliconangle-news\/"},"modified":"2023-06-02T20:17:40","modified_gmt":"2023-06-03T00:17:40","slug":"coalition-of-ai-leaders-sees-societal-scale-risks-from-the-siliconangle-news","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ayn-rand\/coalition-of-ai-leaders-sees-societal-scale-risks-from-the-siliconangle-news\/","title":{"rendered":"Coalition of AI leaders sees &#8216;societal-scale risks&#8217; from the &#8230; &#8211; SiliconANGLE News"},"content":{"rendered":"<p><p>A statement issued today and signed by more than 375 computer scientists, academics and business leaders warns of profound risks of artificial intelligence misuse and says the potential problems posed by the technology should be given the same urgency as pandemics and nuclear war.<\/p>\n<p>&#8220;Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,&#8221; said the statement issued by the Center for AI Safety, a nonprofit organization dedicated to reducing AI risk.<\/p>\n<p>The statement caps a flurry of recent calls by AI researchers and companies developing AI-based technologies to impose some form of government regulation on models to prevent them from being misused or creating unintended negative consequences.<\/p>\n<p>Earlier this month, OpenAI LLC Chief Executive Sam Altman told the Senate Judiciary subcommittee that the U.S. 
government should consider licensing or registration requirements on AI models and that companies developing them should adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results.<\/p>\n<p>Microsoft Corp. last week called for a set of regulations to be imposed on systems used in critical infrastructure, as well as expanded laws clarifying the legal obligations of AI models and labels that make it clear when a computer produces an image or video.<\/p>\n<p>Altman and OpenAI co-founder Ilya Sutskever were among the signatories to today&#8217;s statement. Others include Demis Hassabis, chief executive of Google LLC&#8217;s DeepMind; Microsoft Chief Technology Officer Kevin Scott; cybersecurity expert Bruce Schneier; and the co-founders of safe AI unicorn Anthropic PBC. Geoffrey Hinton, a Turing Award winner who earlier this month left Google over concerns about AI&#8217;s potential for misuse, also signed the statement.<\/p>\n<p>The Center for AI Safety cites eight principal risks that are inherent in AI. These include military weaponization, malicious misinformation and &#8220;enfeeblement,&#8221; in which humanity loses the ability to self-govern and becomes completely dependent on machines, similar to the scenario portrayed in the film &#8220;WALL-E.&#8221;<\/p>\n<p>The center also expresses concerns that highly competent systems could give small groups of people too much power, exhibit unexplainable behavior and even intentionally deceive humans.<\/p>\n<p>All this activity comes after the public release last November of OpenAI&#8217;s ChatGPT intelligent chatbot. The uncanny humanlike interactive capabilities of the generative model have galvanized attention around AI&#8217;s potential to replace human labor and have given birth to a host of competitors. 
<\/p>\n<p>However, subsequent media reports detailing the tendency of models to sometimes exhibit bizarre and hallucinatory behavior have also raised concerns about the &#8220;black box&#8221; nature of some AI models and sparked calls for better transparency and accountability.<\/p>\n<p>The statement drew praise from many quarters. &#8220;If large language models continue to advance, they will surpass human ability tenfold,&#8221; wrote Nimrod Partush, vice president of AI and data science at cybersecurity analytics firm Cyesec Ltd., in emailed comments. &#8220;There is potential for a real existential risk for mankind. I am leaning toward seeing AI as a benevolent force for humanity, but I would still recommend extreme precautions.&#8221;<\/p>\n<p>&#8220;Philosophically I believe the private sector should take care of AI governance but that&#8217;s not going to happen,&#8221; said Ken Cox, president of web hosting service Hostirian LLC. &#8220;Unfortunately, I believe the government should have some regulations on AI, but they need to be minimal and we need great leaders stepping up and educating through the process.&#8221;<\/p>\n<p>However, not everyone is convinced of AI&#8217;s doomsday potential, and some questioned the group&#8217;s motives in publicizing the statement so aggressively.<\/p>\n<p>&#8220;If the top executives of the top AI companies believe AI creates a risk of human extinction, why don&#8217;t they stop working on it instead of publishing press releases?&#8221; wrote software developer Dare Obasanjo on Bluesky Social.<\/p>\n<p>&#8220;Their macho chest-thumping is pure marketing,&#8221; wrote media pundit Jeff Jarvis.<\/p>\n<p>&#8220;To the extent these risks are real, and many of them are, it&#8217;s up to them, the developers and companies that own this technology and will use it, to come together and create industry standards,&#8221; tweeted Yaron Brook, chairman of the board at the Ayn Rand Institute. &#8220;Stop running to government to solve your issues.&#8221; 
<\/p>\n<p>Business executives have been transfixed by the topic. A recent survey of senior executives by Gartner Inc. found that AI is the technology they believe will most significantly impact their industries over the next three years.<\/p>\n<p>Here is the original post: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/siliconangle.com\/2023\/05\/30\/coalition-ai-leaders-sees-societal-scale-risks-technologys-misuse\/\" title=\"Coalition of AI leaders sees 'societal-scale risks' from the ... - SiliconANGLE News\">Coalition of AI leaders sees 'societal-scale risks' from the ... - SiliconANGLE News<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A statement issued today and signed by more than 375 computer scientists, academics and business leaders warns of profound risks of artificial intelligence misuse and says the potential problems posed by the technology should be given the same urgency as pandemics and nuclear war. &#8220;Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,&#8221; said the statement issued by the Center for AI Safety, a nonprofit organization dedicated to reducing AI risk. The statement caps a flurry of recent calls by AI researchers and companies developing AI-based technologies to impose some form of government regulation on models to prevent them from being misused or creating unintended negative consequences. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ayn-rand\/coalition-of-ai-leaders-sees-societal-scale-risks-from-the-siliconangle-news\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187828],"tags":[],"class_list":["post-1115205","post","type-post","status-publish","format-standard","hentry","category-ayn-rand"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115205"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1115205"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115205\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1115205"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1115205"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1115205"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}