{"id":1075345,"date":"2024-04-04T02:45:10","date_gmt":"2024-04-04T06:45:10","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/ai-safety-researcher-warns-theres-a-99-999999-probability-ai-will-end-humanity-but-elon-musk-conservatively-windows-central\/"},"modified":"2024-08-18T12:49:17","modified_gmt":"2024-08-18T16:49:17","slug":"ai-safety-researcher-warns-theres-a-99-999999-probability-ai-will-end-humanity-but-elon-musk-conservatively-windows-central","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/elon-musk\/ai-safety-researcher-warns-theres-a-99-999999-probability-ai-will-end-humanity-but-elon-musk-conservatively-windows-central.php","title":{"rendered":"AI safety researcher warns there&#8217;s a 99.999999% probability AI will end humanity, but Elon Musk &quot;conservatively &#8230; &#8211; Windows Central"},"content":{"rendered":"<p><p>What you need to know    <\/p>\n<p>    Generative    AI can be viewed as a beneficial or harmful tool.    Admittedly, we've seen impressive feats across        medicine, computing, education, and more fueled by AI. But    on the flipside, critical and concerning issues have been    raised about the technology, from     Copilot's alter ego  Supremacy AGI demanding to be    worshipped to     AI demanding an outrageous amount of water for cooling, not    forgetting the     power consumption concerns.  <\/p>\n<p>    Elon Musk has been rather vocal about his views on AI, brewing    a lot of controversies around the topic. Recently, the    billionaire referred to AI as the \"biggest    technology revolution,\" but indicated there won't be enough    power by 2025, ultimately hindering further development in the    landscape.   
<\/p>\n<p>    While at the Abundance Summit, Elon Musk indicated that    \"there's some chance that it will end humanity.\" And while the    billionaire didn't share how he came to this conclusion, he    says there's a 10 to 20 percent chance AI might end humanity    (via     Business Insider).  <\/p>\n<p>    Strangely enough, Musk thinks that potential growth areas and    advances in the AI landscape should still be explored, citing    \"I think that the probable positive scenario outweighs the    negative scenario.\"  <\/p>\n<\/p>\n<p>    While speaking to Business Insider, an AI safety researcher and    director of the Cyber Security Laboratory at the University of    Louisville, Roman Yampolskiy disclosed that the probability of    AI ending humanity is much higher. He referred to Musk's 10 to    20 percent estimate as \"too conservative.\"  <\/p>\n<p>    READ MORE:     Microsoft President compares AI to the Terminator  <\/p>\n<p>    The AI safety researcher says the risk is exponentially high,    referring to it as \"p(doom).\" For context, p(doom) refers to    the probability of generative AI taking over humanity or even    worse  ending it.  <\/p>\n<p>            All the latest news, reviews, and guides for Windows            and Xbox diehards.          <\/p>\n<p>    We all know the privacy and security concerns revolving around    AI, the battle between the US and China is a great reference    point. Last year, the US imposedexport    rules preventing chipmakers like NVIDIA from shipping chips to    China(includingthe    GeForce RTX 4090).  <\/p>\n<p>    The US government categorically indicated that the move wasn't    designed to rundown China's economy, but a safety measure    designed to prevent the use of AI in military advances.  <\/p>\n<p>    Elon Musk raised similar concerns about     OpenAI's GPT-4 model in his     suit against the AI startup and its CEO Sam Altman. 
The lack of elaborate measures and guardrails to prevent the technology from spiraling out of control is alarming. Musk says the model constitutes AGI and wants its research, findings, and technological advances to be easily accessible to the public.<\/p>\n<p>Most researchers and executives familiar with p(doom) place the risk of AI taking over humanity anywhere from 5 to 50 percent, as seen in The New York Times. On the other hand, Yampolskiy says the risk is extremely high, with a 99.999999% probability. The researcher says it's virtually impossible to control AI once superintelligence is attained, and the only way to prevent this is not to build it.<\/p>\n<p>In a separate interview, Musk said:<\/p>\n<p>\"I think we really are on the edge of probably the biggest technology revolution that has ever existed. You know, there's supposedly a Chinese curse: 'May you live in interesting times.' Well, we live in the most interesting of times. For a while, it was making me a bit depressed, frankly. I was like, Well, will they take over? Will we be useless?\"<\/p>\n<p>Musk shared these comments while talking about Tesla's Optimus program, and added that humanoid robots are just as good as humans at handling complex tasks. He jokingly indicated that he hoped the robots would be nice to us if\/when the evolution starts.<\/p>\n<p>Read more: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.windowscentral.com\/software-apps\/ai-safety-researcher-warns-theres-a-99999999-probability-ai-will-end-humanity-but-elon-musk-conservatively-dwindles-it-down-to-20-and-says-it-should-be-explored-more-despite-inevitable-doom\" title=\"AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk &quot;conservatively ... 
- Windows Central\">AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk &quot;conservatively ... - Windows Central<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> What you need to know Generative AI can be viewed as a beneficial or harmful tool. Admittedly, we've seen impressive feats across medicine, computing, education, and more fueled by AI. But on the flipside, critical and concerning issues have been raised about the technology, from Copilot's alter ego Supremacy AGI demanding to be worshipped to AI demanding an outrageous amount of water for cooling, not forgetting the power consumption concerns.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/elon-musk\/ai-safety-researcher-warns-theres-a-99-999999-probability-ai-will-end-humanity-but-elon-musk-conservatively-windows-central.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[612435],"tags":[],"class_list":["post-1075345","post","type-post","status-publish","format-standard","hentry","category-elon-musk"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075345"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1075345"}],"version-his
tory":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075345\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1075345"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1075345"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1075345"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}