{"id":1125754,"date":"2024-06-06T08:48:49","date_gmt":"2024-06-06T12:48:49","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/agi-in-less-than-5-years-says-former-openai-employee-99bitcoins\/"},"modified":"2024-06-06T08:48:49","modified_gmt":"2024-06-06T12:48:49","slug":"agi-in-less-than-5-years-says-former-openai-employee-99bitcoins","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/agi-in-less-than-5-years-says-former-openai-employee-99bitcoins\/","title":{"rendered":"AGI in Less Than 5 Years, Says Former OpenAI Employee &#8211; 99Bitcoins"},"content":{"rendered":"<p>Dig into the latest AI crypto news as we explore the future of artificial general intelligence (AGI) and its potential to surpass human abilities, according to Leopold Aschenbrenner's essay.<\/p>\n<p>AGI in less than 5 years. How do you intend to spend your last few years alive? (Kidding.)<\/p>\n<p>The internet's on fire after former OpenAI safety researcher Leopold Aschenbrenner released Situational Awareness, a no-holds-barred essay series on the future of artificial general intelligence.<\/p>\n<p>It is 165 pages long and fresh as of June 4, and it examines where AI stands now and where it's headed.<\/p>\n<p>(Twitter)<\/p>\n<p>In some ways, this is: LINE GOES UP OOOOOOOOOOH IT'S HAPPENING IT'S HAPPENING.<\/p>\n<p>It is reminiscent of this old Simpsons joke:<\/p>\n<p>But Aschenbrenner envisions AGI systems becoming smarter than you or me by the decade's end, ushering in an era of true superintelligence. Alongside this rapid advancement, he warns of national security implications not seen in decades.
<\/p>\n<p>“AGI by 2027 is strikingly plausible,” Aschenbrenner claims, suggesting that AGI machines will outperform college graduates by 2025 or 2026. To put this in perspective, suppose GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.<\/p>\n<p>Aschenbrenner urges the AI community to adopt what he terms AGI realism, a viewpoint grounded in three core principles related to national security and AI development in the U.S.<\/p>\n<p>He argues that the industry's smartest minds, like Ilya Sutskever, who famously failed to unseat CEO Sam Altman in 2023, are converging on this perspective, acknowledging the imminent reality of AGI.<\/p>\n<p>Aschenbrenner's latest insights follow his controversial exit from OpenAI amid accusations of leaking information.<\/p>\n<p>DISCOVER: The Best AI Crypto to Buy in Q2 2024<\/p>\n<p>On Tuesday, a dozen-plus staffers from AI heavyweights like OpenAI, Anthropic, and Google's DeepMind raised red flags about AGI.<\/p>\n<p>Their open letter cautions that without extra protections, AI might become an existential threat.<\/p>\n<p>“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the letter states. “We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”<\/p>\n<p>The letter takes aim at AI giants for dodging oversight in favor of fat profits. DeepMind's Neel Nanda broke ranks as the only internal researcher to endorse the letter.
<\/p>\n<p>AI is quickly becoming a battleground, but the message of the letter is simple: don't punish employees for speaking out on AI dangers.<\/p>\n<p>On the one hand, it can be scary to think that human creativity and the boundaries of thought are being closed in by politically correct code monkeys tinkering with matrix multiplication.<\/p>\n<p>On the other, the power of artificial intelligence is currently incomprehensible because it is unlike anything we have understood before.<\/p>\n<p>It could be a revolution, just as when the first human struck a spark or set a stone wheel spinning: one moment it didn't exist, and the next it changed the face of humanity. We'll see.<\/p>\n<p>EXPLORE: A Complete List of Bitcoin-Friendly Countries<\/p>\n<p>Disclaimer: Crypto is a high-risk asset class. This article is provided for informational purposes and does not constitute investment advice. You could lose all of your capital.<\/p>\n<p>Continue reading here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/99bitcoins.com\/agi-in-less-than-5-years-says-former-openai-employee\" title=\"AGI in Less Than 5 Years, Says Former OpenAI Employee - 99Bitcoins\">AGI in Less Than 5 Years, Says Former OpenAI Employee - 99Bitcoins<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dig into the latest AI crypto news as we explore the future of artificial general intelligence (AGI) and its potential to surpass human abilities, according to Leopold Aschenbrenner's essay. AGI in less than 5 years. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/agi-in-less-than-5-years-says-former-openai-employee-99bitcoins\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1214666],"tags":[],"class_list":["post-1125754","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1125754"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1125754"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1125754\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1125754"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1125754"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1125754"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}