{"id":1075277,"date":"2023-11-16T15:06:21","date_gmt":"2023-11-16T20:06:21","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/ai-2023-risks-regulation-an-existential-threat-to-humanity-rte-ie\/"},"modified":"2024-08-18T12:48:21","modified_gmt":"2024-08-18T16:48:21","slug":"ai-2023-risks-regulation-an-existential-threat-to-humanity-rte-ie","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/ai-2023-risks-regulation-an-existential-threat-to-humanity-rte-ie.php","title":{"rendered":"AI 2023: risks, regulation &amp; an &#8216;existential threat to humanity&#8217; &#8211; RTE.ie"},"content":{"rendered":"<p>Opinion: AI's quickening pace of development has led to a plethora of coverage and concern over what might come next<\/p>\n<p>These days the public is inundated with news stories about the rise of artificial intelligence and the ever-quickening pace of development in the field. The last year has been particularly noteworthy in this regard, and the biggest stories came as ChatGPT was introduced to the world in November 2022.<\/p>\n<p>This is one of many generative AI systems which can almost instantaneously create text on any topic, in any style, of any length, and at a human level of performance. Of course, the text might not be factual, nor might it make sense, but it almost always does.<\/p>\n<p>ChatGPT is a \"large language model\". It's large in that it has been trained on enormous amounts of text (almost everything that is available in a computer-readable form) and it produces extremely sophisticated output of a level of competence we would expect of a human. This can be seen as a big sibling to the predictive text system on your smartphone that helps by predicting the next word you might want to type.
<\/p>\n<p>From RTÉ 2fm's Dave Fanning Show, Prof Barry O'Sullivan on the rise of AI<\/p>\n<p>ChatGPT doesn't do this just at the word level, but at the level of entire passages of text. It can also compose answers to complex queries from the user. For example, given the prompt \"how can I make something that flies from cardboard?\", ChatGPT answers with clear instructions, explains the principles of flight that can be utilised and shows how to incorporate them into your design.<\/p>\n<p>The most powerful AI systems, those using machine learning, are built using huge amounts of data. Arthur C. Clarke said that \"any sufficiently advanced technology is indistinguishable from magic\". For many years now, there has been growing evidence that the manner in which these systems are created can have considerable negative consequences. For example, AI systems have been shown to replicate and magnify human biases, including gender and racial biases, often due to hidden biases in the data used to train them. They have also been shown to be brittle, in the sense that they can be easily fooled by carefully formulated or manipulated queries.<\/p>\n<p>AI systems have also been built to perform tasks that raise considerable ethical questions, such as predicting the sexual orientation of individuals. There is growing concern about the impact of AI on employment and the future of work. Will AI automate so many tasks that entire jobs will disappear, and will this lead to an unemployment crisis? These risks are often referred to as the \"short-term\" risks of AI. 
On the back of issues like these, there is a considerable focus on the ethics of AI, on how AI can be made trustworthy and safe, and on the many international initiatives related to the regulation of AI.<\/p>\n<p>From RTÉ Radio 1's Morning Ireland, Prof Barry O'Sullivan discusses an open letter signed by key figures in artificial intelligence who want the development of powerful AI systems to be suspended amid fears of a threat to humanity.<\/p>\n<p>We have recently also seen a considerable focus on the \"long-term\" risks of AI, which tend to be far more dystopian. Some believe that general purpose AI and, ultimately, artificial general intelligence are on the horizon. Today's AI systems, often referred to as \"narrow AI systems\", tend to be capable of performing one task well, such as navigation, movie recommendation, production scheduling or medical diagnosis.<\/p>\n<p>On the other hand, general purpose AI systems can perform many different tasks at a human level of performance. Take a step further and artificial general intelligence systems would be able to perform all the tasks that a human can, and with far greater reliability.<\/p>\n<p>Whether we will ever get to that point, or even whether we would really want to, is a matter of debate in the AI community and beyond. However, these systems would introduce a variety of risks, including the extreme situation where AI systems become so advanced that they pose an existential threat to humanity. 
Those who argue that we should be concerned about these risks sometimes compare artificial general intelligence to an alien race, arguing that the existence of this extraordinarily advanced technology would be tantamount to us living with an advanced race of super-human aliens.<\/p>\n<p>From RTÉ Radio 1's This Week: fears over AI becoming too powerful and endangering humans have been a regular sci-fi theme in film and TV for decades, but could they become a reality?<\/p>\n<p>While I strongly believe that we need to address both the short-term and long-term risks associated with AI, we should not let the dystopian elements distract our focus from the very real issues raised by AI today. In terms of an existential threat to humanity, the clear and present danger comes from climate change rather than artificial general intelligence. We already see the impacts of climate change across the globe and throughout society. Flooding, impacts on food production and the risks to human wellbeing are real and immediate concerns.<\/p>\n<p>Just like the role AI played in the discovery of the Covid-19 vaccines, the technology has a lot to offer in dealing with climate change. For almost two decades, the field of computational sustainability has applied the methods of artificial intelligence, data science, mathematics and computer science to the challenge of balancing societal, economic and environmental resources to secure the future well-being of humanity, very much addressing the Sustainable Development Goals agenda.<\/p>\n<p>AI has been used to design sustainable and climate-friendly policies. 
It has been used to efficiently manage fisheries and to plan and monitor natural resources and industrial production. Rather than being seen as an existential threat to humanity, AI should be seen as a tool to help with the greatest threat to humanity today: climate change.<\/p>\n<p>Of course, we cannot let AI develop without guardrails and without proper oversight. I am confident that, given the active debate about the risks of AI and the regulatory frameworks being put in place internationally, we will tame the genie that is AI.<\/p>\n<p>Prof Barry O'Sullivan appears on Game Changer: AI & You, which airs on RTÉ One at 10:15pm tonight<\/p>\n<p>The views expressed here are those of the author and do not represent or reflect the views of RTÉ<\/p>\n<p>More:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.rte.ie\/brainstorm\/2023\/1116\/1416881-artificial-intelligence-risks-rewards-chatgpt-generative-ai-agi\" title=\"AI 2023: risks, regulation &amp; an 'existential threat to humanity' - RTE.ie\">AI 2023: risks, regulation &amp; an 'existential threat to humanity' - RTE.ie<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Opinion: AI's quickening pace of development has led to a plethora of coverage and concern over what might come next These days the public is inundated with news stories about the rise of artificial intelligence and the ever-quickening pace of development in the field. 
The last year has been particularly noteworthy in this regard, and the biggest stories came as ChatGPT was introduced to the world in November 2022 <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-general-intelligence\/ai-2023-risks-regulation-an-existential-threat-to-humanity-rte-ie.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234933],"tags":[],"class_list":["post-1075277","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075277"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1075277"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075277\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1075277"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1075277"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1075277"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}