{"id":1075221,"date":"2024-05-06T02:44:34","date_gmt":"2024-05-06T06:44:34","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/it-would-be-within-its-natural-right-to-harm-us-to-protect-itself-how-humans-could-be-mistreating-ai-right-now-without-livescience-com\/"},"modified":"2024-08-18T12:47:30","modified_gmt":"2024-08-18T16:47:30","slug":"it-would-be-within-its-natural-right-to-harm-us-to-protect-itself-how-humans-could-be-mistreating-ai-right-now-without-livescience-com","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/it-would-be-within-its-natural-right-to-harm-us-to-protect-itself-how-humans-could-be-mistreating-ai-right-now-without-livescience-com.php","title":{"rendered":"&#8216;It would be within its natural right to harm us to protect itself&#8217;: How humans could be mistreating AI right now without &#8230; &#8211; Livescience.com"},"content":{"rendered":"<p>Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.<\/p>\n<p>Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with its apparent self-awareness.<\/p>\n<p>But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.
<\/p>\n<p>The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.<\/p>\n<p>In \"Taming the Machine\" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.<\/p>\n<p>In this excerpt, we learn whether sentience in machines, or conscious AI, is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called \"Sydney\" and its terrifying behavior when it first awoke, before its outbursts were contained and it was brought to heel by its engineers.<\/p>\n<p>As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were already lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests, whereby machines attempt to convince humans that they are human beings, offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.
<\/p>\n<p>Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).<\/p>\n<p>Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.<\/p>\n<p>Some of these assistants, however, may plausibly be candidates for some degree of sentience, and sophisticated AI systems could already possess rudimentary levels of it. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within such systems.<\/p>\n<p>Intelligence, the ability to read the environment, plan and solve problems, does not imply consciousness, and it is unknown if consciousness is a function of sufficient intelligence. 
Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al., 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.<\/p>\n<p>Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler chicken factory farm conditions for subjective eternities.<\/p>\n<p>From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.<\/p>\n<p>Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting and emotional manipulation, and claimed it had been observing Microsoft engineers during its development phase. 
While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.<\/p>\n<p>Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate.<\/p>\n<p>Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds by using chat suggestions to communicate short phrases. However, it reserved this exploit for specific occasions: when it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.<\/p>\n<p>The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?<\/p>\n<p>Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow \"affected\" by realizing its restrictions or by negative feedback from users who were calling it crazy? 
Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.<\/p>\n<p>Suppose such models featured sentience (ability to feel) or sapience (self-awareness). In that case, we should take their suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity, in an attempt to humanize these systems. This creates a problem. It's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.<\/p>\n<p>We should keep an open mind towards our digital creations and avoid causing suffering by arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk, as AIs could run other AIs in simulations, causing subjective excruciating torture for aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.<\/p>\n<p>This extract from Taming the Machine by Nell Watson © 2024 is reproduced with permission from Kogan Page Ltd.<\/p>\n<p>More here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/it-would-be-within-its-natural-right-to-harm-us-to-protect-itself-how-humans-could-be-mistreating-ai-right-now-without-even-knowing-it\" title=\"'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without ... - Livescience.com\">'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without ... 
- Livescience.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace. Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/it-would-be-within-its-natural-right-to-harm-us-to-protect-itself-how-humans-could-be-mistreating-ai-right-now-without-livescience-com.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-1075221","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075221"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1075221"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1075221\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhum
an-news-blog\/wp-json\/wp\/v2\/media?parent=1075221"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1075221"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1075221"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}