{"id":169657,"date":"2024-06-28T02:39:14","date_gmt":"2024-06-28T06:39:14","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/the-fast-and-the-deadly-when-artificial-intelligence-meets-weapons-of-mass-destruction-european-leadership-network\/"},"modified":"2024-08-18T12:47:39","modified_gmt":"2024-08-18T16:47:39","slug":"the-fast-and-the-deadly-when-artificial-intelligence-meets-weapons-of-mass-destruction-european-leadership-network","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/the-fast-and-the-deadly-when-artificial-intelligence-meets-weapons-of-mass-destruction-european-leadership-network.php","title":{"rendered":"The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction &#8211; European Leadership Network"},"content":{"rendered":"<p><p>This article was originally published for the German Federal Foreign Office's Artificial Intelligence and Weapons of Mass Destruction Conference 2024, held on the 28th of June, and can be read here. You can also read The implications of AI in nuclear decision-making, by ELN Policy Fellow Alice Saltini, who will be speaking on a panel at the conference.<\/p>\n<p>Artificial intelligence (AI) is a catalyst for many trends that increase the salience of nuclear, biological or chemical weapons of mass destruction (WMD). AI can facilitate and speed up the development or manufacturing of WMD or precursor technologies. With AI assistance, those who currently lack the necessary knowledge to produce fissile materials or toxic substances can acquire WMD capabilities. AI itself is of proliferation concern. As an intangible technology, it spreads easily, and its diffusion is difficult to control through supply-side mechanisms, such as export controls. 
At the intersection of nuclear weapons and AI, there are concerns about rising risks of inadvertent or intentional nuclear weapons use, reduced crisis stability and new arms races.<\/p>\n<p>To be sure, AI also has beneficial applications and can reduce WMD-related risks. AI can make transparency and verification instruments more effective and efficient because of its ability to process immense amounts of data and detect unusual patterns, which may indicate noncompliant behaviour. AI can also improve situational awareness in crisis situations.<\/p>\n<p>While efforts to explore and exploit the military dimension of AI are moving ahead rapidly, these beneficial dimensions of the AI-WMD intersection remain under-researched and under-used.<\/p>\n<p>The immediate challenge is to build guardrails around the integration of AI into the WMD sphere and to slow down the incorporation of AI into research, development, production, and planning for nuclear, biological and chemical weapons. Meanwhile, governments should identify risk mitigation measures and, at the same time, intensify their search for the best approaches to capitalise on the beneficial applications of AI in controlling WMD. Efforts to ensure that the international community is able to govern this technology rather than let it govern us have to address challenges at three levels of the AI and WMD intersection.<\/p>\n<p>First, AI can facilitate the development of biological, chemical or nuclear weapons by making research, development and production faster and more efficient. This is true even for old technologies like fissile material production, which remains expensive and requires large-scale industrial facilities. AI can help to optimise uranium enrichment or plutonium separation, two key processes in any nuclear weapons programme. 
<\/p>\n<p>The connection between AI and chemistry and biochemistry is particularly worrying. The Director General of the Organisation for the Prohibition of Chemical Weapons (OPCW) has warned of the potential risks that artificial intelligence-assisted chemistry may pose to the Chemical Weapons Convention and of the ease and speed with which novel routes to existing toxic compounds can be identified. This creates serious new challenges for the control of toxic substances and their precursors.<\/p>\n<p>Similar concerns exist with regard to biological weapons. Synthetic biology is in itself a dynamic field. But AI puts the development of novel chemical or biological agents through such new technologies on steroids. Rather than going through lengthy and costly lab experiments, AI can predict the biological effects of known and even unknown agents. A much-cited paper by Filippa Lentzos and colleagues describes an experiment during which an AI, in less than six hours and running on a standard hardware configuration, generated forty thousand molecules that 'scored within our desired threshold', meaning that these agents were likely more toxic than publicly known chemical warfare agents.<\/p>\n<p>Second, AI could ease access to nuclear, biological and chemical weapons by illicit actors by giving advice on how to develop and produce WMD or relevant technologies from scratch.<\/p>\n<p>To be sure, current commercial AI providers have instructed their AI models not to answer questions on how to build WMD or related technologies. But such limits will not remain impermeable. And in future, the problem may not be so much preventing the misuse of existing AI models but the proliferation of AI models or the technologies that can be used to build them. Only a fraction of all spending on AI is invested in the safety and security of such models. 
<\/p>\n<p>Third, the integration of AI into the WMD sphere can also lower the threshold for the use of nuclear, biological or chemical weapons. Thus, all nuclear weapon states have begun to integrate AI into their nuclear command, control, communication and information (NC3I) infrastructure. The ability of AI models to analyse large chunks of data at unprecedented speeds can improve situational awareness and help warn, for example, of incoming nuclear attacks. But at the same time, AI may also be used to optimise military strike options. Because of the lack of transparency around AI integration, fears that adversaries may be intent on conducting a disarming strike with AI assistance can increase, setting up a race to the bottom in nuclear decision-making.<\/p>\n<p>In a crisis situation, overreliance on AI systems that are unreliable or working with faulty data may create additional problems. Data may be incomplete or may have been manipulated. AI models themselves are not objective. These problems are structural and thus not easily fixed. A UNIDIR study, for example, found that gender norms and bias can be introduced into machine learning throughout its life cycle. Another inherent risk is that AI systems designed and trained for military uses are biased towards war-fighting rather than war avoidance, which would make de-escalation in a nuclear crisis much more difficult.<\/p>\n<p>The consensus among nuclear weapon states that a human always has to stay in the loop before a nuclear weapon is launched is important, but it remains a problem that understandings of human control may differ significantly.<\/p>\n<p>It would be a fool's errand to try to slow down AI's development. But we need to decelerate AI's convergence with the research, development, production, and military planning related to WMD. 
It must also be possible to prevent spillover from AI's integration into the conventional military sphere to applications leading to nuclear, biological, and chemical weapons use.<\/p>\n<p>Such deceleration and channelling strategies can build on some universal norms and prohibitions. But they will also have to be tailored to the specific regulative frameworks, norms and patterns regulating nuclear, biological and chemical weapons. The zero draft of the Pact for the Future, to be adopted at the September 2024 Summit of the Future, points in the right direction by suggesting a commitment by the international community to developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process, while also ensuring engagement with stakeholders from industry, academia, civil society and other sectors.<\/p>\n<p>Fortunately, efforts to improve AI governance on WMD do not need to start from scratch. At the global level, the prohibitions of biological and chemical weapons enshrined in the Biological and Chemical Weapons Conventions are all-encompassing: the general purpose criterion prohibits all chemical and biological agents that are not used peacefully, whether AI comes into play or not. But AI may test these prohibitions in various ways, including by merging biotechnology and chemistry seamlessly with other novel technologies. It is, therefore, essential that the OPCW monitors these developments closely.<\/p>\n<p>International Humanitarian Law (IHL) implicitly establishes limits on the military application of AI by prohibiting the indiscriminate and disproportionate use of force in war. 
The Group of Governmental Experts (GGE) on Lethal Autonomous Weapons under the Convention on Certain Conventional Weapons (CCW) is doing important work by attempting to spell out what the IHL requirements mean for weapons that act without human control. These discussions will, mutatis mutandis, also be relevant for any nuclear, biological or chemical weapons that would be reliant on AI functionalities that reduce human control.<\/p>\n<p>Shared concerns around the risks of AI and WMD have triggered a range of UN-based initiatives to promote norms around responsible use. The legal, ethical and humanitarian questions raised at the April 2024 Vienna Conference on Autonomous Weapons Systems are likely to inform debates and decisions around limits on AI integration into WMD development and employment, and particularly nuclear weapons use. After all, similar pressures to shorten decision times and improve the autonomy of weapons systems apply to nuclear as well as conventional weapons.<\/p>\n<p>From a regulatory point of view, it is advantageous that the market for AI-related products is still highly concentrated around a few big players. It is positive that some of the countries with the largest AI companies are also investing in the development of norms around responsible use of AI. It is obvious that these companies have agency and, in some cases, probably more influence on politics than small states.<\/p>\n<p>The Bletchley Declaration, adopted at the November 2023 AI Safety Summit in the UK, for example, highlighted the particular safety risks that arise at the frontier of AI. These could include risks that may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. 
The summits on Responsible Artificial Intelligence in the Military Domain (REAIM) are another effort at coalition-building around military AI that could help to establish the rules of the game.<\/p>\n<p>The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, agreed on in Washington in September 2023, confirmed important principles that also apply to the WMD sphere, including the applicability of international law and the need to implement appropriate safeguards to mitigate risks of failures in military AI capabilities. One step in this direction would be for the nuclear weapon states to conduct so-called failsafe reviews that would aim to comprehensively evaluate how control of nuclear weapons can be ensured at all times, even when AI-based systems are incorporated.<\/p>\n<p>All such efforts could and should be building blocks that can be incorporated into a comprehensive governance approach. Yet the risks of AI increasing the likelihood of nuclear weapons use are the most pressing. Artificial intelligence is not the only emerging and disruptive technology affecting international security. Space warfare, cyber, hypersonic weapons, and quantum technologies are all affecting nuclear stability. It is, therefore, particularly important that nuclear weapon states amongst themselves build a better understanding of, and confidence about, the limits of AI integration into NC3I.<\/p>\n<p>An understanding between China and the United States on guardrails around military misuse of AI would be the single most important measure to slow down the AI race. The fact that Presidents Xi Jinping and Joe Biden in November 2023 agreed that China and the United States have broad common interests, including on artificial intelligence, and to intensify consultations on that and other issues, was a much-needed sign of hope. 
Since then, however, China has been hesitating to actually engage in such talks.<\/p>\n<p>Meanwhile, relevant nations can lead by example when considering the integration of AI into the WMD realm. This concerns, first of all, the nuclear weapon states, which can demonstrate responsible behaviour by pledging, for example, that they would not use AI to interfere with the nuclear command, control and communication systems of their adversaries. All states should also practice maximum transparency when conducting experiments around the use of AI for biodefense activities, because such activities can easily be mistaken for offensive work. Finally, the German government's pioneering role in looking at the impact of new and emerging technologies on arms control has to be recognised. Its Rethinking Arms Control conferences, including the most recent conference on AI and WMD on June 28 in Berlin with key contributors such as the Director General of the OPCW, are particularly important. Such meetings can systematically and consistently investigate the AI-WMD interplay in a dialogue between experts and practitioners. If they can agree on what guardrails and speed bumps are needed, an important step toward effective governance of AI in the WMD sphere will have been taken.<\/p>\n<p>The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN's aim is to encourage debates that will help develop Europe's capacity to address the pressing foreign, defence, and security policy challenges of our time.<\/p>\n<p>Image credit: Free ai generated art image, public domain art CC0 photo. 
Mixed with Wikimedia Commons \/ Fastfission~commonswiki<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Originally posted here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/europeanleadershipnetwork.org\/commentary\/the-fast-and-the-deadly-when-artificial-intelligence-meets-weapons-of-mass-destruction\/\" title=\"The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction - European Leadership Network\">The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction - European Leadership Network<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> This article was originally published for the German Federal Foreign Office's Artificial Intelligence and Weapons of Mass Destruction Conference 2024, held on the 28th of June, and can be read here. You can also read The implications of AI in nuclear decision-making, by ELN Policy Fellow Alice Saltini, who will be speaking on a panel at the conference. Artificial intelligence (AI) is a catalyst for many trends that increase the salience of nuclear, biological or chemical weapons of mass destruction (WMD). 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/the-fast-and-the-deadly-when-artificial-intelligence-meets-weapons-of-mass-destruction-european-leadership-network.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-169657","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169657"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=169657"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169657\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=169657"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=169657"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=169657"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}