Nuclear weapons and artificial intelligence are two technologies that have scared the living daylights out of people for a long time. These fears have been most vividly expressed through imaginative novels, films, and television shows. Nuclear terror gave us Nevil Shute's On the Beach, Kurt Vonnegut's Cat's Cradle, Judith Merril's Shadow on the Hearth, Nicholas Meyer's The Day After, and more recently Jeffrey Lewis's The 2020 Commission Report. Anxieties about artificial intelligence begat Jack Williamson's With Folded Hands, William Gibson's Neuromancer, Alex Garland's Ex Machina, and Jonathan Nolan and Lisa Joy's Westworld. Combine these fears and you might get something like Sarah Connor's playground dream sequence in Terminator 2 or the "desert of the real" that Morpheus presents to Neo in The Matrix.
While strategists have generally offered more sober explorations of the future relationship between AI and nuclear weapons, some of the most widely received musings on the issue, including a recent call for an AI-enabled "dead hand" to update America's aging nuclear command, control, and communications infrastructure, tend to obscure more than they illuminate due to an insufficient understanding of the technologies involved. An appreciation for technical detail, however, is necessary to arrive at realistic assessments of any new technology, and it is particularly consequential where nuclear weapons are concerned. Some have warned that advances in AI could erode the fundamental logic of nuclear deterrence by enabling counter-force attacks against heretofore concealed and mobile nuclear forces. Such secure second-strike forces are considered the backbone of effective nuclear deterrence because they assure retaliation. Were they to become vulnerable to preemption, nuclear weapons would lose their deterrent value.
We, however, view this concern as overstated. Because of AI's inherent limitations, a splendid counter-force capability will remain out of reach. While emerging technologies and nuclear force postures might interact to alter the dynamics of strategic competition, AI in itself will not diminish the deterrent value of today's nuclear forces.
Understanding the Stability Concern
The exponential growth of sensors and data sources across all warfighting domains has left today's analysts facing an overabundance of information. The Defense Department's Project Maven was born of this realization in 2017. With the help of AI, then-Deputy Secretary of Defense Robert Work sought to "reduce the human factors burden of [full-motion video] analysis, increase actionable intelligence, and enhance military decision-making" in support of the counter-ISIL campaign. Hans Vreeland, a former Marine artillery officer involved in the campaign, recently explained the potential of AI in facilitating targeted strikes for counterinsurgency operations, arguing that AI should be "recognized and leveraged as a force multiplier, enabling U.S. forces to do more at higher operational tempo with fewer resources and less uncertainty." Such a magic bullet would surely be a welcome addition to any commander's arsenal.
Yet, some strategists warn that the same AI-infused capabilities that allow for more prompt and precise strikes against time-critical conventional targets could also undermine deterrence stability and increase the risk of nuclear use. Specifically, AI-driven improvements to intelligence, surveillance, and reconnaissance (ISR) would threaten the survivability of heretofore secure second-strike nuclear forces by providing technologically advanced nations with the ability to find, identify, track, and destroy their adversaries' mobile and concealed launch platforms. Transporter-erector launchers and ballistic missile submarines, traditionally used by nuclear powers to enhance the survivability of their deterrent forces, would be at greater risk. A country that acquired such an exquisite counter-force capability could not only hope to limit damage in case of a spiraling nuclear crisis but also negate its adversaries' nuclear deterrents in one swift blow. Such an ability would undermine the nuclear deterrence calculus, whereby the costs of imminent nuclear retaliation far outweigh any conceivable gains from aggression.
These expectations are exaggerated. During the 1991 Gulf War, U.S.-led coalition forces struggled to find, fix, and finish Iraqi Scud launchers despite overwhelming air and information superiority. Elusive, time-critical targets still present a problem today. Against a nuclear-armed adversary, such poor performance would prove disastrous. The prospect of even one enemy warhead surviving would give pause to any decision-maker contemplating a preemptive counter-force strike. This, after all, is why nuclear weapons are such powerful deterrents, and why states that possess them go to great lengths to protect these assets. While some worry that AI could achieve near-perfect performance and thereby enable an effective counter-force capability, inherent technological limitations will prevent it from doing so for the foreseeable future. AI may bring modest improvements in certain areas, but it cannot fundamentally alter the calculus that underpins deterrence by punishment.
Enduring Obstacles
The limitations AI faces are twofold: poor data and the inability of even state-of-the-art algorithms to make up for poor data. Misguided beliefs about what AI can and cannot accomplish further impede realistic assessments.
The data used for training and operationalizing automated image-recognition algorithms suffers from multiple shortcomings. Training an AI to recognize objects of interest among other objects requires prelabeled datasets with both positive and negative examples. While pictures of commercial trucks are abundant, far fewer ground-truth pictures of mobile missile launchers are available. Beyond the risk that the available ground-truth pictures fail to represent all launcher models, this data imbalance is itself consequential. To increase its accuracy on training data that includes far fewer launchers than images of other vehicles, the AI would be incentivized to produce false negatives by misclassifying mobile launchers as non-launcher vehicles. Synthetic variations of missile-launcher images, such as manually warped copies, could be included to catch launchers that would otherwise go undetected. This would increase the number of false positives, however, because trucks that resemble the synthetic launchers would now be misclassified.
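To make the imbalance point concrete, here is a minimal sketch with hypothetical numbers: a degenerate classifier that never predicts "launcher" can post impressive accuracy on a truck-heavy dataset while missing every launcher, which is exactly the false-negative incentive described above.

```python
# Toy illustration of class imbalance (all numbers are made up).
# With 990 trucks for every 10 launchers, a model that never
# predicts "launcher" still scores 99 percent accuracy.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def recall(preds, labels, positive=1):
    hits = sum(p == positive and y == positive for p, y in zip(preds, labels))
    return hits / sum(y == positive for y in labels)

labels = [0] * 990 + [1] * 10        # 0 = truck, 1 = mobile launcher
always_truck = [0] * len(labels)     # degenerate majority-class model

print(f"accuracy: {accuracy(always_truck, labels):.1%}")       # 99.0%
print(f"launcher recall: {recall(always_truck, labels):.0%}")  # 0%
```

Accuracy alone is therefore a misleading metric for rare-target detection; recall on the positive class exposes the failure.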
Moreover, images are a poor representation of reality. Whereas humans can infer the function of an object from its external characteristics, AI still struggles to do so. This is less of an issue where an object's form is meant to convey its function, as in handwriting or speech recognition. But a vehicle's structure does not necessarily reveal its function, a problem for an AI tasked with differentiating between vehicles that carry and launch nuclear-armed ballistic missiles and those that do not. Pixelated, two-dimensional images are a poor representation not only of a vehicle's function but also of the three-dimensional object itself. Even though resolution can be increased and a three-dimensional representation constructed from images taken from different angles, this introduces the curse of dimensionality: with greater resolution and dimensional complexity, the number of discernible features increases, requiring exponentially more memory and running time for an AI to learn and analyze. AI's inability to discard unimportant features further makes similar pictures appear increasingly dissimilar, and vice versa.
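The scaling behind this can be sketched in a few lines. The resolutions and the grid framing below are illustrative assumptions, not a claim about any particular sensor: raw feature counts grow with the square of resolution for a flat image and with the cube for a voxelized three-dimensional reconstruction, so each step up in fidelity multiplies the learning problem rather than adding to it.

```python
# Raw input-feature growth for 2-D images vs. 3-D reconstructions
# (single channel; resolutions are arbitrary examples).

def feature_count(resolution, dims=2, channels=1):
    """Number of raw features in a channels x resolution^dims grid."""
    return channels * resolution ** dims

for res in (32, 64, 128, 256):
    print(f"{res:>4}  2-D: {feature_count(res, 2):>12,}"
          f"  3-D: {feature_count(res, 3):>16,}")
```

Doubling resolution quadruples the 2-D feature count but multiplies the 3-D count eightfold, and the number of training samples needed to cover that feature space grows far faster still.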
Could clever, high-powered AI compensate for these data deficiencies? Machine-learning theory suggests not. When designing algorithms, AI researchers face trade-offs. Data describing real-world problems, particularly those that pertain to human interactions, are always incomplete and imperfect. Accordingly, researchers must specify which patterns an AI is to learn. Intuitively, it might seem reasonable for an algorithm to learn all patterns present in a particular data set, but many of these patterns will represent random events and noise or be the product of selection bias. Such an AI could also fail catastrophically when encountering new data. In turn, if an algorithm learns only the strongest patterns, it may perform poorly, although not catastrophically, on any one image. Consequently, attempts to improve an AI's performance by reducing bias generally increase variance, and vice versa. Additionally, while any tool can serve as a hammer, few will do a very good job at hammering. Likewise, no one algorithm can outperform all others on all possible problem sets. Neural networks are not universally better than decision trees, for example. Because there is an infinite number of design choices, there is no way to identify the best possible algorithm. And with new data, a heretofore near-perfect algorithm might no longer be the best choice. Invariably, some error is irreducible.
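The bias-variance trade-off can be illustrated with a toy regression problem. The two learners below are deliberate caricatures, a memorizer (all variance) and a constant predictor (all bias), not models anyone would field, but they show why neither extreme drives error to zero on noisy data.

```python
import random

random.seed(0)

def target(x):
    return 2.0 * x  # the true signal

def noisy_sample(n):
    xs = [random.uniform(0, 1) for _ in range(n)]
    return [(x, target(x) + random.gauss(0, 1.0)) for x in xs]

train = noisy_sample(30)

def memorizer(x):
    # High variance: return the label of the nearest training point,
    # reproducing the training noise exactly.
    return min(train, key=lambda p: abs(p[0] - x))[1]

mean_y = sum(y for _, y in train) / len(train)
def constant(_x):
    # High bias: ignore the input, always predict the training mean.
    return mean_y

holdout = noisy_sample(200)
def mse(model):
    return sum((model(x) - y) ** 2 for x, y in holdout) / len(holdout)

print(f"memorizer holdout MSE: {mse(memorizer):.2f}")
print(f"constant  holdout MSE: {mse(constant):.2f}")
```

The memorizer achieves zero error on its own training data yet carries the noise into every new prediction, while the constant model is stable but blind to the signal; real algorithms sit somewhere between the two, and some error always remains.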
Nevertheless, tailoring improves AI performance. Regarding image recognition, intimate knowledge of the object to be detected allows for greater specification, yielding higher accuracy. On the counter-force problem, however, a priori knowledge is not easily obtained; it is likely to be neither clean nor concise. As discussed above, because function cannot be fully represented in an image, it cannot be fully learned by the AI. Moreover, like most military affairs, counter-force is a contested and dynamic problem. Adversaries will attempt to conceal their mobile-missile launchers or change their design to fool AI-enabled ISR capabilities. They could also try to poison AI training data to induce misclassification. This is particularly problematic because of the one-off nature of a counter-force strike, which prevents validating AI performance with real-world experience. Simulations can only get AI so far.
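A stylized sketch of what such data poisoning could look like, using a nearest-centroid classifier as a stand-in for real image models and entirely made-up feature values: injecting launcher-like examples falsely labeled as trucks shifts the learned decision boundary enough to hide a marginal launcher.

```python
# Label-poisoning sketch (hypothetical one-dimensional feature,
# e.g., vehicle length: trucks cluster at 2.0, launchers at 8.0).

def centroid(values):
    return sum(values) / len(values)

def train(data):
    # data: list of (feature, label) pairs; returns per-class centroids
    return {c: centroid([x for x, lbl in data if lbl == c]) for c in (0, 1)}

def classify(x, centroids):
    return min(centroids, key=lambda c: abs(x - centroids[c]))

clean = [(2.0, 0)] * 50 + [(8.0, 1)] * 50

# Poisoning: 50 launcher-like vehicles deliberately mislabeled as trucks,
# dragging the "truck" centroid toward the launcher region.
poisoned = clean + [(8.0, 0)] * 50

marginal_launcher = 6.0  # a smaller launcher variant
print(classify(marginal_launcher, train(clean)))     # 1 (detected)
print(classify(marginal_launcher, train(poisoned)))  # 0 (missed)
```

Because a counter-force strike is a one-off event, such a corrupted model could not be caught by real-world validation before it mattered.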
When it comes to AI, near-perfect performance is tied inextricably to operating in environments that are predictable, even controlled. The counter-force challenge is anything but. Facing such a complex and dynamic problem set, AI would be constrained to lower levels of confidence. Sensor platforms would provide an abundance of imagery and modern precision-guided munitions could be expected to eliminate designated targets, but automated image recognition could not guarantee the detection of all relevant targets.
The Pitfalls of a Faulty Paradigm
Poor data and technological constraints limit AI's impact on the fundamental logic of nuclear deterrence, as well as on other problem sets requiring near-perfect levels of confidence. So why is the fuzzy buzz not giving way to a more measured debate about AI's specific merits and limitations?
The military-technological innovations of the past derived their power principally from the largely familiar and relatively intuitive physical world. Once the mechanics of aviation and satellite communication were understood, they were easily scaled up to enable the awesome capabilities militaries have at their disposal today. What many fail to appreciate, however, is how fundamentally differently the world of AI operates and what enduring obstacles it contains. This unfamiliarity with the rules of the computational world sustains the application of an ill-fitting innovation paradigm to AI.
As discussed above, when problems grow more complex, AI's time and resource demands increase exponentially. The traveling salesman problem provides a simple illustration: given a list of cities and the distances between each pair of them, what is the shortest possible route that visits each city and returns to the origin? A desktop computer can answer this question for ten cities (and 3,628,800 possible routes) in mere seconds. With just 60 cities, the number of possible routes exceeds the number of atoms in the known universe (roughly 10^80). Once the list reaches 120 destinations, a supercomputer with as many processors as there are atoms in the universe, each capable of testing a trillion routes per second, would have to run longer than the age of the universe to solve the problem. Thus, in contrast to technological innovations rooted in the physical world, there is often no straightforward way to scale up AI solutions.
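A brute-force sketch of the traveling salesman problem makes the blow-up tangible. The coordinates below are arbitrary; the point is that even after fixing the start city, (n - 1)! orderings remain to be tested, which is why the approach collapses long before 60 cities.

```python
import itertools
import math

def route_length(route, coords):
    legs = zip(route, route[1:] + route[:1])  # ...and return to origin
    return sum(math.dist(coords[a], coords[b]) for a, b in legs)

def shortest_route(coords):
    cities = list(coords)
    first, rest = cities[0], cities[1:]
    # Fixing the start city still leaves (n - 1)! orderings to test.
    candidates = ([first] + list(p) for p in itertools.permutations(rest))
    return min(candidates, key=lambda r: route_length(r, coords))

coords = {i: (math.cos(i), math.sin(i)) for i in range(8)}  # 8 arbitrary points
best = shortest_route(coords)
print(f"shortest route: {best}, length {route_length(best, coords):.3f}")
print(f"routes tested: {math.factorial(len(coords) - 1):,}")  # 5,040
```

Eight cities already mean 5,040 candidate routes; each additional city multiplies the count by the new total, so no hardware improvement rooted in the physical world keeps pace.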
Moreover, machine intelligence is much different from human intelligence. When confronted with impressive AI results, some tend to associate machine performance with human-level intelligence without acknowledging that these results were obtained in narrowly defined problem sets. Unlike humans, AI lacks the capacity for conjecture and criticism to deal flexibly with unfamiliar information. It also remains incapable of learning rich, higher-level concepts from few reference points, so that it cannot easily transfer knowledge from one area to another. Rather, there is a high likelihood of catastrophic failure when AI is exposed to a new environment.
Understanding AIs Actual Impact on Deterrence and Stability
What should we make of the real advantages AI promises and the real limitations it will remain constrained by? As Work, Vreeland, and others have persuasively argued, AI could generate significant advantages in a variety of contexts. While the stakes are high in all military operations, nuclear weapons are particularly consequential. But because AI cannot reach near-perfect levels of confidence in dynamic environments, it is unlikely to solve the counter-force problem and imperil nuclear deterrence.
What is less clear at this time is how AI, specifically automated image recognition, will interact with other emerging technologies, doctrinal innovations, and changes in the international security environment. AI could arguably enhance nations' confidence in their nuclear early-warning systems and lessen pressures for early nuclear use in a conflict, for example, or improve verification for arms control and nonproliferation.
On the other hand, situations might arise in which an imperfect but marginally AI-improved counter-force capability is considered good enough to order a strike against an adversary's nuclear forces, especially when paired with overconfidence in homeland missile defense. States with relatively small and vulnerable arsenals, in particular, would find it hard to credit assurances that AI would not be used to target their nuclear weapons. Their efforts to hedge against improving counter-force capabilities might include posture adjustments, such as pre-delegating launch authority or co-locating operational warheads with missile units, which could increase first-strike instability and heighten the risk of deliberate, inadvertent, and accidental nuclear use. Accordingly, future instabilities will be less a product of the independent effects of AI than of the perennial credibility problems associated with deterrence and reassurance in a world of ever-evolving capabilities.
Conclusion
As new technologies bring new forms of strategic competition, the policy debate must become better informed about technical matters. There is no better illustration of this requirement than the debate about AI, in which a fundamental misunderstanding of technical matters underpins a serious misjudgment of AI's impact on stability. While faulty paradigms sustain misplaced expectations about AI's impact, poor data and technological constraints curtail its effect on the fundamental logic of nuclear deterrence. The high demands of counter-force and the inability of AI to provide optimal solutions for extremely complex problems will remain irreconcilable for the foreseeable future.
Rafael Loss (@_RafaelLoss) works at the Center for Global Security Research at Lawrence Livermore National Laboratory. He was a Fulbright fellow at the Fletcher School of Law and Diplomacy at Tufts University and recently participated in the Center for Strategic and International Studies' Nuclear Scholars Initiative.
Joseph Johnson is a Ph.D. candidate in computer science at Brigham Young University. His research focuses on novel applications of game theory and network theory in order to enhance wargaming. He deployed to Iraq with the Army National Guard in 2003 and worked at the Center for Global Security Research at Lawrence Livermore National Laboratory.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The views and opinions expressed herein do not necessarily state or reflect those of Lawrence Livermore National Security, LLC, the United States government, or any other organization. LLNL-TR-779058.
Image: U.S. Air Force (Photo by Senior Airman Thomas Barley)
Read the original post:
Will Artificial Intelligence Imperil Nuclear Deterrence? - War on the Rocks