This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (parts a. and b.), which asks how artificial intelligence will affect the character and/or the nature of war, and what might happen if the United States fails to develop robust AI capabilities that address national security issues.
In the 1983 film WarGames, Professor Falken bursts into the war room at NORAD to warn, "What you see on these screens up here is a fantasy: a computer-enhanced hallucination. Those blips are not real missiles, they're phantoms!" The Soviet nuclear attack onscreen, he explained, was instead a simulation created by WOPR, an artificial intelligence of Falken's own invention.
WOPR's simulation now seems more prescient than most other 20th-century predictions about how artificial intelligence, or AI, would change the nature of warfare. Contrary to the promise that AI would deliver an omniscient view of everything happening in the battlespace (the goal of U.S. military planners for decades), it now appears that the technologies of misdirection are winning.
Military deception, in short, could prove to be AI's killer app.
At the turn of this century, Admiral Bill Owens predicted that U.S. commanders would soon be able to "see everything of military significance in the combat zone." In the 1990s, one military leader echoed that view, promising that in the first quarter of the 21st century, it would become possible to "find, fix or track, and target anything that moves on the surface of the earth." Two decades and considerable progress in most areas of information technology have failed to realize these visions, but predictions that perfect battlespace knowledge is a near-term inevitability persist. A recent Foreign Affairs essay contends that in a world that is becoming "one giant sensor," hiding and penetrating, never easy in warfare, will be far more difficult, if not impossible. It claims that once additional technologies such as quantum sensors are fielded, there will be "nowhere to hide."
Conventional wisdom has long held that advances in information technology would inevitably advantage finders at the expense of hiders. But that view seems to have been based more on wishful thinking than on technical assessment. The immense potential of AI for those who want to thwart would-be finders could offset, if not exceed, its utility for enabling them. Finders, in turn, will have to contend both with understanding reality and with recognizing what is fake, in a world where faking is much easier.
The value of military deception is the subject of one of the oldest and most contentious debates among strategists. Sun Tzu famously decreed that "all warfare is based on deception," but Carl von Clausewitz dismissed military deception as a desperate measure, a last resort for those who had run out of better options. In theory, military deception is extremely attractive. One influential study noted that, all things being equal, the advantage in a deception lies with the deceiver, because he knows the truth and can assume that the adversary is eagerly searching for its indicators.
If deception is so advantageous, why doesn't it dominate the practice of warfare already? A major reason is that, historically, military deception was planned and carried out in a haphazard, unsystematic way. During World War II, for example, British deception planners engaged in their work much in the manner of college students perpetrating a hoax, yet they still accomplished feats such as convincing the Germans to expect the Allied invasion of France in the Pas-de-Calais rather than Normandy. Despite such triumphs, military commanders have often hesitated to gamble on the uncertain risk-benefit tradeoff of deception plans, as these require investments of effort and resources that could otherwise be applied against the enemy in a more direct fashion. If the enemy sees through the deception, it ends up being worse than useless.
Deception via Algorithm
What's new is that researchers have invented machine learning systems that can optimize deception. The disturbing new phenomenon called "deepfakes" is the most prominent example. These are synthetic artifacts (such as images) created by computer systems that compete with themselves and self-improve. In these generative adversarial networks, a generator produces fake examples and a discriminator attempts to identify them; each refines itself based on the other's outputs. This technique produces photorealistic deepfakes of imaginary people, but it can be adapted to generate seemingly real sensor signatures of critical military targets.
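The generator-versus-discriminator loop described above can be sketched in miniature. The following toy example is purely illustrative (the "signature" is a one-dimensional Gaussian, and every parameter and learning rate is an assumption, not something from the article): a one-parameter generator learns to shift its fakes until a logistic discriminator can no longer tell them from "real" signals centered at 4.0.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 4.0  # hypothetical "true" signal the generator tries to mimic

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_toy_gan(steps=4000, batch=64, lr=0.05):
    a = 0.0          # generator parameter: fake sample = a + noise
    w, b = 1.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)
    for _ in range(steps):
        real = rng.normal(REAL_MEAN, 1.0, batch)
        fake = a + rng.normal(0.0, 1.0, batch)
        # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
        g_r = sigmoid(w * real + b) - 1.0  # gradient term for real samples
        g_f = sigmoid(w * fake + b)        # gradient term for fake samples
        w -= lr * np.mean(g_r * real + g_f * fake)
        b -= lr * np.mean(g_r + g_f)
        # Generator step: minimize -log D(fake) (non-saturating loss),
        # i.e., nudge a so the discriminator scores fakes as real.
        fake = a + rng.normal(0.0, 1.0, batch)
        a -= lr * np.mean((sigmoid(w * fake + b) - 1.0) * w)
    return a

a = train_toy_gan()
print(f"generator mean after training: {a:.2f} (real signals center on {REAL_MEAN})")
```

At equilibrium the two distributions overlap, the discriminator's weight decays toward zero, and the generator's output is statistically indistinguishable from the real data, which is exactly the property a deceiver wants in a fabricated sensor signature.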
Generative adversarial networks can also produce novel forms of disinformation. Take, for instance, the image of unrecognizable objects that went viral earlier this year (fig. 1). The image resembles an indoor scene, but upon closer inspection it contains no recognizable items. It is neither an adversarial example (an image of something that machine learning systems misidentify) nor a deepfake, though it was created using a similar technique. The picture does not make sense to either humans or machines.
This kind of ambiguity-increasing deception could be a boon for militaries with something to hide. Could they design such nonsensical images with AI and paint them on the battlespace using decoys, fake signal traffic, and careful arrangements of genuine hardware? This approach could render multi-billion-dollar sensor systems useless, because the data they collect would be incomprehensible to both AI and human analysts. Proposed schemes for deepfake detection would probably be of little help, since these require knowledge of real examples in order to pinpoint subtle statistical differences in the fakes. Adversaries will minimize their opponents' opportunities to collect real examples, for instance by introducing spurious deepfake artifacts into their genuine signals traffic.
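The dependence of detection on genuine examples can be made concrete with a minimal, entirely hypothetical sketch (the scalar "feature," its distributions, and the threshold are all assumptions): a statistical detector that flags suspect batches only by first calibrating a baseline on a corpus of real signatures. Deny it that corpus and it has nothing to compare against.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar feature of a sensor signature:
# genuine signals ~ N(0, 1); fakes carry a subtle +0.8 shift.
real_corpus = rng.normal(0.0, 1.0, 500)  # the detector's calibration data
mu, sigma = real_corpus.mean(), real_corpus.std()

def batch_looks_fake(batch, k=3.0):
    """Flag a batch whose mean feature lies more than k standard errors
    from the baseline learned from real examples."""
    stderr = sigma / np.sqrt(len(batch))
    return bool(abs(np.mean(batch) - mu) > k * stderr)

real_batch = rng.normal(0.0, 1.0, 100)
fake_batch = rng.normal(0.8, 1.0, 100)
print(batch_looks_fake(real_batch), batch_looks_fake(fake_batch))
```

Everything here hinges on `real_corpus`: with it, the subtle shift is detectable in aggregate; without it, `mu` and `sigma` cannot be estimated and no threshold can be set, which is the vulnerability the article describes adversaries exploiting.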
Rather than lifting the fog of war, AI and machine learning may enable the creation of fog of war machines: automated deception planners designed to exacerbate knowledge quality problems.
Figure 1: This bizarre image generated by a generative adversarial network resembles a real scene at first glance but contains no recognizable objects.
Deception via Sensors and Inadequate Algorithms
Meanwhile, the combined use of AI and sensors to enhance situational awareness could make new kinds of military deception possible. AI systems will be fed data by a huge number of sensors: everything from space-based synthetic-aperture radar to cameras on drones to selfies posted on social media. Most of that data will be irrelevant, noisy, or disinformation. Detecting many kinds of adversary targets is hard, and indications of such detection will often be rare and ambiguous. AI and machine learning will be essential to ferret them out fast enough and to use the subtle clues received by multiple sensors to estimate the locations of potential targets.
Using AI to see everything requires solving a multisource-multitarget information fusion problem, that is, combining information collected from multiple sources to estimate the tracks of multiple targets, on an unprecedented scale. Unfortunately, designing algorithms to do this is far from a solved problem, and there are theoretical reasons to believe it will be hard to go far beyond the much-discussed limitations of deep learning. The systems used today, which are only just starting to incorporate machine learning, work fairly well in permissive environments with low noise and limited clutter, but their performance degrades rapidly in more challenging environments. While AI should improve the robustness of multisource-multitarget information fusion, any means of information fusion is limited by the assumptions built into it, and wrong assumptions will lead to wrong conclusions, even in the hands of human-machine teams or superintelligent AI.
Moreover, some analysts, backed by some empirical evidence, contend that the approaches typically used today for multisource-multitarget information fusion are unsound. That means these algorithms may not estimate the correct target state even if they are implemented perfectly and fed high-quality data. The intrinsic difficulty of information fusion demands the use of approximation techniques that will sometimes find wrong answers. This could create a potentially rich attack surface for adversaries. Fog of war machines might be able to exploit the flaws in these approximation algorithms to deceive would-be finders.
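How a wrong built-in assumption poisons fusion can be shown with a toy example (all sensors, variances, and numbers here are hypothetical). Inverse-variance weighting is the textbook way to fuse two independent estimates, and it is optimal only if the assumed noise model is honest. If an adversary quietly biases the sensor the fuser believes is most precise, that misplaced trust hands the bias most of the weight:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_POS = 10.0  # hypothetical true target position

def fuse(readings, variances):
    """Inverse-variance weighted fusion: optimal only when the assumed
    variances actually describe the sensors' errors."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(readings)) / np.sum(w))

# Sensor A: honest but noisy (std 2.0).
# Sensor B: precise on paper (std 0.5), but spoofed with a +5.0 bias.
a = TRUE_POS + rng.normal(0.0, 2.0)
b = TRUE_POS + 5.0 + rng.normal(0.0, 0.5)

# The fuser still trusts B's small nominal variance, so B gets ~94% weight.
est = fuse([a, b], [2.0**2, 0.5**2])
print(f"fused estimate: {est:.2f} (true position {TRUE_POS})")
```

The fused estimate inherits almost the entire injected bias: the algorithm is working exactly as designed, and the error comes purely from the violated assumption, which is the kind of flaw a fog of war machine could target.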
Neither Offense- nor Defense-Dominant
Thus, AI seems poised to increase the advantages hiders have always enjoyed in military deception. Using data from their own operations, they can model their own forces comprehensively and then use this knowledge to build a fog of war machine. Finders, meanwhile, are forced to rely upon noisy, incomplete, and possibly mendacious data to construct their own tracking algorithms.
If technological progress boosts deception, it will have unpredictable effects. In some circumstances, improved deception benefits attackers; in others, it bolsters defenders. And while effective deception can impel an attacker to misdirect his blows, it does nothing to shield the defender from those that do land. Rather than shifting the offense-defense balance, AI might inaugurate something qualitatively different: a deception-dominant world in which countries can no longer gauge that balance.
That's a formula for a more jittery world. Even if AI-enhanced military intelligence, surveillance, and reconnaissance proves effective, states aware that they don't know what the enemy is hiding are likely to feel insecure. For example, even earnest, mutual efforts to increase transparency and build trust would be difficult, because neither side could discount the possibility that its adversary was deceiving it with the high-tech equivalent of a Potemkin village. That implies that more vigilance, more uncertainty, more resource consumption, and more readiness fatigue will follow. As Paul Bracken observed, the thing about deception is that it is hard to prove it will really work, but technology ensures that we will increasingly need to assume that it will.
Edward Geist is a policy researcher and Marjory Blumenthal is a senior policy researcher at the RAND Corporation. Geist received a Smith Richardson Strategy and Policy Fellowship to write a book on artificial intelligence and nuclear warfare.
Image: U.S. Navy (Photo by Mass Communication Specialist 1st Class Carlos Gomez)
Military Deception: AI's Killer App? - War on the Rocks