This past year, revelations have come to light about the plight of Muslim Uighurs in China: massive-scale detentions and human rights violations against this ethnic minority. Last month, additional classified Chinese government cables revealed that this policy of oppression was powered by artificial intelligence (AI): algorithms fueled by massive data collection on the Chinese population were used to make decisions regarding the detention and treatment of individuals. China failing to uphold the fundamental and inalienable human rights of its population is not new, and indeed, tyranny is as old as history. But the Chinese government is harnessing new technology to do wrong more efficiently.
Concerns about how governments can leverage AI also extend to the waging of war. Two major concerns about the application of AI to warfare are ethics (is it the right thing to do?) and safety (will civilians or friendly forces be harmed, or will the use of AI lead to accidental escalation and conflict?). The United States, Russia, and China have all signaled that AI is a transformative technology central to their national security strategies, and their militaries plan to move ahead quickly with military applications of AI. Should this development raise the same kinds of concerns as China's use of AI against its own population? In an era when Russia targets hospitals in Syria with airstrikes in blatant violation of international law, and indeed of basic humanity, could AI be used unethically to conduct war crimes more efficiently? Will AI in war endanger innocent civilians as well as protected entities such as hospitals?
To be clear, any technology can be misused in war. A soldier could use a rock to commit a war crime. A simple, low-tech land or sea mine can be used indiscriminately and endanger civilians if it is used in the wrong way. A transformative technology like AI can be used responsibly and safely, or it could fuel a much faster race to the bottom.
The United States has declared it will take the high road with military applications of AI. For example, the Department of Defense AI strategy includes AI ethics and safety as one of its fundamental lines of effort. And this is not an empty promise: The Defense Innovation Board just released its principles for the ethical military use of AI, the culmination of a year-long, deliberate initiative that drew in AI experts, ethicists, and the general public. Through this laudable effort, the United States has shown leadership in the responsible and principled use of this technology in war.
But there is something missing: The AI strategy's commitment was to both ethics and safety, yet to date the Department of Defense has not shown a similarly concerted focus on AI safety. Despite commitments made to the international community and in its own AI strategy, the Pentagon has done little to act on promises to address safety risks unique to the technology of AI or to use AI to enhance safety in conflict. My recent research has shown that this inaction creates risks for those on the battlefield, civilians and combatants alike, and increases the likelihood of accidental escalation and conflict. In an era when the technology of AI can so easily be exploited by governments to violate the principles of humanity, the United States can demonstrate that the high road is possible, but to do so it needs to keep its promises: to address safety risks intrinsic to AI and to search for ways to use AI for good.
Promise 1: Addressing Safety Risks Unique to AI
In its AI strategy, the Department of Defense made a promise to address the safety risks unique to AI technology. This reflects America's long record of commitment to safety and adherence to the international law of armed conflict. For example, all military systems are subject to test and evaluation activities to ensure that they are reliable and safe, as well as legal reviews to ensure they are consistent with international humanitarian law (e.g., the Geneva Conventions). It is not surprising, therefore, that safety is prominent in the defense AI strategy.
Though a commendable intention, the strategy has not yet resulted in significant institutional steps to promote safety with regard to AI. The U.S. military has been busy supporting the Defense Innovation Board's development of AI ethics principles, with the Joint AI Center also emphasizing the critical role ethics plays in AI applications, yet the pursuit of safety (for example, avoiding civilian casualties, friendly fire, and inadvertent escalation) has not received the same sort of attention.
I acknowledge that a few steps are being taken toward promoting AI safety. For example, the Defense Advanced Research Projects Agency has a program working to develop explainable AI, to help address the challenges of machine learning as a black-box technology. Explainability will enhance AI safety: for example, by making it possible to explain why an AI application did something amiss in testing or operations, and to take corrective action. But such steps, while important, do not add up to a comprehensive approach that identifies and then systematically addresses AI safety risks. To that end, our most recent report draws on a risk management approach to AI safety: identifying risks, analyzing them, and then suggesting concrete actions to begin addressing them.
From this we see two types of safety risks: those associated with the technology of AI in general (e.g., fairness and bias, unpredictability and unexplainability, cyber security and tampering), and those associated with specific military applications of AI (e.g., lethal autonomous systems, decision aids). The first type will require the U.S. government, industry, and academia to work together to address existing challenges. The second type, being tied to specific military missions, is a military problem with a military solution: experimentation, research, and concept development to find ways to promote effectiveness along with safety.
Promise 2: AI for Good
A second promise made by the U.S. government was to use AI to better protect civilians and friendly forces, as first expressed in international discussions. The United Nations Convention on Certain Conventional Weapons, a forum that considers restrictions on the design and use of weapons in light of the requirements of international humanitarian law, has held discussions regarding lethal autonomous weapon systems since 2014. Over time, the topic of those discussions has informally broadened from purely autonomous systems to include the use of AI in weapon systems in general. As the State Department's senior advisor on civilian protection, I was a member of the U.S. delegation to the discussions on lethal autonomous weapon systems. The U.S. position paper in 2017 emphasized how, in contrast to the concerns of some over the legality of autonomous weapons, such weapons carried promise for upholding the law and better protecting civilians in war. This was a sincere position: Several of us on the delegation were also involved in drafting the U.S. executive order on civilian casualties, which contained a policy commitment to make serious efforts to reduce civilian casualties in U.S. military operations. The thoughtful use of AI and autonomy represented one way to meet that commitment.
The 2018 Department of Defense AI strategy contained a similar promise to use AI to better protect civilians in war. As described in the unclassified summary of the strategy, one of its main commitments was to lead internationally in military ethics and AI safety. This included the development of specific applications that would reduce the risk of civilian casualties.
That last commitment, made in both the strategy and in U.S. government position papers, is probably the one that draws the most skepticism. When Hollywood portrays AI and autonomous systems and the use of force, it is often to show machines running amok and killing innocents, as seen in the Terminator series of movies. But using AI for good in war is not a fanciful notion: At CNA, our analysis of real-world incidents shows specific areas where AI can be used for this purpose. We have worked with the U.S. military and others to better understand the reasons that civilian casualties occur and what measures can be taken to avoid them. Based on analysis of the underlying causes of over 1,000 incidents, we have identified a number of concrete ways that AI technologies could be used to better avoid civilian harm.
The Department of Defense could be a leader in this area, and it is easy to imagine other countries following a U.S. lead. For example, many countries lament the frequency of military attacks on hospitals in recent operations; a UN Security Council resolution to promote the protection of medical care in conflict passed unanimously in 2016. If the United States were to announce that it was leading an effort to use AI to better protect hospitals, other countries would likely be interested in cooperating with such an effort.
Safety Is Strategically Smart
Why is it a problem if the United States does not take concrete steps to emphasize AI safety? After all, current and former U.S. government leaders have observed that neither Russia nor China will slow down its AI efforts to address ethical or safety issues. Does it matter?
A focus on safety and care in the conduct of operations has served the United States well. During the second offset, Washington developed precision capabilities to help counter the Soviet Union's advantage in troop numbers. These developments then enabled the United States to take additional steps to promote safety in the form of reduced civilian casualties: developing and fielding new types of munitions for precision engagements with reduced collateral effects, developing intelligence capabilities to more accurately identify and locate military targets, and creating predictive tools to help estimate and avoid collateral damage. These steps had strategic as well as practical benefits, enhancing freedom of action and boosting the legitimacy of U.S. actions while reducing the civilian toll of recent operations. If AI is leveraged to help promote safety on the battlefield, it can yield similar strategic and practical benefits for the United States.
AI safety is also relevant to U.S. allies. Unlike its peer competitors, Russia and China, the United States almost always operates as part of a coalition. This is a significant advantage politically, numerically, and in terms of the additional capabilities that can be brought to bear. But allied cooperation, and the interoperability of partners within a coalition, will depend on what capabilities our allies are willing to adopt or to operate alongside in the same battlespace. It is important that the United States be able to convince would-be allies of both the effectiveness and the safety of its military AI.
AI and America's Future
The revelation that China used AI to violate human rights contrasts starkly with U.S. promises to take the high road in its military applications of AI. Eric Schmidt and Bob Work have warned that the United States could easily lose its leadership in AI if it does not act urgently. The leadership the United States has shown in AI ethics is commendable, but to fully be the leader it needs to be, for our national security and for our prosperity, America must lead the way on safety too. The opportunity to develop AI of unrivaled precision is historic. If it can build AI that is lethal, ethical, and safe, the United States will have an edge both in future warfare and in the larger climate of competition that surrounds it. Developing safer AI would once again show the world that there is no better friend and no worse enemy than the United States.
Dr. Larry Lewis is a principal research scientist at CNA, the project lead and primary author of many of DOD's Joint Lessons Learned studies, and the lead analyst and coauthor (with Dr. Sarah Sewall of Harvard University) of the Joint Civilian Casualty Study (JCCS). General Petraeus described JCCS as the first comprehensive assessment of the problem of civilian protection. The opinions expressed are Dr. Lewis's alone, not necessarily those of CNA or its sponsors.
Image: Wikimedia Commons
See the original post: AI Safety: Charting out the High Road - War on the Rocks