trusted computing artificial intelligence (AI) information warfare – Military & Aerospace Electronics

ARLINGTON, Va. U.S. military researchers are reaching out to industry to prevent enemy attempts to corrupt or spoof artificial intelligence (AI) systems by subtly altering or manipulating information the AI system uses to learn, develop, and mature.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) issued a solicitation on Wednesday (DARPA-PA-19-03-09) for the Reverse Engineering of Deceptions (RED) project, which aims at reverse engineering the toolchains of information deception attacks.

A deceptive information attack is an enemy attempt subtly to alter or manipulate information used by a human or a machine learning system, so as to shift a computational outcome in the adversary's favor.

Machine learning techniques are susceptible to enemy information warfare attacks at training time and when deployed. Similarly, humans are susceptible to being deceived by falsified images, video, audio, and text. Deception plays an increasingly central role in information warfare attacks.
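A training-time deception of the kind described above is often called data poisoning: the attacker quietly relabels or alters a few training examples so the learned model misbehaves on inputs the attacker cares about. The toy sketch below (not DARPA's method; all numbers and the nearest-mean classifier are invented for illustration) shows how flipping just two training labels shifts a learned decision threshold enough to misclassify a borderline input.

```python
# Toy illustration of a training-time "label flipping" poisoning attack.
# The classifier and all data values are invented for this sketch.

def nearest_mean_threshold(samples):
    """Learn a 1-D decision threshold as the midpoint of the two class means."""
    class0 = [x for x, label in samples if label == 0]
    class1 = [x for x, label in samples if label == 1]
    mean0 = sum(class0) / len(class0)
    mean1 = sum(class1) / len(class1)
    return (mean0 + mean1) / 2

clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0),
         (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1)]

# The attacker subtly relabels two low-valued points as class 1 --
# a small manipulation of the data the system uses to learn.
poisoned = [(x, 1 if x in (0.1, 0.2) else label) for x, label in clean]

clean_threshold = nearest_mean_threshold(clean)        # midpoint near 0.5
poisoned_threshold = nearest_mean_threshold(poisoned)  # dragged below 0.5

probe = 0.47  # a borderline input whose true class is 0
print(probe > clean_threshold)     # clean model: classified correctly
print(probe > poisoned_threshold)  # poisoned model: now misclassified
```

The point of the sketch is that the poisoned training set looks almost identical to the clean one; only the downstream decisions reveal the manipulation, which is why RED focuses on recovering the attacker's toolchain rather than eyeballing the data.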

Related: Research, applications, talent, training, and cooperation frame report on artificial intelligence (AI)

The Reverse Engineering of Deceptions (RED) effort will develop techniques that automatically reverse engineer the toolchains behind attacks such as multimedia falsification, enemy machine learning attacks, or other information deception attacks.

Recovering the tools and processes for such attacks provides information that may help identify an enemy. RED will seek to develop techniques that identify attack toolchains automatically, and develop scalable databases of attack toolchains.

RED Phase 1 will produce trusted-computing algorithms to identify the toolchains behind information deception attacks. The project's second phase will develop technologies for scalable databases of attack toolchains to support attribution and defense.

Related: Air Force researchers ask industry for SWaP-constrained embedded computing for artificial intelligence (AI)

The project also seeks to develop techniques that require little or no a priori knowledge of specific deception toolchains; automatically cluster attack examples together to discover families of deception toolchains; generalize across several information deception scenarios, such as enemy machine learning attacks and media manipulation; learn unique signatures from just a few attacks; and scale to internet-scale volumes of information.
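One of the goals above, automatically clustering attack examples into families, can be pictured with a minimal sketch. Assume each attack sample has been reduced to a small numeric fingerprint (for instance, noise or compression-residue statistics); the fingerprint values, sample names, and the greedy single-link clustering below are all invented for illustration, not part of the RED solicitation.

```python
# Hypothetical fingerprints for five attacked media samples: each is a
# small feature vector. Samples produced by the same deception toolchain
# are assumed to leave nearby fingerprints.
samples = {
    "img_a": (0.91, 0.10, 0.33),
    "img_b": (0.90, 0.12, 0.31),  # near img_a: likely the same toolchain
    "img_c": (0.15, 0.80, 0.55),
    "img_d": (0.14, 0.82, 0.52),  # near img_c
    "img_e": (0.50, 0.50, 0.50),  # isolated sample, its own family
}

def distance(u, v):
    """Euclidean distance between two fingerprint vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def cluster(samples, radius=0.1):
    """Greedy single-link clustering: a sample joins the first cluster
    containing a member within `radius` of it, else starts a new one."""
    clusters = []
    for name, vec in samples.items():
        for members in clusters:
            if any(distance(vec, samples[m]) <= radius for m in members):
                members.append(name)
                break
        else:
            clusters.append([name])
    return clusters

families = cluster(samples)
print(families)  # three toolchain families emerge from five samples
```

In this sketch the two tight pairs collapse into two families and the isolated sample stands alone; a production system would face internet-scale volumes and adversaries actively varying their tools, which is what makes the RED goals hard.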

Interested companies should upload eight-page proposals no later than 30 July 2020 to the DARPA BAA Website at https://baa.darpa.mil/. Email questions or concerns to Matt Turek, the DARPA RED program manager, at RED@darpa.mil.

More information is online at https://beta.sam.gov/opp/f108cad02f824285af5ca85e1f7481f4/view.
