Artificial intelligence called threat to humanity, compared to nuclear weapons: Report

Artificial intelligence is revolutionizing warfare and espionage in ways similar to the invention of nuclear arms and ultimately could destroy humanity, according to a new government-sponsored study.

Advances in artificial intelligence, or AI, and a subset called machine learning are occurring much faster than expected and will provide U.S. military and intelligence services with powerful new high-technology warfare and spying capabilities, says a report by two AI experts produced for Harvard's Belfer Center.

The range of coming advanced AI weapons includes robot assassins, superfast cyber attack machines, driverless car bombs and swarms of small explosive kamikaze drones.

According to the report, "Artificial Intelligence and National Security," AI will dramatically augment autonomous weapons and espionage capabilities and will represent a key aspect of future military power.

The report also offers an alarming warning that artificial intelligence could spin out of control: "Speculative but plausible hypotheses suggest that General AI and especially superintelligence systems pose a potentially existential threat to humanity."

The 132-page report was written by Gregory C. Allen and Taniel Chan for the director of the Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community's research unit.

The study calls for policies designed to preserve American military and intelligence superiority, boost peaceful uses of AI, and address the dangers of accidental or adversarial attacks from automated systems.

The report predicts that AI will produce a revolution in both military and intelligence affairs comparable to the emergence of aircraft, noting unsuccessful diplomatic efforts in 1899 to ban the use of aircraft for military purposes.

"The applications of AI to warfare and espionage are likely to be as irresistible as aircraft," the report says. "Preventing expanded military use of AI is likely impossible."

Recent AI breakthroughs included a $35 computer that defeated a former Air Force pilot in an air combat simulator, and a program that beat a South Korean champion at Go, a chesslike board game.

AI is advancing rapidly, driven by the exponential expansion of computing power, the use of large data sets to train machine learning systems, and significant and rapidly increasing private sector investment.

Just as cyber weapons are being developed by both major powers and underdeveloped nations, automated weaponry such as aerial drones and ground robots likely will be deployed by foreign militaries.

"In the short term, advances in AI will likely allow more autonomous robotic support to warfighters, and accelerate the shift from manned to unmanned combat missions," the report says, noting that the Islamic State has begun using drones in attacks.

Over the long term, these capabilities will transform military power and warfare.

Russia is planning extensive automated weapons systems and, according to the report, aims to have 30 percent of its combat forces remotely controlled or autonomous by 2030.

Currently, the Pentagon has restricted the use of lethal autonomous systems.

Future threats could also come from swarms of small robots and drones.

"Imagine a low-cost drone with the range of a Canada Goose, a bird which can cover 1,500 miles in under 24 hours at an average speed of 60 miles per hour," the report said. "How would an aircraft carrier battle group respond to an attack from millions of aerial kamikaze explosive drones?"

AI also is likely to enable assassinations by robots that will be difficult to detect. "A small, autonomous robot could infiltrate a target's home, inject the target with a lethal dose of poison, and leave undetected," the report said. "Alternatively, automatic sniping robots could assassinate targets from afar."

Terrorists also are expected to develop precision-guided improvised explosive devices that can transit long distances autonomously, such as self-driving car bombs.

AI also could be used in deadly cyber attacks, such as hacking cars and forcing them to crash, and advanced AI capabilities will enhance cyber warfare by overwhelming human operators.

Robots also will be able to inject poisoned data into large data sets in ways that could create false images for warfighters looking to distinguish between enemy and friendly aircraft, naval systems or ground weapons.
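The mechanics of that kind of data poisoning can be shown with a toy example. The sketch below is a minimal illustration, not anything drawn from the report: it assumes a hypothetical "friendly vs. enemy" signature classifier, synthetic two-dimensional features and a simple nearest-centroid rule, and shows how injecting mislabeled "friendly" records into the training data can make real enemy signatures look friendly.

```python
# Hypothetical sketch of training-data poisoning against a toy classifier.
# All features, labels and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 500 friendly and 500 enemy "signatures".
friendly_train = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
enemy_train = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(500, 2))

# Held-out enemy signatures the system must recognize at runtime.
enemy_test = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(1000, 2))


def enemy_detection_rate(friendly_pts, enemy_pts, test_pts):
    """Train a nearest-centroid classifier and return the share of
    held-out enemy signatures it correctly labels as enemy."""
    c_friendly = friendly_pts.mean(axis=0)
    c_enemy = enemy_pts.mean(axis=0)
    d_friendly = np.linalg.norm(test_pts - c_friendly, axis=1)
    d_enemy = np.linalg.norm(test_pts - c_enemy, axis=1)
    return float((d_enemy < d_friendly).mean())


# Baseline: clean training data.
clean_rate = enemy_detection_rate(friendly_train, enemy_train, enemy_test)

# Poisoning: inject 1,000 enemy-like signatures falsely labeled "friendly",
# dragging the friendly centroid toward the enemy signature region.
poison = rng.normal(loc=[4.0, 4.0], scale=1.0, size=(1000, 2))
poisoned_friendly = np.vstack([friendly_train, poison])
poisoned_rate = enemy_detection_rate(poisoned_friendly, enemy_train, enemy_test)

print(f"enemy detection rate, clean training set:    {clean_rate:.0%}")
print(f"enemy detection rate, poisoned training set: {poisoned_rate:.0%}")
```

In this toy setup the detection rate drops sharply once the poisoned records shift the learned picture of what "friendly" looks like, which is the kind of failure the report warns about.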

In the future, automated cyber tools will take over the human-intensive process of both defending networks from attacks and probing enemy networks and software for weaknesses that can be exploited in attacks.

Another danger is that in the future hostile actors will steal or replicate military and intelligence AI systems.

The report urged the Pentagon to develop counter-AI capabilities for both offensive and defensive operations.

GPS SPOOFING AND USS McCAIN

One question the Navy is asking in the aftermath of this week's deadly collision between the destroyer USS John S. McCain and an oil tanker is whether the crash was the result of a cyber or electronic warfare attack.

Chief of Naval Operations Adm. John Richardson was asked about the possibility Monday and said that while there is no indication yet that outside interference caused the collision, investigators will examine all possibilities, including some type of cyber attack.

Navy sources close to the probe say there is no indication cyber attacks or electronic warfare caused the collision that killed 10 sailors as the ship transited the Straits of Malacca near Singapore.

But the fact that the McCain was the second Aegis-equipped Navy destroyer to be hit by a large merchant ship in two months has raised new concerns about electronic interference.

Seven sailors died on the USS Fitzgerald, another guided-missile destroyer, when it collided with a merchant ship in waters near Japan in June.

The incidents highlight the likelihood that electronic warfare will be used in a future conflict to cause ship collisions or groundings.

Both warships are equipped with several types of radar capable of detecting nearby shipping traffic miles away. Watch officers on the bridge were monitoring all approaching ships.

The fact that crews of the two ships were unable to see the approaching ships in time to maneuver away has increased concerns about electronic sabotage.

One case of possible Russian electronic warfare surfaced two months ago. The Department of Transportation's Maritime Administration warned about possible intentional GPS interference on June 22 in the Black Sea, where Russian ships and aircraft in the past have challenged U.S. Navy warships and surveillance aircraft.

According to the New Scientist, an online publication that first reported the suspected Russian GPS spoofing, the Maritime Administration notice referred to a ship sailing near the Russian port of Novorossiysk whose GPS navigation falsely indicated the vessel was located more than 20 miles inland at Gelendzhik Airport, near the Russian Black Sea resort town of the same name.

The navigation equipment was checked for malfunctions and found to be working properly. The ship's captain then contacted nearby vessels and learned that at least 20 other ships reported that their automatic identification system (AIS), which broadcasts ship locations at sea, had also falsely placed them at the inland airport.
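As a rough illustration of how such an anomaly can be spotted, the hypothetical sketch below compares successive reported position fixes against a vessel's maximum plausible speed: a reported jump of many miles in a few minutes implies an impossible speed and flags likely spoofing or a receiver fault. The coordinates, timestamps and speed threshold are illustrative assumptions, not data from the Black Sea incident.

```python
# Hypothetical plausibility check on successive GPS fixes.
import math
from datetime import datetime, timedelta

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles


def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in nautical miles."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))


def implied_speed_knots(fix_a, fix_b):
    """Speed implied by two (timestamp, lat, lon) fixes, in knots."""
    t_a, lat_a, lon_a = fix_a
    t_b, lat_b, lon_b = fix_b
    hours = (t_b - t_a).total_seconds() / 3600.0
    return haversine_nm(lat_a, lon_a, lat_b, lon_b) / hours


MAX_PLAUSIBLE_KNOTS = 40.0  # generous ceiling for a merchant ship or destroyer

t0 = datetime(2017, 6, 22, 12, 0)
fixes = [
    (t0, 44.68, 37.80),                         # at sea off Novorossiysk (made-up fix)
    (t0 + timedelta(minutes=5), 44.69, 37.82),  # normal progress
    (t0 + timedelta(minutes=10), 44.57, 38.01), # sudden jump toward an inland location
]

for prev, curr in zip(fixes, fixes[1:]):
    speed = implied_speed_knots(prev, curr)
    if speed > MAX_PLAUSIBLE_KNOTS:
        print(f"{curr[0]:%H:%M}: implied speed {speed:.0f} kn -- possible GPS spoofing or fault")
    else:
        print(f"{curr[0]:%H:%M}: implied speed {speed:.0f} kn -- plausible")
```

Real bridge systems would cross-check GPS against inertial sensors, radar and charted depth, but the same basic consistency test underlies those methods.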

Todd Humphreys, a University of Texas professor who specializes in satellite navigation, suspects the Russians in June were experimenting with an electronic warfare weapon designed to lure ships off course by feeding false signals to their navigation equipment.

As for the U.S. destroyers, Mr. Humphreys told Inside the Ring that blaming two similar warship accidents on human negligence seems difficult to accept.

"With the Fitzgerald collision fresh on their minds, surely the crew of the USS John McCain would have entered the waters around the Malacca Strait with extra vigilance," he said. "And yes, it's theoretically possible that GPS spoofing or AIS spoofing was involved in the collision. Nonetheless, I still think that crew negligence is the most likely explanation."

Military vessels use encrypted GPS signals that make spoofing more difficult.

Spoofing the AIS on the oil tanker that hit the McCain is also a possibility, but would not explain how the warship failed to detect the approaching vessel.

"One can easily send out bogus AIS messages and cause phantom ships to appear on ships' electronic chart displays across a widespread area," Mr. Humphreys said.

Mr. Humphreys said he suspects Navy investigators will find three factors behind the McCain disaster: the ship was not broadcasting its AIS location beacon; the oil tanker's collision warning system may have failed; or the Navy crew failed to detect the approaching tanker.

Contact Bill Gertz on Twitter @BillGertz.
