Artificial intelligence could encourage war, experts fear

Hollywood fantasy: The reality of AI development is a little more nuanced and a lot less advanced than movies might have us believe. Photo: Supplied

It's the theme of so many dystopian sci-fi books and movies: a super intelligent machine in charge of lethal military hardware becomes self-aware and decides to wreak havoc. But could it actually happen?

At the Association for the Advancement of Artificial Intelligence's annual conference in Texas last month, a workshop was held on the ethics of AI development and a panel discussed whether or not so-called 'lethal autonomous weapons' should be banned.

"There are many arguments, legal, ethical, political and technical for a ban," Toby Walsh, head of theOptimisation Research Groupat Australia's research body NICTA and chair of the proceedings, told Fairfax Media.

"One that particularly appeals to me is that [autonomous weapons] will lower the barrierto war. If one side can launch an attack, without fear of bodies cominghome, then it is much easier to slip into battle," Professor Walsh said.


While the advent of drones may already be lowering that barrier, those in favour of a ban hope to stop a potential arms race in "killer robots" before it begins. On the other hand, there are plenty of voices against a ban.

"Machines are not inherently dangerous", said Francesca Rossi, president of the International Joint Conference on Artificial Intelligence and a participant in the AAAI panel, who points out the huge difference between super-intelligence and sentience.

"We should build [autonomous weapons]by specifying all the relevant context for the desired goal to be achieved by the machine. Otherwise, a goal could be reached by violating some basic assumptions on how we want a machine to behave. Since machines are not sentient, their behaviour depends on how a human built them,"Professor Rossi said.

The specification of "all" relevant context could prove a troublesome task, however, since, as Professor Walsh points out, "many ethical principles that we hold as universal are not", and ethics and decision-making processes vary across different cultures.
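
Professor Rossi's point about under-specified goals can be made concrete with a toy search. In the minimal sketch below, all action names, scores and the "assumption" flag are invented for illustration: an agent that optimises only the stated goal picks an action that violates an unstated assumption, until the assumption is written into the specification.

```python
# Toy illustration of an under-specified goal. All action names, scores
# and the "assumption" flag are invented for this sketch.
actions = {
    "disable_target":   {"goal_score": 10, "violates_assumption": False},
    "destroy_building": {"goal_score": 12, "violates_assumption": True},
    "stand_down":       {"goal_score": 0,  "violates_assumption": False},
}

# Naive agent: optimises the stated goal and nothing else.
naive = max(actions, key=lambda a: actions[a]["goal_score"])

# Constrained agent: the unstated assumption is made part of the specification.
allowed = {a: v for a, v in actions.items() if not v["violates_assumption"]}
constrained = max(allowed, key=lambda a: allowed[a]["goal_score"])

print(naive)        # "destroy_building" -- goal maximised, assumption violated
print(constrained)  # "disable_target"  -- goal achieved within the specification
```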


Video: Discussion | CodeX Fellow Jerry Kaplan on the Law of Artificial Intelligence
On February 19, 2015, CodeX, the Stanford Center for Legal Informatics, hosted a discussion with Jerry Kaplan, Visiting Lecturer in Computer Science and Code...
By: stanfordlawschool

Video: (173) Free Knowledge-ISA-Specialised Systems (Artificial intelligence) - Part 1 - Version 1
Study material of DISA made easy: this is the first part of Module 4, Chapter 6, "Specialised Systems", in which "Artificial Intelligence" is explained. This is the Post Qualification...
By: Chandra Shekhar

Is AI 'our biggest existential threat'?

As man-made robots get smarter, will they eventually outpace man?

A few of the world's smartest technology leaders certainly think so. In recent days, they've taken to sounding the alarm bell about the potential dangers of Artificial Intelligence (AI).

Tesla CEO Elon Musk called AI "our biggest existential threat" while British scientist Stephen Hawking said AI could "spell the end of the human race." In January, Microsoft co-founder Bill Gates sided with Musk, adding, "[I] don't understand why some people are not concerned."


Yet on the other side of the argument are people like Microsoft co-founder Paul Allen. In 2013, he founded the Allen Institute for Artificial Intelligence in Seattle, whose mission is to advance the study of AI. The man who heads the organization thinks the fears are overblown.

"Robots are not coming to get you," said Allen Institute CEO Oren Etzioni. In an interview with CNBC, he said: "We quite simply have to separate science from science fiction."

Etzioni said Elon Musk and others may be missing the distinction between intelligence and autonomy. One implies streamlined computer functions, while the other means machines think and operate independently.

Etzioni offered two examples of artificial intelligence. In 1997, IBM's Deep Blue chess computer beat then-world champion Garry Kasparov. In 2011, IBM's Watson supercomputer beat two champions on the game show "Jeopardy!"

"These are highly targeted savants," said Etzioni. "They say Watson didn't even know it won. And Deep Blue will not play another chess game unless you push a button."

Etzioni said that the machines "have no free will, they have no autonomy. They're no more likely to do damage than your calculator is likely to do its own calculations."


'Chappie' Director Optimistic About AI

Artificial intelligence might be smarter than us but it's not as scary.

Peering into the vast unknown before us can be terrifying, and we look to technology and science to light the way. But when Elon Musk says AI is "summoning the demon," Bill Gates says people should be concerned about AI, and Stephen Hawking feels the whole thing "could spell the end of the human race," it's hard not to fear the rise of the machines.

The latest big-budget depiction of AI, Chappie, takes place just a few years in the future in a lawless Johannesburg, where robot police officers are armed with artificial intelligence and heavy firepower. Their creator, Deon Wilson (Dev Patel), has populated his home with smaller, friendlier bots that run on the same level of limited AI as their law-enforcing brethren. But Deon seeks to create true AI via Chappie, a decommissioned police robot that is kidnapped by down-on-their-luck criminals (played outstandingly by two members of South African rap-rave band Die Antwoord, Yolandi Visser and Ninja).

Co-writing and directing Chappie gave Neill Blomkamp a reverence for the spark of life itself. In an interview with PCMag, he said he no longer believes that it's "as simple as running a bunch of electrical currents through a really complex CPU and just having the results of that be consciousness and sentience."

Blomkamp's interest in AI predates his involvement in Chappie. He spent the past few years going down a rabbit hole of blog posts about AI and emerged from it wanting to do more than just read.

"I'm not classically religious in any sense but I almost would describe how I feel now in a slightly more religious sense because I don't know how else to describe it," he said.

Despite the reverence he developed for the unknowable source of life, he's very critical of what humans actually do with it in Chappie and his other films, District 9 and Elysium. The societies onscreen may be dystopias, but they are nevertheless an accurate depiction of the petty indignities and grotesque brutalities that mankind has perpetrated upon itself. When asked why he went so Mad Max with the city of his birth in Chappie, Blomkamp took a beat and then said, "That literally is just current-day Joburg."

It's humanity that you take a dim view of when you watch the flesh-and-blood characters project onto Chappie their greed, egos, and lust for power. Chappie himself adheres to the rule placed upon him by his creator: "no crimes." If artificial intelligence is programmed to follow our law and not our example, we might be all the better off for it.

There is one scene in the movie in which the character of Chappie is played false and, without giving too much away, seeks revenge. The moment is very much the violent catharsis the audience wants, but it does not seem like something that a machine, even an artificially intelligent one, would find meaningful.

"That's a very interesting thing that was really difficult to balance in the movie because the human audience member wants the revenge and the artificial intelligence may want none of the revenge," Blomkamp said. "On an artificial intelligence basis, things like revenge and violence and anger are biological. Those aren't rational things, they're a hormonal, biological response to something. A non-biological organism that isn't governed by those factors doesn't need to behave that way."


The future of A.I.?

On March 6, Neill Blomkamp's movie Chappie adds more high-tech hardware to a long list of big-screen robots, continuing our fascination with Artificial Intelligence. Hollywood takes on AI range from 1984's The Terminator, to Steven Spielberg's A.I. Artificial Intelligence, and the recent Scarlett Johansson-voiced Her, just to name a few.

FoxNews.com asked futurist and SeriousWonder.com CEO Gray Scott which cinematic visions got it right and which were way off.

"You have to talk about The Terminator if you're talking about Artificial Intelligence. I actually think that that's way off," he said. "I don't think that an artificially intelligent system that has superhuman intelligence will be violent. I do think that it will disrupt our culture. We are looking at a system where we could look out into the world and see machines that are smarter than us, and we've never really reacted well to that kind of situation before. So, I think Chappie is interesting because it's more about how we react to the systems."

Scott notes that the learning process addressed in Chappie also stands out. "In Chappie you see this sort of young robot that's learning, through maybe deep learning, how to see the world really, look out into the world, and learn step by step," he explained. "What's so interesting is that with Chappie you're getting to see how human behavior reacts to artificial intelligence, and I don't think it's always going to be positive."

The futurist explained where this learning process stands now in real life. "We do know that we can set certain algorithms for machines to do certain things - now that may be a simple task. A factory robot that moves one object from here to there," he said. "That's a very simple top-down solution. But when we start creating machines that learn for themselves, that is a whole new area that we've never been in before. We're starting to see the preliminary versions of that on the market now."
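
Scott's distinction between a "top-down solution" and a machine that learns for itself can be sketched in a few lines. The hypothetical Python snippet below contrasts a fixed, hand-written rule with a bare-bones trial-and-error learner (an epsilon-greedy value update; the actions and rewards are invented and stand in for no real system):

```python
import random

# "Top-down solution": every behaviour is specified in advance.
def factory_robot(position):
    """Move one object from 'here' to 'there' -- a fixed rule, nothing learned."""
    return "there" if position == "here" else "here"

# Learning solution: behaviour emerges from trial, error and reward.
def learn_best_action(n_trials=1000, epsilon=0.1, alpha=0.1):
    actions = ["press", "wait", "move"]
    value = {a: 0.0 for a in actions}                         # estimated reward per action
    hidden_reward = {"press": 0.2, "wait": 0.0, "move": 1.0}  # unknown to the learner

    for _ in range(n_trials):
        if random.random() < epsilon:
            a = random.choice(actions)             # explore occasionally
        else:
            a = max(value, key=value.get)          # otherwise exploit the best guess
        reward = hidden_reward[a] + random.gauss(0, 0.1)
        value[a] += alpha * (reward - value[a])    # incremental value update

    return max(value, key=value.get)

print(factory_robot("here"))   # always "there" -- the behaviour was written by hand
print(learn_best_action())     # typically "move" -- the behaviour was discovered
```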

Artificial Intelligence is certainly sparking debate at the moment, thanks to The Future of Life Institute's open letter outlining the research priorities for beneficial AI, and a recent warning from Stephen Hawking about the technology.

"I think it's good that we're having the conversation now - we don't want this to become a part of our culture without having the discussion first," said Scott. "We want to implement, and I think this is what they are sort of saying - I don't think they are saying it's going to destroy us, I think what they're saying is we need to have that conversation now. What do we put in place, what kind of algorithms can we put in place, to keep it from becoming violent if that is in fact where it goes?"

He added: "I think that kind of conversation [is worthwhile] because we do have 25-30 years as a lead-up to true Artificial Intelligence, the kind that's autonomous. If it even happens that soon. So I think it's good that we're having that conversation and it's coming from people in those arenas."

For more on the future, and how we may one day hear from AI, click the video above for our Tech Take with Gray Scott!

Fox News Entertainment Producer Ashley Dvorkin covers celebrity news, red carpets, TV, music, and movies. Dvorkin, winner of the 2011 CMA Media Achievement Award, is also host of "Fox 411 Country," "Star Traveler," "Fox 411 Big Screen," and "Fox on Reddit."

Read this article:

The future of A.I.?

Google's AI software can learn 'Space Invaders' without reading the instructions

System will eventually be used to work out complex problems
Software can do better than humans on Atari video games from the 1980s

By Associated Press and Mark Prigg For Dailymail.com

Published: 13:19 EST, 25 February 2015 | Updated: 12:30 EST, 26 February 2015


Computers have already bested human champions in 'Jeopardy!' and chess, but artificial intelligence has now gone on to master an entirely new level: 'Space Invaders.'

Google scientists have revealed AI software that can do better than humans on dozens of Atari video games from the 1980s, like video pinball, boxing, and 'Breakout.'

The firm's software was able to learn each game, working out the rules for itself - a major step forward.
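
The system behind this result is DeepMind's DQN, which couples a convolutional network reading raw screen pixels with Q-learning. The sketch below shows only the Q-learning half, on a toy stand-in environment with invented dynamics and rewards; the real system adds the pixel-reading network, replay memory, and other machinery:

```python
import numpy as np

# Q-learning: the trial-and-error rule at the heart of the Atari result,
# shown here on a toy environment rather than a game emulator.
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))           # value of each action in each state
gamma, alpha, epsilon = 0.99, 0.1, 0.1

def step(state, action):
    """Toy stand-in for the game: invented dynamics and reward."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == 0 else 0.0  # score only for reaching state 0
    return next_state, reward

state = 0
for _ in range(10000):
    if np.random.rand() < epsilon:            # explore occasionally
        action = np.random.randint(n_actions)
    else:                                     # otherwise act greedily
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Move the estimate toward (reward + discounted best future value).
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))                       # learned action for each state
```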



Video: CNET Update - 'Chappie' stirs up questions about artificial intelligence
http://cnet.co/1zZ6mix Some of the biggest names in tech have warned about the dangers of creating AI, and machines that can think are at the center of Sony's upcoming film. CNET's Bridget...
By: CNET

Video: The Attention to Detail of Cryengine's Artificial Intelligence Sets a New Precedent
The groundbreaking video game series Crysis is known for its flawless design and rewarding gameplay, not to mention its bar-raising visuals. But little known is that, behind the scenes, the...
By: kintustis

Video: Artificial Intelligence App Review - Create The Most Advanced, Sophisticated Trading Platform Ever!
Artificial Intelligence App: http://doiop.com/enigmacodef Free Bonus: http://www.toplaunchreview.com/free-bonus/ The Artificial Intelligence APP Review: Artificial Intelligence APP is a single...
By: TopLaunchReview


Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter

Artificial intelligence has gone through some dismal periods, which those in the field gloomily refer to as AI winters. This is not one of those times; in fact, AI is so hot right now that tech giants like Google, Facebook, Apple, Baidu, and Microsoft are battling for the leading minds in the field. The current excitement about AI stems, in great part, from groundbreaking advances involving what are known as convolutional neural networks. This machine learning technique promises dramatic improvements in things like computer vision, speech recognition, and natural language processing. You probably have heard of it by its more layperson-friendly name: Deep Learning.

Few people have been more closely associated with Deep Learning than Yann LeCun, 54. Working as a Bell Labs researcher during the late 1980s, LeCun developed the convolutional network technique and showed how it could be used to significantly improve handwriting recognition; many of the checks written in the United States are now processed with his approach. Between the mid-1990s and the late 2000s, when neural networks had fallen out of favor, LeCun was one of a handful of scientists who persevered with them. He became a professor at New York University in 2003, and has since spearheaded many other Deep Learning advances.
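
The operation at the core of LeCun's technique is the convolution itself: a small bank of learned filters slid across an image, each producing a map of how strongly every patch matches. A minimal sketch, assuming nothing beyond NumPy, with a hand-set filter standing in for weights a real network would learn:

```python
import numpy as np

# The core operation of a convolutional layer: slide a small filter over
# the image and record how strongly each patch matches it.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(28, 28)          # e.g. a 28x28 grayscale digit
edge_filter = np.array([[1.0, -1.0],    # a hand-set vertical-edge detector;
                        [1.0, -1.0]])   # in a convnet these weights are learned
feature_map = np.maximum(conv2d(image, edge_filter), 0)  # ReLU nonlinearity
print(feature_map.shape)                # (27, 27)
```

A real digit recognizer stacks many such layers, so early filters detect edges and later ones detect strokes and whole characters.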

More recently, Deep Learning and its related fields have grown to become one of the most active areas in computer research. That is one reason why, at the end of 2013, LeCun was appointed head of the newly created Artificial Intelligence Research Lab at Facebook, though he continues with his NYU duties.

LeCun was born in France, and retains from his native country a sense of the importance of the role of the public intellectual. He writes and speaks frequently in his technical areas, of course, but is also not afraid to opine outside his field, including about current events.

IEEE Spectrum contributor Lee Gomes spoke with LeCun at his Facebook office in New York City. The following has been edited and condensed for clarity.


IEEE Spectrum: We read about Deep Learning in the news a lot these days. What's your least favorite definition of the term that you see in these stories?

Yann LeCun: My least favorite description is, "It works just like the brain." I don't like people saying this because, while Deep Learning gets an inspiration from biology, it's very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn't deliver.

Spectrum: So if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?

LeCun: I need to think about this. [Long pause.] I think it would be "machines that learn to represent the world." That's eight words. Perhaps another way to put it would be "end-to-end machine learning." Wait, it's only five words and I need to kind of unpack this. [Pause.] It's the idea that every component, every stage in a learning machine can be trained.
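
LeCun's idea that "every stage can be trained" can be unpacked in code. The sketch below (toy data, invented sizes, not any production system) chains two stages, a feature extractor and a classifier, and backpropagates a single loss through both, so every stage learns together; that is the sense of "end-to-end":

```python
import numpy as np

# "Every stage can be trained": one loss gradient updates BOTH stages
# of a two-stage pipeline (feature extractor -> classifier).
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))                  # 64 toy samples, 8 features
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1 = rng.standard_normal((8, 16)) * 0.1           # stage 1: feature extractor
W2 = rng.standard_normal((16, 1)) * 0.1           # stage 2: classifier
lr = 0.1

for _ in range(500):
    h = np.tanh(X @ W1)                           # stage 1 forward pass
    p = 1 / (1 + np.exp(-(h @ W2)))               # stage 2 forward pass (sigmoid)
    dp = (p - y) / len(X)                         # cross-entropy gradient at the output
    dW2 = h.T @ dp                                # gradient for stage 2
    dh = dp @ W2.T * (1 - h**2)                   # backpropagate through the tanh
    dW1 = X.T @ dh                                # gradient for stage 1
    W1 -= lr * dW1                                # both stages learn together
    W2 -= lr * dW2

print(f"accuracy: {((p > 0.5) == y).mean():.2f}")
```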
