Military of the Beast : DARPA to build drones that have artificial intelligence (Jan 01, 2015) – Video


Military of the Beast : DARPA to build drones that have artificial intelligence (Jan 01, 2015)
SOURCE: http://www.cbsnews.com News Articles: DARPA aims to create small drones to zoom into enemy buildings http://www.foxnews.com/tech/2014/12/31/drones-to...

By: SignsofThyComing

Visit link:

Military of the Beast : DARPA to build drones that have artificial intelligence (Jan 01, 2015) - Video

Simple Pictures That State-of-the-Art AI Still Can't Recognize

Look at these black and yellow bars and tell me what you see. Not much, right? Ask state-of-the-art artificial intelligence the same question, however, and it will tell you they're a school bus. It will be over 99 percent certain of this assessment. And it will be totally wrong.

Computers are getting truly, freakishly good at identifying what they're looking at. They can't look at this picture and tell you it's a chihuahua wearing a sombrero, but they can say that it's a dog wearing a hat with a wide brim. A new paper, however, directs our attention to one place these super-smart algorithms are totally stupid. It details how researchers were able to fool cutting-edge deep neural networks using simple, randomly generated imagery. Over and over, the algorithms looked at abstract jumbles of shapes and thought they were seeing parrots, ping pong paddles, bagels, and butterflies.

The findings force us to acknowledge a somewhat obvious but hugely important fact: Computer vision and human vision are nothing alike. And yet, since computer vision increasingly relies on neural networks that teach themselves to see, we're not sure precisely how it differs from our own. As Jeff Clune, one of the researchers who conducted the study, puts it, when it comes to AI, "we can get the results without knowing how we're getting those results."

One way to find out how these self-trained algorithms get their smarts is to find places where they are dumb. In this case, Clune, along with PhD students Anh Nguyen and Jason Yosinski, set out to see if leading image-recognizing neural networks were susceptible to false positives. We know that a computer brain can recognize a koala bear. But could you get it to call something else a koala bear?

To find out, the group generated random imagery using evolutionary algorithms. Essentially, they bred highly effective visual bait. A program would produce an image, and then mutate it slightly. Both the copy and the original were shown to an off-the-shelf neural network trained on ImageNet, a data set of 1.3 million images that has become a go-to resource for training computer vision AI. If the copy was recognized as something, anything, in the algorithm's repertoire with more certainty than the original, the researchers would keep it and repeat the process. Otherwise, they'd go back a step and try again. "Instead of survival of the fittest, it's survival of the prettiest," says Clune. Or, more accurately, survival of the most recognizable to a computer as an African Gray Parrot.
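The selection loop described above can be sketched as a simple hill climb. This is an illustrative toy, not the paper's code: the `confidence` function below is a stand-in for the trained ImageNet network the researchers actually queried, and the made-up class name "school_bus" just gives the toy something to optimize.

```python
import random

def confidence(image, target_class):
    # Stand-in for a trained classifier's softmax score.
    # This toy scorer rewards bright pixels for the made-up
    # class "school_bus"; the real study queried a deep network.
    if target_class == "school_bus":
        return sum(image) / len(image)
    return 0.0

def mutate(image, rate=0.1):
    # Randomly perturb a fraction of pixels, clamped to [0, 1].
    return [min(1.0, max(0.0, p + random.uniform(-0.2, 0.2)))
            if random.random() < rate else p
            for p in image]

def evolve_fooling_image(target_class, size=64, generations=500):
    # Hill-climbing variant of the evolutionary search: keep a
    # mutation only if the classifier grows *more* confident.
    image = [random.random() for _ in range(size)]
    score = confidence(image, target_class)
    for _ in range(generations):
        candidate = mutate(image)
        candidate_score = confidence(candidate, target_class)
        if candidate_score > score:
            image, score = candidate, candidate_score
    return image, score

random.seed(0)
img, score = evolve_fooling_image("school_bus")
```

The same loop, pointed at a real network instead of the toy scorer, is what produces images that mean nothing to a human but score over 99 percent for some class.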

Eventually, this technique produced dozens of images that were recognized by the neural network with over 99 percent confidence. To you, they won't seem like much. A series of wavy blue and orange lines. A mandala of ovals. Those alternating stripes of yellow and black. But to the AI, they were obvious matches: Starfish. Remote control. School bus.

In some cases, you can start to understand how the AI was fooled. Squint your eyes, and a school bus can look like alternating bands of yellow and black. Similarly, you could see how the randomly generated image that triggered "monarch" would resemble butterfly wings, or how the one that was recognized as "ski mask" does look like an exaggerated human face.

But it gets more complicated. The researchers also found that the AI could routinely be fooled by images of pure static. Using a slightly different evolutionary technique, they generated another set of images. These all look exactly alike, which is to say, like nothing at all, save maybe a broken TV set. And yet, state-of-the-art neural networks pegged them, with upward of 99 percent certainty, as centipedes, cheetahs, and peacocks.

The fact that we're cooking up elaborate schemes to trick these algorithms points to a broader truth about artificial intelligence today: Even when it works, we don't always know how it works. "These models have become very big and very complicated and they're learning on their own," says Clune, who heads the Evolving Artificial Intelligence Laboratory at the University of Wyoming. "There's millions of neurons and they're all doing their own thing. And we don't have a lot of understanding about how they're accomplishing these amazing feats."

Studies like these are attempts to reverse engineer those models. They aim to find the contours of the artificial mind. "Within the last year or two, we've started to really shine increasing amounts of light into this black box," Clune explains. "It's still very opaque in there, but we're starting to get a glimpse of it."

Read the original:

Simple Pictures That State-of-the-Art AI Still Can't Recognize

IT Digest: Artificial intelligence studied

January 4 at 3:07 PM

Machine learning

Scholars to study artificial intelligence

Stanford University is anchoring a study to examine the long-term effects of artificial intelligence.

Led and funded by Eric Horvitz, managing director of Microsoft Research and a Stanford University alumnus, the 100-year study will be overseen by a committee with rotating members who will track progress at five-year intervals.

They plan to focus on how artificial intelligence affects national security, psychology, ethics, law, privacy and democracy, among other topics.

So far, professors from Stanford, Harvard University, Carnegie Mellon University, the University of California at Berkeley and the University of British Columbia are joining.

"[W]e feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children's children," Stanford President John Hennessy said in a statement.

Mohana Ravindranath


See the original post:

IT Digest: Artificial intelligence studied

Don't worry, artificial intelligence is not a job stealer, it's a job enabler

'We are only just dipping our toes into the vast ocean of AI and the benefits it can offer'

Terminators roaming vast wastelands, crazed computers trapping astronauts, central intelligences instigating World War III. These are just some of the scenarios that Hollywood has dreamt up, which are giving artificial intelligence (AI) a bad name.

We've even seen Stephen Hawking express his doubts about AI, saying: "Creating AI will be the biggest event in human history; it might also be the last." It seems that a notion has developed that robots and the AI behind them are out to get us. Or, at the very least, our jobs.

So how accurate is the glitz and glam of Hollywood, or indeed the doom and gloom of Professor Hawking? It's easy to get caught up in the apocalyptic view Hollywood presents, but the reality, as you would expect, is very different: AI actually offers many benefits.

See also: How artificial intelligence will make humans smarter

Plus, the most optimistic predictions suggest a genuinely freethinking artificial intelligence is decades if not centuries away, and a consensus on whether workers will eventually be replaced by synthetic replicants is even more distant.

The truth of the matter is that we are only just dipping our toes into the vast ocean of AI and the benefits it can offer. Most of us will be familiar with Apple's Siri or Microsoft's Cortana, personal assistants that live on our smartphones and scrape the internet for information before making recommendations based on available data. At the moment, these digital assistants are just that: assistants, helpful when asked to be helpful, but little more, and certainly not freethinking.

Working purely in a reactive way, this technology is designed to augment our own knowledge. Whether it's double-checking how you should rewire a plug, choosing the best route to take, or getting a little extra team support in a pub quiz, these applications are slowly becoming more and more useful. The early generations of these technologies have proven to be hugely popular, and as a consequence we are seeing something of a progression from reactive service.

Rather than simply existing as audio versions of search engines, algorithms are being developed that can learn trends and styles. For example, if you are using the assistant to plot routes that you need to take, the technology can adapt to live conditions and make suggestions in real time. This technology really is in its infancy, but it is something that is increasingly coming up in senior meetings at big enterprise companies.
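As a sketch of that shift from reactive lookup to learned suggestion, here is a toy assistant that remembers which routes you take and reorders its recommendations when live delay data arrives. All names and the ranking rule are invented for illustration; a real assistant would draw on far richer models and traffic feeds.

```python
from collections import Counter

class RouteAssistant:
    """Toy assistant that learns travel habits and adjusts
    suggestions with live data (all names hypothetical)."""

    def __init__(self):
        self.history = Counter()  # route -> times taken

    def record_trip(self, route):
        self.history[route] += 1

    def suggest(self, live_delays=None):
        # Rank learned routes by familiarity, then penalise any
        # route with a reported real-time delay (in minutes).
        live_delays = live_delays or {}
        return sorted(self.history,
                      key=lambda r: (live_delays.get(r, 0),
                                     -self.history[r]))

assistant = RouteAssistant()
for trip in ["A40", "A40", "M25", "A40", "M25", "ring_road"]:
    assistant.record_trip(trip)

habitual = assistant.suggest()                        # habit alone
rerouted = assistant.suggest(live_delays={"A40": 25})  # live data wins
```

With no live data the assistant leads with the most familiar route; once a delay is reported, the same learned history is re-ranked around it.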

As such, the incorporation of AI into our working lives is something CIOs and IT managers are trying to wrap their arms around at the moment. Many are already contemplating how it will form a part of their IT strategies in the years to come.

See the original post:

Don't worry, artificial intelligence is not a job stealer, it's a job enabler

Get smart: how smart machines are bringing us closer together

'I disagree with Stephen Hawking and Elon Musk and here's why'

You've all seen the films and heard the prognostications of people like Stephen Hawking and Elon Musk. They portray artificial intelligence (AI) as something humans should fear. They argue that this kind of technology is driving us apart and, worse yet, that it will lead to some sort of conflict, even an existential threat to our society.

However, as someone who works with AI every day and who sees how it is helping businesses and consumers across the service sector, I am going to have to disagree with Professor Hawking and Mr. Musk, and here's why.

Did you know that the airplane flying you home for the holidays uses an AI system that has made commercial aviation safer than ever before?

Not flying home? Driving? Well, you can thank sophisticated robots, trained by expert car builders, for your carefree drive home.

See also: How artificial intelligence will make humans smarter

What about those financial earnings reports, sports recaps or stock profiles you peer over every morning? Have you ever wondered how news publications get stories out so quickly? Well, some of those reports you are reading are actually written by robots.

So, AI systems are more intertwined with our everyday life than many may believe. These systems make our travel safer and keep us more informed about the world around us. But how is this technology bringing us closer together?

Far from excluding humans, AI systems augment our reasoning capacities and empower us to make more informed real-time decisions.

Continued here:

Get smart: how smart machines are bringing us closer together

Dr. Harsha Subasinghe, CEO of CODGEN, inspirational speech at Sahasak Award Ceremony 2014 – Video


Dr. Harsha Subasinghe, CEO of CODGEN, inspirational speech at Sahasak Award Ceremony 2014
Dr. Harsha Subasinghe obtained his BEng (Hons) in Electronic and Computing from Middlesex University in the UK. He then completed his master's degree in Information Technology and his PhD in...

By: Sri Lanka Inventors Commission

Read this article:

Dr. Harsha Subasinghe, CEO of CODGEN, inspirational speech at Sahasak Award Ceremony 2014 - Video

Imam Zaid Shakir on how to deal with Islamophobia and anti-Islamic sentiment – Video


Imam Zaid Shakir on how to deal with Islamophobia and anti-Islamic sentiment
"Artificial Intelligence" is meant to refer to a useless pursuit advertised today as opposed to "real intelligence" which is lacking in real people.

By: poorman r

See the rest here:

Imam Zaid Shakir on how to deal with Islamophobia and anti-Islamic sentiment - Video

Artificial Intelligence | UC BerkeleyX on edX | Course About Video – Video


Artificial Intelligence | UC BerkeleyX on edX | Course About Video
Enroll in Artificial Intelligence from UC BerkeleyX at https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-188-1x. Artificial Intelligence is UC Berkeley's upper-division course CS188:...

By: edX

Read more here:

Artificial Intelligence | UC BerkeleyX on edX | Course About Video - Video

The biggest threat to humanity? The INTERNET

- Centre for the Study of Existential Risk (CSER) project has been set up to monitor artificial intelligence and technological advances
- The web was cited as a catalyst in the Egyptian coup in 2011, for example
- Global cyber attacks have the potential to bring down governments
- They threaten businesses, which in turn could damage global economies
- Elsewhere, criminals and terrorists operate on the so-called Deep Web
- This could lead to global wars, which could culminate in World War III
- Artificial intelligence is fuelled by advancements in web-enabled devices
- Professor Stephen Hawking and Elon Musk have previously voiced concerns that AI could threaten humanity

By Victoria Woollaston for MailOnline

Published: 04:49 EST, 31 December 2014 | Updated: 14:31 EST, 31 December 2014


The web has democratised information and learning, brought families and loved ones together as well as helped businesses connect and compete in a global economy.

But the internet has a dark side - it hosts underhand dealings, has its very own criminal underbelly, not to mention a rising mob culture.

The threat such technological advances pose to society is so serious, there is now a team of Cambridge researchers studying the existential risks.

The Centre for the Study of Existential Risk (CSER) project has been set up in Cambridge to monitor artificial intelligence and technological advances. The web was cited as a catalyst in the Egyptian coup in 2011, for example, while global cyber attacks have the potential to bring down governments

Continue reading here:

The biggest threat to humanity? The INTERNET

What is Artificial Intelligence (AI)? Webopedia


By Vangie Beal

Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy, then at Dartmouth College, where he organized the field's founding workshop. Artificial intelligence includes areas of specialization such as games playing, expert systems, natural-language processing, neural networks, and robotics:

Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.

In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.

Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators. There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.

In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.

Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.

There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.


Read more here:

What is Artificial Intelligence (AI)? Webopedia

What next for the future tech of 2014?

The year gone by brought us more robots, worries about artificial intelligence, and difficult lessons on space travel. The big question: where's it all taking us?

NASA has a vision of sending astronauts to Mars aboard a rocket like this. In 2014, its Orion spacecraft took a small test-flight in that direction. NASA/MSFC

Every year, we capture a little bit more of the future -- and yet the future insists on staying ever out of reach.

Consider space travel. Humans have been traveling beyond the atmosphere for more than 50 years now -- but aside from a few overnights on the moon four decades ago, we have yet to venture beyond low Earth orbit.

Or robots. They help build our cars and clean our kitchen floors, but no one would mistake a Kuka or a Roomba for the replicants in "Blade Runner." Siri, Cortana and Alexa, meanwhile, are bringing some personality to the gadgets in our pockets and our houses. Still, that's a long way from HAL or that lad David from the movie "A.I. Artificial Intelligence."

Self-driving cars? Still in low gear, and carrying some bureaucratic baggage that prevents them from ditching certain technology of yesteryear, like steering wheels.

And even when these sci-fi things arrive, will we embrace them? A Pew study earlier this year found that Americans are decidedly undecided. Among the poll respondents, 48 percent said they would like to take a ride in a driverless car, but 50 percent would not. And only 3 percent said they would like to own one.

"Despite their general optimism about the long-term impact of technological change," Aaron Smith of the Pew Research Center wrote in the report, "Americans express significant reservations about some of these potentially short-term developments" such as US airspace being opened to personal drones, robot caregivers for the elderly or wearable or implantable computing devices that would feed them information.

Let's take a look at how much of the future we grasped in 2014 and what we could gain in 2015.

In 2014, earthlings scored an unprecedented achievement in space exploration when the European Space Agency landed a spacecraft on a speeding comet, with the potential to learn more about the origins of life. No, Bruce Willis wasn't aboard. Nobody was. But when the 220-pound Philae lander, carried to its destination by the Rosetta orbiter, touched down on comet 67P/Churyumov-Gerasimenko on November 12, some 300 million miles from Earth, the celebration was well-earned.

Read the original post:

What next for the future tech of 2014?


Breakthroughs in Artificial Intelligence from 2014 | MIT …

The holy grail of artificial intelligence (creating software that comes close to mimicking human intelligence) remains far off. But 2014 saw major strides in machine learning software that can gain abilities from experience. Companies in sectors from biotech to computing turned to these new techniques to solve tough problems or develop new products.

The most striking research results in AI came from the field of deep learning, which involves using crude simulated neurons to process data.
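A "crude simulated neuron" is just a weighted sum passed through a squashing function, and deep learning stacks layers of them. The following is a minimal pure-Python sketch with random, untrained weights (real systems learn their weights from data, which this toy does not do):

```python
import math
import random

def neuron(inputs, weights, bias):
    # One crude simulated neuron: weighted sum squashed by a sigmoid.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_matrix, biases):
    # A layer is just many neurons reading the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

random.seed(1)
x = [0.5, -1.2, 3.0]  # e.g. three pixel intensities

# Two stacked layers (the "depth" in deep learning), random weights.
w1 = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer(x, w1, [0.0] * 4)
output = layer(hidden, w2, [0.0, 0.0])
```

Stacking many such layers, and training the weights on millions of images, is what lets the systems described below match faces and caption scenes.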

Work in deep learning often focuses on images, which are easy for humans to understand but very difficult for software to decipher. Researchers at Facebook used that approach to make a system that can tell almost as well as a human whether two different photos depict the same person. Google showed off a system that can describe scenes using short sentences.

Results like these have led leading computing companies to compete fiercely for AI researchers. Google paid more than $600 million for a machine learning startup called DeepMind at the start of the year. When MIT Technology Review caught up with the company's founder, Demis Hassabis, later in the year, he explained how DeepMind's work was shaped by groundbreaking research into the human brain.

The search company Baidu, nicknamed "China's Google," also spent big on artificial intelligence. It set up a lab in Silicon Valley to expand its existing research into deep learning, and to compete with Google and others for talent. Stanford AI researcher and onetime Google collaborator Andrew Ng was hired to lead that effort. In our feature-length profile, he explained how artificial intelligence could turn people who have never been on the Web into users of Baidu's Web search and other services.

Machine learning was also a source of new products this year from computing giants, small startups, and companies outside the computer industry.

Microsoft drew on its research into speech recognition and language comprehension to create its virtual assistant Cortana, which is built into the mobile version of Windows. The app tries to enter a back-and-forth dialogue with people. That's intended both to make it more endearing and to help it learn what went wrong when it makes a mistake.

Startups launched products that used machine learning for tasks as varied as helping you get pregnant, letting you control home appliances with your voice, and making plans via text message.

Some of the most interesting applications of artificial intelligence came in health care. IBM is now close to seeing a version of its Jeopardy!-winning Watson software help cancer doctors use genomic data to choose personalized treatment plans for patients. Applying machine learning to a genetic database enabled one biotech company to invent a noninvasive test that prevents unnecessary surgery.

Using artificial intelligence techniques on genetic data is likely to get a lot more common now that Google, Amazon, and other large computing companies are getting into the business of storing digitized genomes.

Here is the original post:

Breakthroughs in Artificial Intelligence from 2014 | MIT ...
