Even computers are fooled by optical illusions!

Computer scientists developed images that are unrecognisable to humans. To machines they appear as different objects, such as a robin or a cheetah. This is because machines focus on the pixels in an image that differ, whereas humans identify objects by drawing relationships between features. The images confuse the machine by exploiting the gaps in its knowledge. This could potentially be used to hack into computers in the future.

By Ellie Zolfagharifard for MailOnline

Published: 10:58 EST, 18 December 2014 | Updated: 11:46 EST, 18 December 2014

Elon Musk recently likened artificial intelligence to 'summoning the demon'.

The SpaceX founder, along with other scientists such as Stephen Hawking, is concerned by the rapid pace of progress in machine intelligence.

But computers may not be as clever as we believe. In fact, a study in the US suggests that artificial intelligence could be fooled by simple optical illusions.

Computer scientists from the University of Wyoming and Cornell University in New York were able to hack the way a computer views objects using unique images that appear as static to humans. In these experiments, the machine was almost completely convinced it had labelled these images correctly.

The findings have wide implications, because they mean hackers could someday exploit machines that rely on their ability to recognise their surroundings.

Computer scientists from the University of Wyoming and Cornell University developed a range of images that are unrecognisable to humans, but meaningful to computers.

'It is easy to produce images that are completely unrecognisable to humans, but that state-of-the-art [deep neural networks] believe to be recognisable objects,' the team wrote in a paper posted to arXiv.
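
For readers who want a feel for how such images are made, the fragment below is a minimal sketch of one technique in the spirit of the paper: starting from random noise and running gradient ascent on a network's confidence in a chosen class. The researchers also used evolutionary algorithms; this gradient variant is simply the easiest to show. The pretrained model, the class index and the step count are assumptions for the demo, not details taken from the study.

```python
# A minimal sketch (not the authors' released code) of one way to make a
# "fooling" image: start from random static and nudge the pixels until a
# pretrained network is confident the noise is a chosen class.
# Model choice, class index and step count are assumptions for the demo.
import torch
import torchvision.models as models

model = models.alexnet(weights="IMAGENET1K_V1").eval()

TARGET = 15                                              # ImageNet index for "robin"
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # static-like noise
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logits = model(img)
    loss = -logits[0, TARGET]                 # maximise the target logit
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(img), dim=1)[0, TARGET].item()
print(f"network confidence that the static is a robin: {confidence:.1%}")
```

To a person the optimised image still looks like noise, yet the network's confidence in the chosen label can climb close to certainty, which is exactly the gap the researchers describe.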

Randy Rayess Shares His Views on Artificial Intelligence, Robotics and Deep Learning – Video


Randy Rayess Shares His Views on Artificial Intelligence, Robotics and Deep Learning
Will artificial intelligence define the future of technology? Randy Rayess explains more about AI, robotics and Deep Learning during a session at the Wharton...

By: VenturePact

Continued here:

Randy Rayess Shares His Views on Artificial Intelligence, Robotics and Deep Learning - Video

What Artificial Intelligence Is Not | TechCrunch

Editor's note: Rob Smith is CEO of Pecabu.

Artificial Intelligence has been in the media a lot lately. So much so that it's only a matter of time before it graduates to meaningless buzzword status like 'big data' and 'cloud'. Usually I would be a big supporter. Being in the AI space, any attention to our often overlooked industry is welcome. But there seems to be more misinformation out there than solid facts.

The general public seems to view AI as the mythical purple unicorn of technology: elusive, powerful, mysterious, dangerous and most likely made up. And while there is plenty of debate in the scientific community, I can at least tell you what AI is definitely not.

First of all, AI is nothing to be frightened of. It's not a sentient being like SkyNet or an evil red light bulb like HAL. Fundamentally, AI is nothing more than a computer program smart enough to accomplish tasks that typically require human-quality analysis. That's it; not a mechanized, omnipresent war machine.

Secondly, AIs are not alive. While AIs are capable of performing tasks otherwise performed by human beings, they are not alive like we are. They have no genuine creativity, emotions or desires other than what we program into them or they detect from the environment. Unlike in science fiction (emphasis on the fiction), AIs would have no desire to mate, replicate or have a small AI family.

Next, AIs are generally not very ambitious. It's true that in a very limited context, an AI can think similarly to us and set tasks for itself. But its general purpose and reason for existence is ultimately defined by us at inception. Like any program or technology, we define what its role in our society will be. Rest assured, they will have no intention of enslaving humanity and ruling us as our AI overlords.

Additionally, AI is not a single entity. Computer programs, even artificially intelligent ones, work far better as specialists than as generalists. A more likely scenario for achieving artificial intelligence within our lifetime is through a network of sub-programs handling vision (computer vision), language (NLP), adaptation (machine learning), movement (robotics) and so on. AI is not a he or a she or even an it; AI is more like a they.
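
As a toy illustration of that 'network of sub-programs' picture, the sketch below routes each incoming task to a narrow specialist module. The module names and the routing table are hypothetical, chosen only to mirror the division of labour described above.

```python
# A toy sketch of the "network of sub-programs" idea: a thin dispatcher
# hands each task to a narrow specialist instead of one general brain.
# Module names and the routing table are hypothetical, for illustration.
from typing import Callable, Dict

def vision(task: str) -> str:
    return f"[computer vision] labelling the objects in {task}"

def language(task: str) -> str:
    return f"[NLP] parsing the text of {task}"

def adaptation(task: str) -> str:
    return f"[machine learning] fitting a model to {task}"

def movement(task: str) -> str:
    return f"[robotics] planning motion for {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "image": vision,
    "text": language,
    "data": adaptation,
    "motor": movement,
}

def dispatch(kind: str, task: str) -> str:
    # "AI is more like a they": every request goes to exactly one specialist.
    try:
        return SPECIALISTS[kind](task)
    except KeyError:
        raise ValueError(f"no specialist registered for task kind {kind!r}")

print(dispatch("image", "a street photo"))
print(dispatch("text", "a customer email"))
```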

Finally, AI, like all computer programs, is ultimately controlled by humans. Of course, AI can be designed with malicious intent and weaponized like nuclear or biological technology, but that's a fault not of the science but of ourselves.

While Elon Musk is a personal hero of mine, and a genius on so many levels, his recent comments on artificial intelligence have been a little less than brilliant. He mentions that AI is more dangerous than nuclear weapons and that we may summon an AI 'demon' (his words, not mine). My only explanation is that he must have fallen asleep watching Terminator.

In the meantime, companies such as IBM, Google and Apple are developing the next generation of AI-powered applications, using small bits of specialized AI code to replace the human element in many tiring, dangerous or time-consuming jobs. These are very specific, almost tunnel-vision-like programs that only improve our society and should instill fear in no one.

Read the original here:

What Artificial Intelligence Is Not | TechCrunch

Howard Gold's No-Nonsense Investing: Are the machines really taking over?

For decades, futurists have worried about computers attaining human intelligence. Dystopian films from Stanley Kubrick's 2001: A Space Odyssey to the Terminator and Matrix movies showed smart machines wreaking havoc on humans.

Now serious thinkers have sounded the alarm about artificial intelligence, while robotics and automation already have caused profound social and economic dislocation.

Two weeks ago, famed physicist Stephen Hawking told the BBC, "The development of full artificial intelligence could spell the end of the human race."

"It would take off on its own, and re-design itself at an ever-increasing rate," he warned. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

And the brilliant entrepreneur Elon Musk, co-founder of PayPal and CEO of Tesla Motors and SpaceX, called AI "our biggest existential threat."

"With artificial intelligence we are summoning the demon," he said.

Meanwhile, two researchers from the University of Oxford have estimated that computerization will put nearly half the jobs in the United States in jeopardy, including some creative professions that were thought to be immune.

"Occupations that require subtle judgment are also increasingly susceptible to computerization," wrote Carl Benedikt Frey and Michael A. Osborne. "To many such tasks, the unbiased decision making of an algorithm represents a comparative advantage over human operators."

Even for investing commentary? Just kidding... I hope.

So, are the machines really taking over? I interviewed two leading researchers in AI and came away a little reassured, but not much. AI is progressing, but some technical barriers may delay immediate quantum leaps in machine intelligence.

Continue reading here:

Howard Gold's No-Nonsense Investing: Are the machines really taking over?

Stanford to host 100-year study on artificial intelligence

By Chris Cesare

Russ Altman, a professor of bioengineering and of computer science at Stanford, will serve as faculty director of the One Hundred Year Study on Artificial Intelligence.

Stanford University has invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz, who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity, Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI's long-term implications.

Now, together with Russ Altman, a professor of bioengineering and of computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

"Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life," said Stanford President John Hennessy, who helped initiate the project. "Given Stanford's pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children's children."

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort.

Altman will serve as faculty director and both he and Horvitz will be ex officio members of the committee. Together, the seven researchers will form the first AI100 standing committee. It and subsequent committees will identify the most compelling topics in AI at any given time, and convene a panel of experts to study and report on these issues.

Horvitz envisions this process repeating itself every several years, as new topics are chosen and the horizon of AI technology is scouted.

Link:

Stanford to host 100-year study on artificial intelligence

100-year study to examine effects of artificial intelligence

Scientists have begun what they say will be a century-long study of the effects of artificial intelligence on society, including on the economy, war and crime, officials at Stanford University announced this week.

The project, hosted by the university, is unusual not just because of its duration but because it seeks to track the effects of these technologies as they reshape the roles played by human beings in a broad range of endeavours.

"My take is that A.I. is taking over," said Sebastian Thrun, a well-known roboticist who led the development of Google's self-driving car. "A few humans might still be 'in charge', but less and less so."

Artificial intelligence describes computer systems that perform tasks traditionally requiring human intelligence and perception. In 2009, the president of the Association for the Advancement of Artificial Intelligence, Eric Horvitz, organised a meeting of computer scientists in California to discuss the possible ramifications of A.I. advances. The group concluded that the advances were largely positive and lauded the "relatively graceful" progress.

But now, in the wake of recent technological advances in computer vision, speech recognition and robotics, scientists say they are increasingly concerned that artificial intelligence technologies may permanently displace human workers, roboticise warfare and make Orwellian surveillance techniques easier to develop, among other disastrous effects.

Dr. Horvitz, now the managing director of the Redmond, Washington, campus of Microsoft Research, last year approached John Hennessy, a computer scientist and president of Stanford University, about the idea of a long-term study that would chart the progress of artificial intelligence and its effect on society. Dr. Horvitz and his wife, Mary Horvitz, agreed to fund the initiative, called the One Hundred Year Study on Artificial Intelligence.

In an interview, Dr. Horvitz said he was unconvinced by recent warnings that superintelligent machines were poised to outstrip human control and abilities. Instead, he believes these technologies will have positive and negative effects on society.

"Loss of control of A.I. systems has become a big concern," he said. "It scares people." Rather than simply dismiss these dystopian claims, he said, scientists instead must monitor and continually evaluate the technologies.

"Even if the anxieties are unwarranted, they need to be addressed," Dr. Horvitz said.

The rest is here:

100-year study to examine effects of artificial intelligence

Artificial intelligence still has more to offer

At a recent conference hosted by Silicon Valley Forum, industry experts shared their insights on where artificial intelligence, machine learning and deep learning are headed. Many of those insights point to the significant contribution AI will make once its true potential is unleashed.

"This notion that evolution ends with humans is silly," stated keynoter Steve Jurvetson, partner and managing director of DFJ. "I think what humans really mean is we don't want to compete with something smarter than us in our lifetime. I think you can shift our selfish sense of supremacy to a symbolic trajectory of progress."

Some speakers said progress in AI would give systems the ability to talk to each other quickly and simply, while others believe the ability to reason and make inferences will be the true differentiator in intelligence. Regardless, Citrix Startup Accelerator's CTO Michael Harries said, any entrepreneurs that aren't familiarising themselves with AI have "rocks in their heads."

According to Modar Alaoui, AI's immediate future lies in ambient intelligence in smartphones and smart cars. Alaoui is the founder and CEO of Eyeris, which develops artificial intelligence for facial recognition. Several speakers said they would like to see artificially intelligent robots or computers that learn without being told, then "self-tune" after solving a problem.

Flexibility is key to AI's future

"Robots have an ability to adapt to their environment; they have the ability to learn. But the ability to go on and extend that model is really intelligence. I think we will see that, but that's the jump we haven't made," Kevin Albert, CEO and co-founder of robotics startup Pneubotics, noted.

Jeff Hawkins, CEO and co-founder of Numenta, a firm that has developed a computational framework for AI, said: "Intelligence shouldn't be measured by any particular task. What characterises intelligence is extreme flexibility: building a flexible learning system. [Some AI is] focused on being human-like; our work here is not being human-like at all. It's about understanding the general principles of intelligence that we can apply to all kinds of problems."

While there is a variety of ways to attack the development and fine-tuning of artificial intelligence, including training machines "like children," according to panellists, Hawkins believes reverse engineering the neocortex is the fastest way to intelligent machines. Neuroscience has shown that language and touch work on the same principles, and Hawkins expects a machine's abilities to unfold in a similar way once scientists are able to tap that inherent potential.
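
To make that concrete, the toy below, which is illustrative only and not Numenta's code, shows the sparse-distributed-representation idea behind Hawkins' claim that language and touch work on the same principles: any input becomes a mostly-zero binary vector, and similarity for every modality is just the count of shared active bits. The vector size and sparsity are assumed values in the range HTM-style models typically use.

```python
# An illustrative toy of sparse distributed representations (SDRs), the
# data structure at the heart of Hawkins' neocortex theory. Sizes and
# sparsity below are assumptions for the demo, not Numenta's parameters.
import numpy as np

rng = np.random.default_rng(0)
N, ACTIVE = 2048, 40          # ~2% of bits active, typical of HTM-style models

def random_sdr() -> np.ndarray:
    sdr = np.zeros(N, dtype=bool)
    sdr[rng.choice(N, ACTIVE, replace=False)] = True
    return sdr

def overlap(a: np.ndarray, b: np.ndarray) -> int:
    # Shared active bits: the same similarity measure works for any modality.
    return int(np.sum(a & b))

touch = random_sdr()
word = random_sdr()
noisy_touch = touch.copy()
flipped = rng.choice(np.flatnonzero(touch), 5, replace=False)
noisy_touch[flipped] = False  # degrade 5 of the 40 active bits

print("touch vs unrelated word:", overlap(touch, word))        # near zero
print("touch vs noisy touch:  ", overlap(touch, noisy_touch))  # 35 of 40
```

Because unrelated random SDRs almost never share many active bits, even a degraded input remains far more similar to its original than to anything else, which is the flexibility-under-noise property Hawkins is after.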

"Once we understand those principles of the neocortex, we can modify them, we don't need to be true to evolutionary biology," Hawkins said. "We still have so much to learn about the basics of how biology works. Progress is incremental but also exponential. We're going to finish this off in less than five years, I believe."

If the thought of enlightened machines in the next five years is too much, Hawkins assured attendees that artificial intelligence isn't inherently dangerous. The ability to self-replicate is dangerous, however.

More here:

Artificial intelligence still has more to offer

Community Discussion: How will artificial intelligence change our lives?

Earlier this year, Google acquired an Artificial Intelligence startup called DeepMind for $628 million after it demonstrated that its software could learn to play old-school Atari games better than any human. In an interview with MIT Technology Review published last week, DeepMind cofounder Demis Hassabis revealed that Google is setting up an internal ethics board to consider the possible downsides of advanced artificial intelligence. This seems like a good idea, as intelligent software may soon be better than humans at many more things than Donkey Kong. Another DeepMind cofounder, Shane Legg, believes there's a 90% chance that a human-level AI will arrive by 2050, and also that it may try to kill us: "It's my number 1 risk for this century."

Even if the robots don't kill us, they will almost certainly take our jobs, and not just the unskilled and repetitive jobs. In the same MIT interview, Hassabis gushed over the possibility of AI scientists that can generate and test new hypotheses about disease in the lab. IBM's Watson is already advising doctors and doing legal research. Journalists are under threat from Quill, an automated narrative generation platform that can analyse data and turn it into articles. Employees at a Lowe's in San Jose recently got a robotic coworker, and security guards at Microsoft's Silicon Valley campus have been joined by a droid that looks like a Dalek designed by George Lucas. A study last year by Oxford University's Martin School concluded that 47% of all US jobs may be lost to automation within the next two decades.

But is this a cause for gloom? A recent survey of 1,896 experts and academics found that a slight majority (52%) believes that technology will create more jobs than it destroys by 2025. Respondents also hope that the coming changes will be an opportunity to reassess our society's relationship to employment itself, by returning to a focus on small-scale or artisanal modes of production, or by giving people more time to spend on leisure, self-improvement, or time with loved ones. There may also be significant environmental benefits. Artificial Intelligence is helping to improve energy efficiency, which, in the words of the International Energy Agency's Energy Efficiency Market Report 2014, represents "the most important plank in efforts to decarbonise the global energy system and achieve the world's climate objectives." Autonomous vehicles may replace the human-driven variety some time between 2030 and 2050, reducing emissions by improving travel efficiency.

How do you think we should prepare for the coming robot age? Are you concerned (or hopeful) that automation will change your work and lifestyle? Has it already? What are your hopes and concerns? Let us know in the comments!

Follow this link:

Community Discussion: How will artificial intelligence change our lives?

The beauty of Ethical Robots | Nikolaos Mavridis | TEDxTransmedia – Video


The beauty of Ethical Robots | Nikolaos Mavridis | TEDxTransmedia
This talk was given at a local TEDx event, produced independently of the TED Conferences. With his extensive knowledge in Robotics and Artificial Intelligence, Dr. Mavridis doesn't explore...

By: TEDx Talks

Read more:

The beauty of Ethical Robots | Nikolaos Mavridis | TEDxTransmedia - Video

Facebook to prevent you from posting content that you might regret later

Washington: Facebook is building an artificial intelligence tool that would warn people when they are about to do something they might regret later, such as uploading an embarrassing photo on the social networking site.

Yann LeCun, who heads the Facebook Artificial Intelligence Research (Fair) lab, and his team are now laying the basic groundwork for the tool.

LeCun wants to build a kind of Facebook digital assistant that will recognise when you are uploading an embarrassing photo from a late-night party.

Facebook is building an artificial intelligence tool that would warn people when they are about to do something they might regret later.

In a virtual way, LeCun said, this assistant would tap you on the shoulder and say: "Uh, this is being posted publicly. Are you sure you want your boss and your mother to see this?"

Such a tool would rely on image recognition technology that can distinguish between your drunken self and your sober self, 'Wired' reported.
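
Based on that description, a pre-posting check might look something like the sketch below. It is a hypothetical mock-up: the classifier scores are hard-coded so the example runs on its own, where a real assistant would run a trained image-recognition network over the photo, and Facebook has not published any actual design.

```python
# A hypothetical mock-up of the pre-posting check described above, not
# Facebook's design. classify() returns invented scores so the sketch is
# self-contained; a real system would score the photo with a deep network.
from typing import Dict

def classify(photo_path: str) -> Dict[str, float]:
    # Stand-in for an image-recognition model scoring the photo against
    # scene categories; these numbers are hard-coded for the demo.
    return {"late_night_party": 0.87, "family_dinner": 0.08, "landscape": 0.05}

RISKY = {"late_night_party"}
THRESHOLD = 0.75

def pre_post_check(photo_path: str, audience: str) -> bool:
    scores = classify(photo_path)
    risk = max(scores.get(c, 0.0) for c in RISKY)
    if audience == "public" and risk > THRESHOLD:
        print("Uh, this is being posted publicly. Are you sure you want "
              "your boss and your mother to see this?")
        return False          # hold the post until the user confirms
    return True

pre_post_check("party.jpg", audience="public")
```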

The larger aim, LeCun said, is to create things like the digital assistant that can closely analyse not only photos but all sorts of other stuff posted to Facebook.

"You need a machine to really understand content and understand people and be able to hold all that data," he said.

LeCun's Facebook lab has already developed algorithms that examine a user's overall Facebook behaviour in an effort to identify the right content for their news feed - content they are likely to click on - and they will soon analyse the text users type into status posts, automatically suggesting relevant hashtags.
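
A bare-bones version of that hashtag feature could be as simple as the sketch below, which just surfaces the most frequent non-stopword keywords in a status. Facebook's real algorithms are not public; the stopword list and scoring here are assumptions for illustration.

```python
# A minimal sketch of hashtag suggestion from status text, assuming a
# simple keyword-frequency approach; Facebook's actual method is unpublished.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "at",
             "is", "it", "my", "we", "i", "for", "with", "so", "was"}

def suggest_hashtags(status: str, k: int = 3) -> list:
    words = re.findall(r"[a-z']+", status.lower())
    keywords = [w for w in words if w not in STOPWORDS and len(w) > 3]
    return ["#" + w for w, _ in Counter(keywords).most_common(k)]

print(suggest_hashtags("Amazing sunset surfing at the beach, surfing all day!"))
# ['#surfing', '#amazing', '#sunset']
```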

LeCun and his team are also looking towards AI systems that can understand Facebook data in more complex ways. "Imagine that you had an intelligent digital assistant which would mediate your interaction with your friends and also with content on Facebook," LeCun said.

See the original post here:

Facebook to prevent you from posting content that you might regret later
