CES 2015: THE FURO-S SMART SERVICE ROBOT WANTS TO HELP YOU CATCH YOUR FLIGHT – Video


CES 2015: THE FURO-S SMART SERVICE ROBOT WANTS TO HELP YOU CATCH YOUR FLIGHT
Meet the Furo-S Smart Service Robot, an intelligent kiosk equipped with strong artificial intelligence and ready to serve. Created by Future Robot, the Furo-S (lovingly nicknamed Rosie after the...

By: Popular Science

See the original post here:

CES 2015: THE FURO-S SMART SERVICE ROBOT WANTS TO HELP YOU CATCH YOUR FLIGHT - Video

Bitspiration 2014: Kickstarter – neuro:on sleep well (K. Adamczyk) – Video


Bitspiration 2014: Kickstarter - neuro:on sleep well (K. Adamczyk)
Kamil Adamczyk (22) is the founder and CEO of Intelclinic, a consumer electronics company that creates devices which use artificial intelligence methods for precise biological signal processing...

By: PROIDEAconferences

More here:

Bitspiration 2014: Kickstarter - neuro:on sleep well (K. Adamczyk) - Video

The intelligent enterprise: how businesses will use cognitive computing in 2015

'What we will start to see more of in the short term is improved analysis and speed, which will make it appear more like the computer is thinking, but it's a process that relies on us.' - Hugh Cox, Rosslyn Analytics

Speaking to students at MIT in October, Elon Musk, engineer and CEO of Tesla Motors and SpaceX, called artificial intelligence 'our biggest existential threat.' He may be the man behind the first commercial flights to the International Space Station, but it's hard to avoid feeling he may have his head in the clouds when it comes to what is science and what is science fiction. At the same time we have films like 2014's 'Her' that depict a not-so-distant future where smart operating systems can have their own emotions and identities, and eventually become so intelligent that they supersede us. While autonomous AI has been a trope in our culture for many years, the hype and speculation certainly haven't abated in 2014.

But far from excluding humans, AI systems based on cognitive computing technology have the potential to augment our reasoning capabilities and empower us to make better-informed real-time decisions, and are already doing so.

'People will always remain key in the decision-making process; cognitive computing will just require them to impact decisions in a different way and at a different stage,' explains Hugh Cox, chief data officer of Rosslyn Analytics. 'Human expertise, knowledge and experience will continue to be collected, meaning that as time progresses computers will become more adept at making decisions, removing the need for human interaction. But it's important to note: the most advanced cognitive computing tool will never replace humans, because we have contextual insight that computers simply don't possess.'

> See also: How artificial intelligence and augmented reality will change the way you work

According to IBM's Senior Vice President John E. Kelly, we're on the cusp of the 'third era' of computing: the era of cognitive computing. In the age of tabulating machines, vacuum tube systems and the first calculators, we fed data directly into computers on punch cards. Later on, in the programmable era, we learnt how to take processes and put them into the machine, controlled by the programming we impose on the system. But in the forthcoming era of cognitive computing, computers will work directly with humans 'in a synergetic association' where the relationships between human and computer blur.

The main benefits of this kind of synergy will be the ability to access the best of both worlds: productivity and speed from machines, emotional intelligence and the ability to handle the unknown from humans.

'They will interact in such a way that the computer helps the human unravel vast stores of information through its advanced processing speeds,' says Kelly, 'but the creativity of the human creates the environment for such an unlocking to occur.'

Reigning champion

The most well-known representative of 'cognitive computing' right now is IBM's Watson system. In 2011, the computer famously appeared on, and won, the US gameshow 'Jeopardy!' by providing questions in response to clues posed in natural human language, which included nuances such as puns, slang and jargon. It was able to quickly execute hundreds of algorithms simultaneously to find the right answer, ranking its confidence in their accuracy and responding within three seconds.
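As a rough, hypothetical illustration of that 'many algorithms, ranked confidence' idea (the candidate answers, scorer functions and threshold below are invented for illustration; Watson's actual DeepQA pipeline is vastly larger):

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def best_answer(clue, candidates, scorers, threshold=0.5):
    """Score every candidate answer with every scoring algorithm in
    parallel, rank candidates by mean confidence, and answer only if
    the leader clears a confidence threshold (Watson-style buzzing)."""
    with ThreadPoolExecutor() as pool:
        confidence = {
            cand: mean(pool.map(lambda s, c=cand: s(clue, c), scorers))
            for cand in candidates
        }
    top = max(confidence, key=confidence.get)
    return (top, confidence[top]) if confidence[top] >= threshold else (None, 0.0)
```

The threshold matters: on 'Jeopardy!' a wrong answer costs points, so declining to buzz when confidence is low is part of the strategy, not a failure mode.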

See the article here:

The intelligent enterprise: how businesses will use cognitive computing in 2015

Minimax – Alpha Beta Pruning (Artificial Intelligence) by Ice Blended – Video


Minimax - Alpha Beta Pruning (Artificial Intelligence) by Ice Blended
This is Assignment 3 for the Artificial Intelligence subject. Our group 'Ice Blended' was instructed by our lecturer Pn. Hamimah Mohd Jamil to produc...

By: khalishah ulfah suharto

See the original post here:

Minimax - Alpha Beta Pruning (Artificial Intelligence) by Ice Blended - Video
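For readers unfamiliar with the technique the video covers, here is a minimal sketch of minimax with alpha-beta pruning; the game-tree interface (children, is_terminal, evaluate) is a hypothetical stand-in for whatever board representation the assignment used:

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot influence the final decision."""
    if depth == 0 or node.is_terminal():
        return node.evaluate()
    if maximizing:
        value = float("-inf")
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the minimizer will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in node.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # alpha cutoff: the maximizer will avoid this branch
                break
        return value
```

Pruning never changes the value returned; it only skips branches a perfect opponent would never allow, which is why the same search budget reaches much greater depth.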

Let’s Play SIMCITY (A look at the intelligence of the AI Municipal Bus system) – Video


Let's Play SIMCITY (A look at the intelligence of the AI Municipal Bus system)
I decided to watch my newly built Municipal bus terminal run and realized why we have tremendous wait times for passengers. It is not because we don't have enough buses, it is because the AI...

By: Don Johnson

Continue reading here:

Let's Play SIMCITY (A look at the intelligence of the AI Municipal Bus system) - Video

Artificial Intelligence and Manufacturing (Part One)

Artificial Intelligence and Manufacturing (Part One)

American manufacturing has come a long way in automating factories with robots and computers. Over the last 40 years palletizer systems and industrial robots have replaced humans in many back-breaking and repetitive jobs. Millions of clerical jobs have also been replaced by computers and software.

These advances in technology, along with movies like the Terminator and Star Wars, have led to a lot of speculation about how far artificial intelligence can be developed. In the Terminator film, someone builds a microprocessor so advanced that it makes machines self-aware; they then connect to all the computers on the internet and trigger an atomic war. The suggestion is that microprocessors can become so sophisticated that they can think like humans.

In 1965, Dr. Herbert Simon, one of the founders of artificial intelligence (AI), said, 'Machines will be capable in 20 years of doing any work a man can do.' Marvin Minsky, another AI guru from MIT, said, 'Within a generation the problem of creating artificial intelligence will be substantially solved.' Moshe Vardi, a computer scientist at Rice University in Houston, said, 'Everything that humans can do, machines can do.'

Professors at universities and computer scientists also add to the excitement by promoting artificial intelligence's futuristic potential as they try to get their share of federal grant money. The big question that comes up is: when will computers be able to emulate humans and become self-aware and intelligent?

Link:

Artificial Intelligence and Manufacturing (Part One)

No need to panic: artificial intelligence has yet to create a doomsday machine

By: Tony Prescott, The Conversation

[Image: Is artificial super-intelligence lurking nearby, under wraps? eugenia_loli, CC BY]

The possibility that advanced artificial intelligence (AI) might one day turn against its human creators has been repeatedly raised of late. Renowned physicist Stephen Hawking, for instance, surprised by the ability of his newly-upgraded speech synthesis system to anticipate what he was trying to say, has suggested that, in the future, AI could surpass human intelligence and ultimately bring about the end of humankind.

Hawking is not alone in worrying about superintelligent AI. A growing number of futurologists, philosophers and AI researchers have expressed concerns that artificial intelligence could leave humans outsmarted and outmanoeuvred. My view is that this is unlikely, as humans will always use an improved AI to improve themselves. A malevolent AI will have to outwit not only raw human brainpower but the combination of humans and whatever loyal AI-tech we are able to command, a combination that will best either on its own.

There are many examples already: Clive Thompson, in his book 'Smarter Than You Think', describes how in world championship chess, where AIs surpassed human grandmasters some time ago, the best chess players in the world are not humans or AIs working alone, but human-computer teams.

While I don't believe that surpassing raw (unaided) human intelligence will be the trigger for an apocalypse, it does provide an interesting benchmark. Unfortunately, there is no agreement on how we would know when this point has been reached.

Beyond the Turing Test

An established benchmark for AI is the Turing Test, developed from a thought experiment described by the late, great mathematician and AI pioneer Alan Turing. Turing's practical solution to the question: "Can a machine think?" was an imitation game, where the challenge is for a machine to converse on any topic sufficiently convincingly that a human cannot tell whether they are communicating with man or machine.

In 1991 the inventor Hugh Loebner instituted an annual competition, the Loebner Prize, to create an AI, or what we would now call a chatbot, that could pass Turing's test. One of the judges at this year's competition, Ian Hocking, reported in his blog that if the competition entrants represent our best shot at human-like intelligence, then success is still decades away; AI can only match the tip of the human intelligence iceberg.

I'm not overly impressed either by the University of Reading's recent claim to have matched the conversational capability of a 13-year-old Ukrainian boy speaking English. Imitating child-like intelligence, and the linguistic capacity of a non-native speaker, falls well short of meeting the full Turing Test requirements.

Indeed, AI systems equipped with pattern-matching algorithms, rather than language understanding, have been able to superficially emulate human conversation for decades. For instance, in the 1960s the Eliza program was able to give a passable impression of a psychotherapist. Eliza showed that you can fool some people some of the time, but the fact that Loebner's US$25,000 prize has never been won demonstrates that, performed correctly, the Turing Test is a demanding measure of human-level intelligence.
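To make the pattern-matching point concrete, here is a minimal ELIZA-style sketch; the rules below are invented for illustration and are far cruder than the original program's script:

```python
import re
import random

# Each rule pairs a pattern with canned reflections of the matched text.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),    ["Tell me more about your {0}."]),
]

def respond(utterance):
    """Return a reflection for the first matching pattern, with no
    understanding of what the words actually mean."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."

print(respond("I am worried about my job"))
# e.g. "Why do you say you are worried about my job?"
```

The program never models meaning; it only reflects surface text back at the user, which is why such systems fool people in narrow settings but collapse under a proper Turing Test interrogation.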

Visit link:

No need to panic: artificial intelligence has yet to create a doomsday machine

AI still can't recognise these simple pictures

Look at these black and yellow bars and tell me what you see. Not much, right? Ask state-of-the-art artificial intelligence the same question, however, and it will tell you they're a school bus. It will be over 99 percent certain of this assessment. And it will be totally wrong.

Computers are getting truly, freakishly good at identifying what they're looking at. They can't look at this picture and tell you it's a chihuahua wearing a sombrero, but they can say that it's a dog wearing a hat with a wide brim. A new paper, however, directs our attention to one place these super-smart algorithms are totally stupid. It details how researchers were able to fool cutting-edge deep neural networks using simple, randomly generated imagery. Over and over, the algorithms looked at abstract jumbles of shapes and thought they were seeing parrots, ping pong paddles, bagels, and butterflies.

The findings force us to acknowledge a somewhat obvious but hugely important fact: computer vision and human vision are nothing alike. And yet, since it increasingly relies on neural networks that teach themselves to see, we're not sure precisely how computer vision differs from our own. As Jeff Clune, one of the researchers who conducted the study, puts it, when it comes to AI, "we can get the results without knowing how we're getting those results."

Evolving Images to Fool AI

One way to find out how these self-trained algorithms get their smarts is to find places where they are dumb. In this case, Clune, along with PhD students Anh Nguyen and Jason Yosinski, set out to see if leading image-recognising neural networks were susceptible to false positives. We know that a computer brain can recognise a koala bear. But could you get it to call something else a koala bear?

To find out, the group generated random imagery using evolutionary algorithms. Essentially, they bred highly effective visual bait. A program would produce an image, and then mutate it slightly. Both the copy and the original were shown to an "off the shelf" neural network trained on ImageNet, a data set of 1.3 million images, which has become a go-to resource for training computer vision AI. If the copy was recognised as something -- anything -- in the algorithm's repertoire with more certainty than the original, the researchers would keep it, and repeat the process. Otherwise, they'd go back a step and try again. "Instead of survival of the fittest, it's survival of the prettiest," says Clune. Or, more accurately, survival of the most recognisable to a computer as an African Gray Parrot.
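That loop is plain hill-climbing. A toy sketch of the idea, where confidence(image, label) is a hypothetical stand-in for the probability the ImageNet-trained network assigns to a label:

```python
import numpy as np

def evolve_fooling_image(confidence, label, steps=10_000, shape=(64, 64, 3)):
    """Hill-climb a random image toward high classifier confidence
    for `label`, keeping a mutant only if the network is more certain."""
    rng = np.random.default_rng(0)
    image = rng.uniform(0.0, 1.0, size=shape)   # random starting imagery
    best = confidence(image, label)
    for _ in range(steps):
        mutant = np.clip(image + rng.normal(0.0, 0.05, size=shape), 0.0, 1.0)
        score = confidence(mutant, label)
        if score > best:                        # keep the copy only if it
            image, best = mutant, score         # fools the network harder
    return image, best                          # e.g. 0.99+ for "school bus"
```

Note that nothing in the loop pushes the image toward looking like a school bus to a human; it only climbs the network's own confidence surface, which is exactly why the winners look like abstract noise.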

Eventually, this technique produced dozens of images that were recognised by the neural network with over 99 percent confidence. To you, they won't seem like much. A series of wavy blue and orange lines. A mandala of ovals. Those alternating stripes of yellow and black. But to the AI, they were obvious matches: Starfish. Remote control. School bus.

Peering Inside the Black Box

In some cases, you can start to understand how the AI was fooled. Squint your eyes, and a school bus can look like alternating bands of yellow and black. Similarly, you could see how the randomly generated image that triggered "monarch" would resemble butterfly wings, or how the one that was recognised as "ski mask" does look like an exaggerated human face.

But it gets more complicated. The researchers also found that the AI could routinely be fooled by images of pure static. Using a slightly different evolutionary technique, they generated another set of images. These all look exactly alike -- which is to say, like nothing at all, save maybe a broken TV set. And yet, state-of-the-art neural networks pegged them, with upward of 99 percent certainty, as centipedes, cheetahs, and peacocks.

To Clune, the findings suggest that neural networks develop a variety of visual cues that help them identify objects. These cues might seem familiar to humans, as in the case of the school bus, or they might not. The results with the static-y images suggest that, at least sometimes, these cues can be very granular. Perhaps in training, the network notices that a string of "green pixel, green pixel, purple pixel, green pixel" is common among images of peacocks. When the images generated by Clune and his team happen on that same string, they trigger a "peacock" identification. The researchers were also able to elicit an identification of "lizard" with abstract images that looked nothing alike, suggesting that the networks come up with a handful of these cues for each object, any one of which can be enough to trigger a confident identification.
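A toy illustration of that "granular cue" hypothesis; the four-pixel pattern and the "peacock" trigger are invented for illustration:

```python
import numpy as np

# Hypothetical cue: green, green, purple, green (as RGB pixel values).
CUE = np.array([[0, 255, 0],
                [0, 255, 0],
                [128, 0, 128],
                [0, 255, 0]])

def fires_on_cue(pixel_row, cue=CUE):
    """Return True if the cue appears anywhere in a row of RGB pixels.
    A classifier leaning on one such cue would call any image containing
    it a 'peacock', however the rest of the image looks -- which is how
    pure static can trigger a confident identification."""
    n, k = len(pixel_row), len(cue)
    return any(np.array_equal(pixel_row[i:i + k], cue) for i in range(n - k + 1))
```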

The fact that we're cooking up elaborate schemes to trick these algorithms points to a broader truth about artificial intelligence today: Even when it works, we don't always know how it works. "These models have become very big and very complicated and they're learning on their own," says Clune, who heads the Evolving Artificial Intelligence Laboratory at the University of Wyoming. "There's millions of neurons and they're all doing their own thing. And we don't have a lot of understanding about how they're accomplishing these amazing feats."

Read more:

AI still can't recognise these simple pictures