IBM funds group to build artificial intelligence apps for Watson supercomputer

What Will You Do With Watson? (Image: IBM)

IBM has announced that it will invest $1 billion (£600 million) in forming a new division that will fund outside developers and companies to create new artificial intelligence apps for the Watson supercomputer that famously beat humans on the US TV quiz show Jeopardy!

The 2,000-employee-strong Watson Group will be based in New York City and will be headed up by Michael Rhodin, who was previously senior vice president of IBM's software solutions group.

It aims to support startups that are building cognitive apps through the Watson Developers Cloud. Since the Watson Ecosystem was announced in November last year, over 750 businesses and entrepreneurs have applied in the hope of building the next generation of cognitive apps for Watson, IBM claims.

Watson won Jeopardy! in 2011, but since proving its intelligence to the US telly-watching public it has developed both physically and in the range of services it provides. Originally the size of a master bedroom (is that a scientific measurement, IBM?), the supercomputer has shrunk to the size of three pizza boxes, a reduction of 90 percent.

It is also being used in a variety of industries, including healthcare, retail and banking, to deal with big data. Particularly interesting is its work in the medical field, where it has been devising personalised treatment plans for individual cancer patients. IBM has taught it to respond to queries in new formats, including by drawing pictures, and is currently trying to train Watson to analyse the content of videos, rather than just their metadata.

"IBM has transformed Watson from a quiz-show winner, into a commercial cognitive computing breakthrough that is helping businesses engage customers, healthcare organisations personalise patient care, and entrepreneurs build businesses," said Rhodin in a statement.

"Watson is one of the most significant innovations in IBM's 100 year history, and one that we want to share with the world. With these investments we strive to make new markets, reach new buyers and transform industries and professions."


Computer science: The learning machines


Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain, a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days of looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and cats [1].

Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.

Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.

Such advances make for exciting times in artificial intelligence (AI), the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.

"AI has gone from failure to failure, with bits of progress. This could be another leapfrog," says Yann LeCun, director of the Center for Data Science at New York University and a deep-learning pioneer.

"Over the next few years we'll see a feeding frenzy. Lots of people will jump on the deep-learning bandwagon," agrees Jitendra Malik, who studies computer image recognition at the University of California, Berkeley. But in the long term, deep learning may not win the day; some researchers are pursuing other techniques that show promise. "I'm agnostic," says Malik. "Over time people will decide what works best in different domains."

Back in the 1950s, when computers were new, the first generation of AI researchers eagerly predicted that fully fledged AI was right around the corner. But that optimism faded as researchers began to grasp the vast complexity of real-world knowledge, particularly when it came to perceptual problems such as what makes a face a human face, rather than a mask or a monkey face. Hundreds of researchers and graduate students spent decades hand-coding rules about all the different features that computers needed to identify objects. "Coming up with features is difficult, time consuming and requires expert knowledge," says Ng. "You have to ask if there's a better way."


In the 1980s, one better way seemed to be deep learning in neural networks. These systems promised to learn their own rules from scratch, and offered the pleasing symmetry of using brain-inspired mechanics to achieve brain-like function. The strategy called for simulated neurons to be organized into several layers. Give such a system a picture and the first layer of learning will simply notice all the dark and light pixels. The next layer might realize that some of these pixels form edges; the next might distinguish between horizontal and vertical lines. Eventually, a layer might recognize eyes, and might realize that two eyes are usually present in a human face (see 'Facial recognition').
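To make that layered picture concrete, here is a minimal sketch of a feedforward network in Python with NumPy. The layer sizes, random weights and sigmoid activation are illustrative assumptions, not the architecture of Google Brain or any system described above; a real deep-learning system would learn its weights from data rather than leave them random.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashing non-linearity applied after each layer of connections
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: raw pixels -> edges -> parts -> "face?" score
layer_sizes = [64, 32, 16, 1]

# Random "connection strengths"; training would adjust these from experience
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(pixels):
    """Pass a flattened image through each layer in turn."""
    activation = pixels
    for w in weights:
        activation = sigmoid(activation @ w)
    return activation

# A fake 8x8 "image": the first layer sees only raw light/dark pixel values
image = rng.random(64)
print(forward(image))  # one number: the network's (untrained) face score
```

Each matrix multiplication plays the role of one layer of connections, so the output depends on progressively more abstract combinations of the raw pixels, mirroring the pixels-to-edges-to-eyes progression described above.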


A Former Mars Rover Scientist Says The Boomer Generation Could Be Independent Forever

When the Mars Rover was in production in the '90s, NASA senior computer scientist Rich Levinson noticed a limitation in its ability to make reactive decisions. The Rover could avoid falling off a cliff, but it didn't have the capability to backtrack or plan other routes of navigation. That's when he learned about a little-known term for a much-needed brain process: executive function.

According to the National Center for Learning Disabilities, executive function is a set of mental processes needed to perform activities such as planning, organizing, strategizing, paying attention to and remembering details, and managing time and space. Impairment of executive function, which ranges from mild to severe, affects more than 16 million people, according to the most recent CDC report. The growing senior population is particularly at risk: seniors are expected to comprise 20% of the total U.S. population by the year 2050, according to the U.S. Census Bureau.

At the time, executive function wasn't talked about much among clinicians, let alone the public, yet Levinson connected the dots.

"I was looking at the brain's operational properties at the same time as we were studying autonomy for robotics and realized there was a connection between executive function and the robotic systems that we were trying to combine planning and reaction to Artificial Intelligence," he recollects.

"If you increase your planning time, you can explore more possibilities and then compile them down into reflexes," he explains. "Then when you get into a situation where you have to make a very quick reactive response, you actually can do a little more." In 1996, Levinson started NASA spin-off BrainAid to address those very problems.

Enter the Planning and Execution Assistant and Trainer (PEAT), BrainAid's customizable smart-planning software. Unlike other task-managing systems, PEAT automatically reorganizes a person's schedule based on their real-time task approval and customizable app integration. PEAT's cloud-based dashboard allows clinicians to view and share data collected from users' actions. This clinical integration has helped teachers at the autism organization PACE log the outbursts and behaviors of students with autism.
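BrainAid hasn't published PEAT's internals, so as a purely hypothetical illustration of real-time replanning, the following Python sketch reorders a schedule whenever tasks are approved or deferred; every name and field here is invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    name: str
    duration: timedelta
    approved: bool = False  # set True when the user confirms completion

def reorganize(tasks, now):
    """Reschedule pending tasks back-to-back from the current time.

    Approved tasks drop off; everything else shifts, mimicking the
    real-time replanning described above.
    """
    schedule = []
    cursor = now
    for task in tasks:
        if task.approved:
            continue  # already done, nothing to reschedule
        schedule.append((cursor, task.name))
        cursor += task.duration
    return schedule

tasks = [Task("Take medication", timedelta(minutes=5), approved=True),
         Task("Physical therapy", timedelta(minutes=30)),
         Task("Prepare lunch", timedelta(minutes=20))]

for start, name in reorganize(tasks, datetime(2014, 1, 10, 9, 0)):
    print(start.strftime("%H:%M"), name)
```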

PEAT's customizable plug-in strategies help patients in overwhelming situations: selecting an icon walks them through therapist-suggested prompts such as "Wait five seconds before you speak" or "Take a walk."

"We log when the patient presses the reaction icon and select a coping strategy so the therapist knows when they are doing the strategies on their own, Levinson says.

Entering its seventh year of funding by the U.S. Department of Defense, PEAT is helping ameliorate another four-lettered cognitive killer: PTSD. Harriet Katz Zeiner, a clinical neuropsychologist at the Palo Alto Department of Veterans Affairs, says PEAT is a much-desired invisible aid for the vets. "We don't find PEAT being rejected like others [cognitive aids] because it's great at being unobtrusive in a social setting," she says. "It's not like a cane, and the world doesn't have to know that it truly is an assistant."

PEAT began integrating with wearable assistance in 2007, with an experimental RFID-reader bracelet called iBracelet that would fail but set the stage for a new wave of wearables. Today, BrainAid is working with the AFrame Digital smartwatch to monitor heart rates and send the data to PEAT, which will then automatically cue the user with coping strategies. Last summer, the company began integrating with the Pebble smartwatch, which acts as a leash for mobile devices and displays tasks on its simple interface.
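None of this integration code is public; as a hypothetical sketch of the loop described above (smartwatch reading in, coping cue out), with invented names and an invented threshold:

```python
RESTING_MAX = 100  # beats per minute; an invented threshold

COPING_PROMPT = "Take a walk."  # one of the therapist-suggested prompts

def on_heart_rate(bpm, cue):
    """Called for each smartwatch reading; cues a strategy when elevated."""
    if bpm > RESTING_MAX:
        cue(COPING_PROMPT)

# Simulated stream of readings arriving from the watch
for reading in (72, 88, 112):
    on_heart_rate(reading, cue=lambda msg: print("PEAT cue:", msg))
```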


History of artificial intelligence – Wikipedia, the free …

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true. Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again. This cycle of boom and bust, of "AI winters" and summers, continues to haunt the field. Undaunted, there are those who make extraordinary predictions even now.[2]

Progress in AI has continued, despite the rise and fall of its reputation in the eyes of government bureaucrats and venture capitalists. Problems that had begun to seem impossible in 1970 have been solved and the solutions are now used in successful commercial products. However, no machine has been built with a human level of intelligence, contrary to the optimistic predictions of the first generation of AI researchers. "We can only see a short distance ahead," admitted Alan Turing, in a famous 1950 paper that catalyzed the modern search for machines that think. "But," he added, "we can see much that must be done."[3]

McCorduck (2004) writes "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[5] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[6] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and in speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[9] Hero of Alexandria,[10] Al-Jazari and Wolfgang von Kempelen.[12] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."[13][14]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or "formal", reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[15]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[16] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such a way as to produce all possible knowledge.[17] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[18]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[19] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[20] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and say to each other (with a friend as witness, if they liked): Let us calculate."[21] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.


EmoSPARK: An "artificial intelligence console" that wants to make you happy

For as long as we've been imagining emotionally intelligent machines, we have pictured something at least mildly resembling the human form. From George Lucas' C-3PO to the recently developed Robokind Zeno R25, our vision for robotic companionship has typically involved two arms and two legs. Taking a different approach is Patrick Rosenthal, inventor of the EmoSpark console, who aims to bring artificial intelligence to consumers in the form of a cube small enough to fit in the palm of your hand.

The EmoSpark console is a 90 x 90 x 90 mm (3.5 x 3.5 x 3.5 in) Wi-Fi- and Bluetooth-enabled cube that interacts with a user's emotions using a combination of content analysis and face-tracking software. In addition to distinguishing between individual members of the household, the device uses custom-developed technology that Rosenthal says enables it to differentiate between basic human feelings and create emotion profiles of not just everybody it interacts with, but also itself.

"While the technology behind face-tracking is well established, what we've done differently is use it to track and process different emotions," Rosenthal tells Gizmag. "The EmoSpark Cube contains a unique chip invented by myself called the Emotional Processing Unit. This allows the cube to build up its own Emotional Profile Graph (EPG) as it interacts with its users. The cube saves all this information and, just like a fingerprint, will over time keep an emotional print of each family member with which it interacts."

Users communicate with the cube by either typing or talking to it through their television, or remotely via a smartphone, tablet or computer. By analyzing this data and using its face-tracking technology, the cube is designed to acquaint itself with the user over time by gauging their likes, dislikes and different moods based on eight primary human emotions: joy, sadness, trust, disgust, fear, anger, surprise and anticipation.
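EmoSpark's actual data model hasn't been disclosed; as a purely illustrative assumption, a per-user profile could be a running score over those eight emotions, nudged toward each new observation. The Python sketch below (all names invented) also shows why such learning would slow over time: each update moves a score only a fraction of the remaining distance.

```python
# The eight primary emotions named in the article (after Plutchik)
EMOTIONS = ("joy", "sadness", "trust", "disgust",
            "fear", "anger", "surprise", "anticipation")

class EmotionProfile:
    """Hypothetical per-user profile: one running score per emotion."""

    def __init__(self, learning_rate=0.1):
        self.scores = {e: 0.0 for e in EMOTIONS}
        self.learning_rate = learning_rate

    def observe(self, emotion, intensity):
        # Move the stored score a fraction of the way toward the new
        # reading, so early observations shift the profile most and
        # later ones merely refine it.
        old = self.scores[emotion]
        self.scores[emotion] = old + self.learning_rate * (intensity - old)

profile = EmotionProfile()
profile.observe("joy", 0.8)
profile.observe("joy", 0.8)
print(round(profile.scores["joy"], 3))  # 0.152: creeping toward 0.8
```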

Initially, the cube works to improve your mood and overall happiness by connecting to and recommending particular songs, videos or content on sites such as Facebook and YouTube. As the relationship between the cube and user develops, the device becomes more skilled in the art of conversation and more nuanced in its offers of comfort, something Rosenthal considers a significant mark of progress in artificial intelligence and integral to the technology.

"The major breakthrough was in developing a credible model to synthesize emotions in a machine, and creating a machine that can reply to a question not based on a script, but on a system compatible with the human emotional spectrum," says Rosenthal. "A system that will be able to reply to a free association test, not only based on logic, but also based on its emotional status at the time you ask it a question."

This means that over time the cube will develop a personality of its own, at a rate largely determined by how often the user engages with it. "The emotional learning will never end; the cube will always learn and its EPG will change over time, but it's logarithmic," said Rosenthal. "It will learn much more when it is young and developing. I would say it depends more on the frequency of use than time."

While confident he has created a foundation for the assimilation of artificially intelligent machines into the consumer space, Rosenthal hopes to harness a keen general interest in artificial intelligence by handing control over to developers. "The cube will have an open API (Application Programming Interface) to allow developers to create new blocks of technologies in the form of apps in the Google Play store," said Rosenthal. "So the conversational engine, voice and speech recognition are all modules that will be upgraded or will be replaced, so the user can make their own cube."

The EmoSpark cube also doubles as an e-learning tool. It comes connected to Freebase, a collection of online knowledge owned by Google, which Rosenthal says enables it to answer questions on over 39 million topics. It can also be used to control robotic devices, bringing emotional feedback capabilities to a NAO robot or turning a Sphero ball into a virtual pet with its own emotions, for example.

Android-powered, the cube contains a 1.8 GHz CPU along with 2 GB of DDR3 memory and Rosenthal's custom-built 20 MHz EPU (Emotion Processing Unit). It has an internal antenna, built-in 802.11b/g/n Wi-Fi, and USB 2.0, MicroUSB and HDMI 1.4 ports.


Artificial intelligence mobile devices for talk to flora and fauna (Unedited) – Video


Artificial intelligence mobile devices for talk to flora and fauna (Unedited)
Dilan Wijerathne is studying at the Department of Electrical and Computer Engineering of the Open University of Sri Lanka. He is conducting a project, "JEEVA", exploratio...

By: Dilan Buddhika


A Small Talk At The Back Of Beyond – Full Gameplay – No Commentary – Artificial Intelligence Zzz – Video


A Small Talk At The Back Of Beyond - Full Gameplay - No Commentary - Artificial Intelligence Zzz
Very interesting, however annoying it is. Good concept though! Hope you enjoy this! Leaving a LIKE means a lot to me! Thank you!

By: redwhiteTAPE


Intelligent disaster relief

The "fragmented" coordination between relief actors in the Philippines following Typhoon Haiyan last month underscores the need for artificial intelligence to streamline disaster response, says a team behind such an effort. The ORCHID project, a consortium of UK universities and private firms, aims to make this possible by combining human and artificial intelligence into an efficient complementary unit known as a Human Agent Collective (HAC).

The computer systems being developed can assume tasks such as directing surveillance drones, managing resources and planning searches, says David Jones, head of Rescue Global, the disaster response organization responsible for testing the software next year.

"Coordination of such a large response [after a disaster] is so challenging without technological assistance that makes data more accessible," he sayson mission in the Philippines.

"Bringing humans and artificial intelligence together is the only way to get the job done better."

Computers' data-crunching abilities mean they are good at making sense of the huge amounts of information generated during an emergency, from local status reports and social media to the array of organizations involved in the relief effort.

By collecting and analyzing these data, HAC systems can flexibly implement a number of activities vital for disaster response, says Jones.

These include planning the flight paths of surveillance drones, verifying the authenticity of information coming in from social media, facilitating data sharing and organizing human teams based on their skill sets and current needs on the ground.

Machines not only complete many of these jobs better than humans, but by taking on these complex calculations they allow experts to concentrate on more nuanced tasks such as analyzing the content of photographs or video, and strategic planning.

For HAC systems to be successful, this division of labor must be accounted for and the right balance found between artificial and human input, says Sarvapali Ramchurn, ORCHID applications theme leader from the UK-based University of Southampton.
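ORCHID hasn't published the allocation algorithms referenced above, but as a toy illustration of skill-based team assignment in the spirit of a HAC, here is a short Python sketch; the greedy matching, task names and skill sets are all invented for the example. A production system would also weigh urgency, location and partial skill coverage rather than this one-shot match.

```python
def assign_teams(tasks, teams):
    """Greedily match each task to the free team sharing the most skills.

    A toy stand-in for the coalition-formation a HAC might perform.
    """
    assignments = {}
    free = dict(teams)  # team name -> set of skills
    for task, needed in tasks.items():
        best = max(free, key=lambda t: len(free[t] & needed), default=None)
        if best is None:
            break  # more tasks than teams; the rest must wait
        assignments[task] = best
        del free[best]
    return assignments

tasks = {"search collapsed school": {"search", "medical"},
         "verify flood reports": {"gis", "social-media"}}
teams = {"alpha": {"search", "rescue", "medical"},
         "bravo": {"gis", "social-media", "logistics"}}
print(assign_teams(tasks, teams))
# {'search collapsed school': 'alpha', 'verify flood reports': 'bravo'}
```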


Artificial Intelligence: Skyrim – The Elder Scrolls V, Video 02 – Video


Artificial Intelligence: Skyrim - The Elder Scrolls V, Video 02
Review of the updated AI mod for Skyrim! To download this mod, visit either of the sites below: Skyrim Workshop on Steam http://steamcommunity.com/sharedfiles/filedetails/?id=174433163 Skyrim...

By: Ether Dynamics


Artificial Intelligence | Neuro AI – Artificial neural network

Defining Artificial Intelligence

The phrase "Artificial Intelligence" was coined by John McCarthy in 1956. One representative definition pivots on comparing intelligent machines with human beings. Another definition is concerned with the performance of machines which historically have been judged to lie within the domain of intelligence.

Yet none of these definitions has been universally accepted, probably because the word intelligence refers to an immeasurable quantity. A better, and probably the most accurate, definition of artificial intelligence would be: an artificial system capable of planning and executing the right task at the right time, rationally. Or, far simpler: a machine that can act rationally.

With all this, a common question arises:

Do rational thinking and acting include all the characteristics of an intelligent system?

If so, how does it represent behavioral intelligence such as learning, perception and planning?

If we think a little, a system capable of reasoning would be a successful planner. Moreover, a system can act rationally only after acquiring knowledge from the real world. So the property of perception is a prerequisite for building up knowledge from the real world.

With all this we may conclude that a machine that lacks perception cannot learn, and therefore cannot acquire knowledge.

To understand the practical meaning of artificial intelligence, we must illustrate some common problems. All problems that are dealt with using artificial intelligence solutions share the common term state.

A state represents the status of a solution at a given step of the problem-solving procedure. The solution of a problem is a collection of states. The problem-solving procedure, or algorithm, applies an operator to a state to get the next state; it then applies another operator to the resulting state to derive a new state, and so on until a goal state is reached.
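In code, this state-and-operator view maps directly onto a search loop. Here is a minimal Python sketch of breadth-first search over states; the toy problem (reach 10 from 1 using "double" and "add one" operators) is an invented example, not from the original article.

```python
from collections import deque

def solve(start, goal, operators):
    """Breadth-first search: repeatedly apply operators to states until
    the goal is reached; the returned path is the collection of states."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no sequence of operators reaches the goal

# Toy problem: reach 10 from 1 with "double" and "add one" operators
print(solve(1, 10, [lambda s: s * 2, lambda s: s + 1]))
# [1, 2, 4, 5, 10]
```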


9th Int. gvSIG Conference: Integration of collective and artificial intelligence in i3Geo – Video


9th Int. gvSIG Conference: Integration of collective and artificial intelligence in i3Geo
9th International gvSIG Conference: Conceptualizing the integration of collective and artificial intelligence in i3Geo software.

By: gvsig
