Artificial Intelligence: How Algorithms Make Systems Smart

Algorithm is a word that one hears used much more frequently than in the past. One of the reasons is that scientists have learned that computers can learn on their own if given a few simple instructions. That's really all that algorithms are: mathematical instructions. Wikipedia states that an algorithm "is a step-by-step procedure for calculations."
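To make that definition concrete, here is a tiny illustrative example of my own (not Wikipedia's): a step-by-step procedure for finding the largest number in a list, written out in Python.

```python
def largest(numbers):
    """A step-by-step procedure -- which is all an algorithm is."""
    biggest = numbers[0]        # Step 1: assume the first number is the largest
    for n in numbers[1:]:       # Step 2: look at each remaining number in turn
        if n > biggest:         # Step 3: if it beats the current best,
            biggest = n         #         remember it instead
    return biggest              # Step 4: report the result

print(largest([3, 41, 7, 26]))  # 41
```

Every algorithm, however sophisticated, ultimately reduces to instructions of this kind.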

Algorithms are used for calculation, data processing, and automated reasoning. Whether you are aware of it or not, algorithms are becoming a ubiquitous part of our lives. Some pundits see danger in this trend. For example, Leo Hickman (@LeoHickman) writes, "The NSA revelations highlight the role sophisticated algorithms play in sifting through masses of data. But more surprising is their widespread use in our everyday lives. So should we be more wary of their power?" ["How algorithms rule the world," The Guardian, 1 July 2013] It's a bit hyperbolic to declare that algorithms rule the world; but I agree that their use is becoming more widespread. That's because computers are playing increasingly important roles in so many aspects of our lives. I like the HowStuffWorks explanation:

To make a computer do anything, you have to write a computer program. To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then executes the program, following each step mechanically, to accomplish the end goal. When you are telling the computer what to do, you also get to choose how it's going to do it. That's where computer algorithms come in. The algorithm is the basic technique used to get the job done.

The only point that explanation gets wrong is that you have to tell a computer exactly what you want it to do step by step. Rather than follow only explicitly programmed instructions, some computer algorithms are designed to allow computers to learn on their own (i.e., facilitate machine learning). Uses for machine learning include data mining and pattern recognition. Klint Finley reports, "Today's internet is ruled by algorithms. These mathematical creations determine what you see in your Facebook feed, what movies Netflix recommends to you, and what ads you see in your Gmail." ["Wanna Build Your Own Google? Visit the App Store for Algorithms," Wired, 11 August 2014]
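The difference is easy to demonstrate. The sketch below (illustrative data, plain Python) never encodes its rule explicitly; instead it estimates a linear relationship from example points by ordinary least squares, which is the basic shape of machine learning: infer the rule from data rather than program it in.

```python
# "Learning" a rule from examples rather than being told it explicitly.
# The hidden rule behind these sample points is y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

# Ordinary least squares for the slope a and intercept b of y = a*x + b.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # 2.0 1.0 -- the program recovered the rule
print(round(a * 10 + b, 2))      # 21.0 -- a prediction for unseen x = 10
```

Real machine-learning systems fit far richer models to far messier data, but the principle is the same: the rule comes out of the examples.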

As mathematical equations, algorithms are neither good nor evil. Clearly, however, people with both good and bad intentions have used algorithms. Dr. Panos Parpas, a lecturer in the department of computing at Imperial College London, told Hickman, "[Algorithms] are now integrated into our lives. On the one hand, they are good because they free up our time and do mundane processes on our behalf. The questions being raised about algorithms at the moment are not about algorithms per se, but about the way society is structured with regard to data use and data privacy. It's also about how models are being used to predict the future. There is currently an awkward marriage between data and algorithms. As technology evolves, there will be mistakes, but it is important to remember they are just a tool. We shouldn't blame our tools."

Algorithms are nothing new. As noted above, they are simply mathematical instructions. Their use in computers can be traced back to one of the giants in computational theory: Alan Turing. Back in 1952, Turing published a set of equations that tried to explain "the patterns we see in nature, from the dappled stripes adorning the back of a zebra to the whorled leaves on a plant stem, or even the complex tucking and folding that turns a ball of cells into an organism." ["The Powerful Equations That Explain The Patterns We See In Nature," by Kat Arney (@harpistkat), Gizmodo, 13 August 2014] Turing became famous during the Second World War because he helped break the Enigma code. Sadly, Turing took his own life two years after publishing that work. Fortunately, Turing's impact on the world didn't end with his suicide. Arney reports that scientists are still using his algorithms to discover patterns in nature. Arney concludes:

In the last years of Alan Turing's life he saw his mathematical dream, a programmable electronic computer, sputter into existence from a temperamental collection of wires and tubes. Back then it was capable of crunching a few numbers at a snail's pace. Today, the smartphone in your pocket is packed with computing technology that would have blown his mind. It's taken almost another lifetime to bring his biological vision into scientific reality, but it's turning out to be more than a neat explanation and some fancy equations.

Although Turing's algorithms have been useful in identifying how patterns emerge in nature, other correlations generated by algorithms have been more suspect. Deborah Gage (@deborahgage) reminds us, "Correlation is different than causality." ["Big Data Uncovers Some Weird Correlations," The Wall Street Journal, 23 March 2014] She adds, "Finding surprising correlations has never been easier, thanks to the flood of data that's now available." Gage reports that one company found that deals closed during a new moon are, on average, 43% bigger than when the moon is full. Other weird correlations that have been discovered include: "People answer the phone more often when it's snowy, cold or very humid; when it's sunny or less humid they respond more to email. A preliminary analysis shows that they also buy more when it's sunny, although certain people buy more when it's overcast. The online lender ZestFinance Inc. found that people who fill out their loan applications using all capital letters default more often than people who use all lowercase letters, and more often still than people who use uppercase and lowercase letters correctly." Gage continues:

Are sales deals affected by the cycles of the moon? Is it possible to determine credit risk by the way a person types? Fast new data-crunching software combined with a flood of public and private data is allowing companies to test these and other seemingly far-fetched theories, asking questions that few people would have thought to ask before. By combining human and artificial intelligence, they seek to uncover clever insights and make predictions that could give businesses an advantage in an increasingly competitive marketplace.

ZestFinance Chief Executive Douglas Merrill told Gage, "Data scientists need to verify whether their findings make sense. Machine learning isn't replacing people." Part of the problem is that most machine learning systems don't combine reasoning with calculations. They simply spit out correlations whether they make sense or not. Gage reports, "ZestFinance discarded another finding from its software that taller people are better at repaying loans, a hypothesis that Mr. Merrill calls silly." By adding reasoning to machine learning systems, correlations and insights become much more useful. Part of the difficulty, writes Catherine Havasi (@havasi), CEO and co-founder of Luminoso, "is that when we humans communicate, we rely on a vast background of unspoken assumptions. We assume everyone we meet shares this knowledge. It forms the basis of how we interact and allows us to communicate quickly, efficiently, and with deep meaning." ["Who's Doing Common-Sense Reasoning And Why It Matters," TechCrunch, 9 August 2014] She adds, "As advanced as technology is today, its main shortcoming as it becomes a large part of daily life in society is that it does not share these assumptions."
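That warning can be demonstrated numerically. In the following illustrative sketch (pure random noise, with a fixed seed for reproducibility), testing enough unrelated "factors" against a single outcome all but guarantees a striking correlation by chance alone, which is exactly why findings need a sanity check from human reasoning:

```python
import random

random.seed(42)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One outcome (20 observations) and 500 candidate "factors",
# all pure noise with no causal link to the outcome whatsoever.
outcome = [random.gauss(0, 1) for _ in range(20)]
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(20)], outcome))
    for _ in range(500)
)
print(round(best, 2))  # typically well above 0.5: "strong" correlation from nothing
```

The more hypotheses an algorithm tests, the more such phantom patterns it will surface, with perfect confidence and no sense at all.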

Read the original:

Artificial Intelligence: How Algorithms Make Systems Smart

Meet Microsofts other artificial intelligence

Chris Burns

If you live in China, you may already know about Microsoft's "other" artificial intelligence. Cortana lives in the United States and is spreading across the world - but we're not talking about Cortana. We're talking about XiaoIce - aka "Little Ice."

Microsoft China is working on a service called XiaoIce. This service is already available in beta form for several apps and is being integrated with a few more in the very near future. It's pretty awesome - you speak with it, and it speaks back to you - like a text-only version of Cortana, but ready to dive deep into a wide variety of apps.

But what's it doing in China? And why isn't it here in the United States?

XiaoIce works with Sina Weibo, it's going to work with WhatsApp, and it works with text chat. So what's the deal?

Just this past month, Microsoft announced that it'd be bringing Cortana to China with Windows Phone 8.1 - its nickname was (and is) "Xiao Na." XiaoIce is a sort of derivative of that. Here's what it can do in a basic chat - translated from Mandarin:

Jerry: XiaoIce, will it rain tomorrow?

XiaoIce: Tomorrow the weather in Beijing will be nice with a high of 25°C. No rain to get you soaked.

Jerry: How about the day after tomorrow?

XiaoIce: The day after tomorrow, the weather in Beijing will also be fair, with a high of 23°C.

Read more:

Meet Microsofts other artificial intelligence

Future of Artificial Intelligence (6) | CNBC 25 Innovations | CNBC International – Video


Future of Artificial Intelligence (6) | CNBC 25 Innovations | CNBC International
Future of Artificial Intelligence is at number 6 in CNBC's top 25 Innovations to look forward to in the next 25 years. Will machines work in harmony with hum...

By: CNBC International

Continued here:

Future of Artificial Intelligence (6) | CNBC 25 Innovations | CNBC International - Video

Artificial Intelligence – Autonomous Rover 4WD with Raspberry, only one webcam and obstacles – Video


Artificial Intelligence - Autonomous Rover 4WD with Raspberry, only one webcam and obstacles
This is an Artificial Intelligence project with Rover 4WD and Raspberry. The Rover is programmed to bypass obstacles and reach the parking, using only one camera and no proximity sensor. ...

By: solsw

Read more here:

Artificial Intelligence - Autonomous Rover 4WD with Raspberry, only one webcam and obstacles - Video

Will future fighter jets be flown by ROBOTS?

The Pentagon in Virginia is planning to introduce artificial intelligence to a future generation of fighter jets. The plan is to use AI as co-pilots to humans and to help with sensory data and possibly with landings on aircraft carriers. Such technology may be used in the US Navy's upcoming F/A-XX jet, and it may also feature in the US Air Force's F-X fighter jet. Both are being designed to enter operation by 2030 at the earliest.

By Jonathan O'Callaghan for MailOnline

Published: 12:17 EST, 4 September 2014 | Updated: 15:44 EST, 4 September 2014


Who will be flying the military aircraft of tomorrow? According to the Pentagon, the job may partially fall into the hands of artificial intelligence (AI).

Reports say that both the US Navy and Air Force are planning next-generation fighters that don't have just a human pilot.

Future fighter jets may have an AI co-pilot on board that can help with sensory data in addition to autonomously landing the plane on an aircraft carrier.


Read more here:

Will future fighter jets be flown by ROBOTS?

Google launches quantum processor, artificial intelligence project

Summary: The tech giant is partnering with UCSB to build quantum processors designed for applications in the field of artificial intelligence, poaching an acclaimed physicist in the process.

Google is set to launch a new research project in order to build quantum information processors for the field of artificial intelligence.

Hartmut Neven, Google's director of engineering, announced the initiative on the tech giant's research blog. The team, led by physicist John Martinis from the University of California Santa Barbara (UCSB), will research and develop new quantum information processors based on superconducting electronics with the aim of expanding artificial intelligence technologies.

The Google executive said Martinis and his team at UCSB have made "great strides" in building superconducting quantum electronic components, and the researcher was also recently awarded the London Prize for his "pioneering advances in quantum control and quantum information processing."

While standard computers handle binary data -- expressed as zeroes and ones -- quantum computing is built on the behavior of subatomic particles. Some theorists believe that qubits, which can occupy both binary states at the same time, may be able to exploit all combinations of bits at once, which could vastly improve the speed and power of computing.
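A rough numerical sketch of that claim (amplitudes and arithmetic only -- no real quantum hardware is being modeled here): a qubit's state is a pair of amplitudes whose squared magnitudes give the measurement probabilities, and describing n qubits classically requires 2 to the power n amplitudes, which is where the "all combinations at the same time" intuition comes from.

```python
import math

# One qubit in equal superposition: amplitudes for the |0> and |1> states.
amp = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Squared magnitudes of the amplitudes give the measurement probabilities.
probs = [a * a for a in amp]
print(probs)  # both approximately 0.5

# Describing n qubits classically takes 2**n amplitudes -- the source of
# the "all combinations of bits at the same time" claim above.
for n in (10, 20, 30):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

The exponential growth in that last loop is the whole promise, and the whole difficulty, of the field.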

The new team will be hosted by the Quantum Artificial Intelligence Lab, a collaborative effort between Google, NASA Ames Research Center and the Universities Space Research Association (USRA). However, Martinis will be an employee of both the university and Google, and his team will still work with UCSB students and have access to UCSB fabrication facilities.

Neven said:

"With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture."

While Google will now set the team to building its own quantum processor designs, the company says it will continue to collaborate with D-Wave scientists and to experiment with the "Vesuvius" machine at NASA.

Going beyond autonomous, self-driving cars, Wi-Fi balloons and robots, Google has shown an increased interest in artificial intelligence over the past several years. In January, the tech giant acquired British artificial intelligence firm DeepMind for what is believed to be $400 million.

Read the original:

Google launches quantum processor, artificial intelligence project

Building a robot with human touch

Dr Nikolas Blevins, a head and neck surgeon at Stanford Health Care, and Hollin Calloway, a third-year resident, using haptic technology, which allows surgeons to practice with 3D software. Photo: Jason Henry / The New York Times

In factories and warehouses, robots routinely outdo humans in strength and precision. Artificial intelligence software can drive cars, beat grandmasters at chess and leave "Jeopardy!" champions in the dust.

But machines still lack a critical element that will keep them from eclipsing most human capabilities anytime soon: a well-developed sense of touch.

Consider Dr. Nikolas Blevins, a head and neck surgeon at Stanford Health Care who routinely performs ear operations requiring that he shave away bone deftly enough to leave an inner surface as thin as the membrane in an eggshell.

Technology will need to advance robotic touch and motion control if robots are ever to collaborate with humans in roles like food service worker, medical orderly, office secretary, or health care assistant, robotic experts say. Photo: HDT Robotics

Blevins is collaborating with roboticists J. Kenneth Salisbury and Sonny Chan on designing software that will make it possible to rehearse these operations before performing them. The program blends X-ray and magnetic resonance imaging data to create a vivid three-dimensional model of the inner ear, allowing the surgeon to practice drilling away bone, to take a visual tour of the patient's skull and to virtually "feel" subtle differences in cartilage, bone and soft tissue. Yet no matter how thorough or refined, the software provides only the roughest approximation of Blevins' sensitive touch.

"Being able to do virtual surgery, you really need to have haptics," he said, referring to the technology that makes it possible to mimic the sensations of touch in a computer simulation.

The software's limitations typify those of robotics, in which researchers lag in designing machines to perform tasks that humans routinely do instinctively. Since the first robotic arm was designed at the Stanford Artificial Intelligence Laboratory in the 1960s, robots have learned to perform repetitive factory work, but they can barely open a door, pick themselves up if they fall, pull a coin out of a pocket or twirl a pencil.

The correlation between highly evolved artificial intelligence and physical ineptness even has a name: Moravec's paradox, after robotics pioneer Hans Moravec, who wrote in 1988, "It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a 1-year-old when it comes to perception and mobility."

Advances in haptics and kinematics, the study of motion control in jointed bodies, are essential if robots are ever to collaborate with humans in hoped-for roles like food service worker, medical orderly, office secretary and health care assistant.

Read more from the original source:

Building a robot with human touch

Making sense of touch compute


"It just takes time, and it's more complicated," Ken Goldberg, a roboticist at the University of California, Berkeley, said of such advances. "Humans are really good at this, and they have millions of years of evolution."

Touch impulses

Read more from the original source:

Making sense of touch compute

Google To Partner With Award-Winning Quantum Computer Researchers

September 3, 2014

Chuck Bednar for redOrbit.com Your Universe Online

One of the world's largest consumer technology companies is entering into the quantum computing market, as Google announced this week that it plans to team with researchers at UC Santa Barbara to build processors based on superconducting electronics.

The Quantum Artificial Intelligence Lab, which was launched by Google in May, is operated out of NASA's Ames Research Center in Moffett Field, California and uses a quantum computer from D-Wave Systems to study the application of quantum optimization to difficult problems in artificial intelligence. The Universities Space Research Association (USRA) is also a project partner.

On Tuesday, Google Director of Engineering Hartmut Neven confirmed that John Martinis and his team at UC Santa Barbara were also joining the research project. Martinis, who was recently presented with the London Prize for his work in quantum control and quantum information processing, and his colleagues have made "great strides in building superconducting quantum electronic components of very high fidelity," Neven said.

"With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture," he added, noting that they would continue to work with D-Wave scientists and planned to upgrade their "Vesuvius" machine to a 1,000-qubit "Washington" processor.

According to Reuters reporters Subrat Patnaik and Arnab Sen, while Google is best known for its work on search engines, mobile device technology, self-driving cars and robotics projects, the Mountain View, California-based firm has also been increasingly interested in the field of artificial intelligence, even going as far as acquiring AI startup DeepMind Technologies Ltd in January to gain an edge in the burgeoning field.

GigaOM's Derrick Harris explained that even though Google is not yet severing ties with D-Wave, it ultimately plans to develop its own quantum computing hardware. After all, he explains, the company "has long designed its own servers and switches, and is pushing an artificial intelligence agenda that includes smartphones, robots and driverless cars. If Google, or anyone, is going to solve the very hard AI problems these technologies present, they probably can't sit around and wait for someone else to build the right systems for them."

Both the UCSB and D-Wave systems require cooling to nearly absolute zero, or minus 459 degrees Fahrenheit. But there are some technical differences, added Don Clark of the Wall Street Journal. Earlier this year, Martinis and his associates published research featuring a five-qubit array that showed advances in correcting certain errors that can occur during the fragile conditions that create quantum effects.

Martinis told Clark he is hopeful the new project will produce technology that will not lose its memory as quickly as earlier hardware, and that he expected his team would actually benefit from Google's affiliation with D-Wave. "We view this as a complementary approach to what D-Wave is doing," he explained.

See the article here:

Google To Partner With Award-Winning Quantum Computer Researchers

Google Unveils Quantum Computing Research Initiative

Google Inc. (GOOG) is broadening efforts to create its own cutting-edge computer technology, seeking to use more artificial intelligence in designs that could someday speed up its services.

The company yesterday unveiled a hardware initiative to develop and build processors for its Quantum Artificial Intelligence group, which focuses on technology capable of super-fast calculations based on principles of quantum mechanics. A research team from the University of California at Santa Barbara is joining the initiative, Mountain View, California-based Google said on its research blog.

Google, which spent almost $8 billion on research and development last year, is investing in fresh computing ideas as it looks to keep the lead in markets such as Internet search and online advertising. Quantum technology is seen by some in the technology industry as a transformative way for computers to analyze vast amounts of data. Such advances would be especially useful in Google's main businesses, as well as newer projects like Web-connected devices and cars.

"With an integrated hardware group, the Quantum AI team will now be able to implement and test new designs," the company said on its blog.

The computers promise to be faster than traditional ones at solving tricky problems that require sorting through and analyzing large volumes of digital information. One of Google's Quantum AI researchers, Masoud Mohseni, has co-authored papers with leading academics in the field, and the company has been seen as helping lead the push into this new technology.

Google rival Microsoft Corp. (MSFT), the world's largest software maker, is also pursuing this area via its Quantum Architectures and Computation Group.

To contact the reporters on this story: Jack Clark in San Francisco at jclark185@bloomberg.net; Brian Womack in San Francisco at bwomack1@bloomberg.net

To contact the editors responsible for this story: Pui-Wing Tam at ptam13@bloomberg.net Jillian Ward, Ben Livesey


The rest is here:

Google Unveils Quantum Computing Research Initiative

Robots as teachers: school in Abu Dhabi leads with effective artificial intelligence for teaching – Video


Robots as teachers: school in Abu Dhabi leads with effective artificial intelligence for teaching
Robots as teachers: school in Abu Dhabi leads with effective artificial intelligence for teaching.

By: asianetnews

See the rest here:

Robots as teachers: school in Abu Dhabi leads with effective artificial intelligence for teaching - Video

Bankers beware: City 'will soon be run by robots'

"I believe in Moravec's Paradox," Mr Coplin, Microsoft's UK-based chief envisioning officer, told The Telegraph, referring to the 1980s hypothesis formulated by artificial intelligence and robotics researchers. "This states that what we think is easy, robots find really hard, and what we think is really hard, robots find easy," he said. "Complex maths equations are hard for humans but take nanoseconds for a computer, but moving around and picking things up is easy for us, while being almost impossible for a robot."

Algorithms are already commonplace on City trading floors, and are used in many industries, from online retail to internet dating. High-frequency trading, governed by algorithms, is already one of the most profitable trading classes. But, according to Mr Coplin, in 10 years people will no longer be required to manage these algorithms. Decisions will be taken directly by the artificial intelligence.

"Everyone thinks of Terminator and Skynet [the computer that becomes self-aware and attempts to destroy mankind in James Cameron's 1984 film] when I start talking about this, but technology affords us a tremendous opportunity to play to our strengths as humans, and stand on the shoulders of robotic giants," said Mr Coplin.

Microsoft has tasked Mr Coplin with exploring the new trends that will shape the world of work in the coming years.

I am hunting for the game-changers of the next 10 years, he said.

Mr Coplin believes that the rise of big data and innovations in the field of ambient intelligence (smart technology that responds to the presence of people) are going to bring about radical changes in the workplace.

"I call my mobile a smartphone but even though it has information about where I am and who I speak to, it doesn't do anything with that information. It doesn't deliver a service."

In the future, ambient intelligence will allow devices to anticipate your needs and respond in real time. "Your phone will send automated email responses based on keywords and contributing factors such as location, time of day, and calendar entries. Business processes will be increasingly automated, freeing up humans to do more useful things," Mr Coplin said.
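A toy sketch of that kind of ambient rule follows. Every rule, message, and function name here is a hypothetical illustration of the idea, not a description of any real product's logic:

```python
from datetime import datetime

def auto_reply(subject, location, when, in_meeting):
    """Pick a canned email reply from context signals.

    All rules here are made-up illustrations of "ambient intelligence",
    not the behaviour of any actual phone or service.
    """
    if in_meeting:                                # calendar entry says busy
        return "I'm in a meeting; I'll reply afterwards."
    if "urgent" in subject.lower():               # keyword trigger
        return "Seen -- calling you shortly."
    if when.hour >= 18 or location == "airport":  # time of day / location
        return "Out of the office; I'll respond tomorrow."
    return None                                   # leave it for the human

print(auto_reply("Urgent: contract", "office", datetime(2014, 9, 4, 10), False))
```

The real promise of ambient intelligence is that such rules would be learned from behaviour rather than hand-written, but even this crude version shows how context signals can drive an automated response.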

Big data is not a new concept but technologists are increasingly interested in finding new ways that these mountains of data can be read and interpreted.

Microsoft is an active participant in this field of research. It recently trialled a new feature for Skype, its voice-over-IP service, which allows users to select a language and translates their speech in real time.

Read the original:

Bankers beware: City 'will soon be run by robots'

Bankers beware: Robot revolution set to push humans aside

London's financial district, known as the City, will soon be run more by robots than by people, it's claimed. Photo: Reuters

Robots will be running Britain's financial sector within 10 years, rendering investment bankers, analysts and even quants redundant, it has been claimed.

Artificial intelligence is about to outpace human ability, according to Dave Coplin, a senior Microsoft executive.

Computers will not only be able to undertake complex mathematical equations but draw logical, nuanced conclusions, reducing the need for human interference, he said.

This will render certain professions redundant, while other "human only" skills will become increasingly valuable.


"I believe in Moravec's Paradox," Mr Coplin, Microsoft's UK-based chief envisioning officer, told The Telegraph, referring to the 1980s hypothesis discovered by artificial intelligence and robotics researchers.

"This states that what we think is easy, robots find really hard, and what we think it really hard, robots find easy," he said.

"Complex maths equations are hard for humans but take nanoseconds for a computer, but moving around and picking things up is easy for us, while being almost impossible for a robot."

Meanwhile, he said, professions currently viewed as commodities will become specialist human skills.

See the rest here:

Bankers beware: Robot revolution set to push humans aside

Ggle Lunch 8/25/14 Internet Neutrality and Reflective Artificial Intelligence – Video


Ggle Lunch 8/25/14 Internet Neutrality and Reflective Artificial Intelligence
John McGowan discusses internet neutrality laws and the political concerns involved with different levels of internet access. Chuck chats about how kids are cut off from the outside...

By: Chuck Pawlik

Originally posted here:

Ggle Lunch 8/25/14 Internet Neutrality and Reflective Artificial Intelligence - Video