Big Data is the new Artificial Intelligence

This is the first of a couple of columns about a growing trend in Artificial Intelligence (AI) and how it is likely to be integrated into our culture. Computerworld ran an interesting overview article on the subject yesterday that got me thinking not only about where this technology is going but about how it is likely to affect us, not just as a people but as individuals. How is AI likely to affect me? The answer is scary.

Today we consider the general case and tomorrow the very specific.

The failure of Artificial Intelligence. Back in the 1980s there was a popular field called Artificial Intelligence, the major idea of which was to figure out how experts do what they do, reduce those tasks to a set of rules, then program computers with those rules, effectively replacing the experts. The goal was to teach computers to diagnose disease, to translate languages, even to figure out what we wanted but didn't know ourselves.

It didn't work.

Artificial Intelligence, or AI as it was called, absorbed hundreds of millions of Silicon Valley VC dollars before being declared a failure. Though it wasn't clear at the time, the problem with AI was that we just didn't have enough computer processing power at the right price to accomplish those ambitious goals. But thanks to MapReduce and the cloud, we have more than enough computing power to do AI today.
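The MapReduce idea credited here is simple enough to sketch in a few lines: a "map" step emits key/value pairs, a shuffle groups the pairs by key, and a "reduce" step combines each group. Here is a toy single-machine word count in that style (the documents are invented for illustration; the real point of MapReduce is that these same phases run across thousands of machines):

```python
from itertools import groupby

# Invented example documents.
docs = ["big data", "big ideas", "data beats ideas"]

# Map: emit a (word, 1) pair for every word in every document.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: sort the pairs and group them by word.
shuffled = groupby(sorted(mapped), key=lambda pair: pair[0])

# Reduce: sum the counts within each group.
counts = {word: sum(n for _, n in group) for word, group in shuffled}

print(counts)  # -> {'beats': 1, 'big': 2, 'data': 2, 'ideas': 2}
```

Because each map call and each reduce call is independent, the work parallelizes almost for free, which is why commodity clusters can now throw brute force at problems that stumped the 1980s.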

The human speed bump. It's ironic that a key idea behind AI was to give language to computers, yet much of Google's success has come from effectively taking language away from computers -- human language, that is. The XML and SQL data standards that underlie almost all web content are not used at Google, where they realized that human-readable data structures made no sense when it was computers -- and not humans -- that would be doing the communicating. It's through the elimination of human readability, then, that much progress has been made in machine learning.
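The trade-off is easy to see in miniature. The record below is invented (it is not any actual Google format): the same two numbers cost dozens of bytes and a parser walk as XML, or exactly eight fixed bytes when packed for machines only.

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical record: a click event with a user id and a timestamp.
xml_record = '<click><user_id>42</user_id><ts>1371168000</ts></click>'

# Human-readable form: markup a parser must walk to recover two integers.
parsed = ET.fromstring(xml_record)
user_id = int(parsed.find('user_id').text)
ts = int(parsed.find('ts').text)

# Machine-only form: the same two integers packed into 8 fixed bytes.
packed = struct.pack('<II', user_id, ts)

print(len(xml_record.encode()))  # size of the readable form in bytes
print(len(packed))               # -> 8
```

Multiply that saving, and the parsing it avoids, across billions of records per day and human readability starts to look like a tax machines shouldn't pay to talk to each other.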

You see, in today's version of Artificial Intelligence we don't need to teach our computers to perform human tasks: they teach themselves.

Google Translate, for example, can be used online for free by anyone to translate text back and forth between more than 70 languages. This statistical translator uses billions of word sequences mapped in two or more languages: this in English means that in French. There are no parts of speech, no subjects or verbs, no grammar at all. The system just figures it out. And that means there's no need for theory. It works, but we can't say exactly why, because the whole process is data driven. Over time Google Translate will get better and better, translating based on what are called correlative algorithms -- rules that never leave the machine and are too complex for humans to even understand.
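The "this means that" idea can be sketched with a toy phrase table. The phrases and counts below are invented for illustration, and a real system learns billions of such mappings from parallel text rather than using a hand-written dictionary -- but the principle is the same: no grammar, just co-occurrence counts.

```python
from collections import Counter

# Toy "phrase table": how often each English phrase was seen aligned
# with each French phrase in a (hypothetical) parallel corpus.
phrase_counts = {
    "the cat": Counter({"le chat": 9, "la chatte": 3}),
    "is sleeping": Counter({"dort": 11, "est en train de dormir": 2}),
}

def translate(english: str) -> str:
    """Emit the most frequently aligned target phrase for each chunk."""
    words = english.split()
    out = []
    i = 0
    while i < len(words):
        # Greedily try the longest known phrase starting at position i.
        for j in range(len(words), i, -1):
            chunk = " ".join(words[i:j])
            if chunk in phrase_counts:
                out.append(phrase_counts[chunk].most_common(1)[0][0])
                i = j
                break
        else:
            out.append(words[i])  # unknown word passes through unchanged
            i += 1
    return " ".join(out)

print(translate("the cat is sleeping"))  # -> "le chat dort"
```

Nothing in the code knows what a noun or a verb is; better translations come only from more data shifting the counts, which is exactly why the learned rules never need to be legible to a human.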

Google Brain. At Google they have something called Google Brain that currently has 16,000 microprocessors, equivalent to about a tenth of our brain's visual cortex. It specializes in computer vision and was trained exactly the same way as Google Translate, through massive numbers of examples -- in this case still images (billions of still images) taken from YouTube videos. Google Brain looked at images for 72 straight hours and essentially taught itself to see twice as well as any other computer on Earth. Give it an image and it will find another one like it. Tell it that the image is a cat and it will be able to recognize cats. Remember, this took three days. How long does it take a newborn baby to recognize cats?
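The "give it an image and it will find another one like it" behavior can be sketched as a nearest-neighbor lookup. The tiny 3x3 grayscale "images" below are invented, and a real system learns features far richer than raw pixels -- but the shape of the task is the same: similarity by distance, no labels required.

```python
def distance(a, b):
    """Euclidean distance between two flattened pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Invented 3x3 grayscale images, flattened to 9 pixel values (0-9).
library = {
    "cat_1": [0, 9, 0, 9, 9, 9, 0, 9, 0],
    "cat_2": [0, 8, 0, 9, 9, 8, 0, 9, 1],
    "house": [9, 9, 9, 9, 0, 9, 9, 9, 9],
}

def most_similar(query, library):
    """Return the name of the library image closest to the query."""
    return min(library, key=lambda name: distance(query, library[name]))

query = [0, 9, 1, 9, 8, 9, 0, 9, 0]   # resembles the cat images
print(most_similar(query, library))   # -> "cat_1"
```

Attach the label "cat" to a cluster of mutually similar images and the system can recognize cats it has never seen -- which is the unsupervised-then-labeled recipe the paragraph above describes.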
