Artificial intelligence revisited

Ever thought of what it would feel like to live during the age of the gold rush? Digging hard, excitedly looking for gold. Miners get envious when others find gold, causing them to dig more furiously, determined to strike gold themselves. What happens, though, when someone hits a rock that looks exactly like gold, but really isn't? All the miners get curious. They all wonder: will the outside world take this for gold? One brave chap goes and markets it. Lo and behold! The world likes it! Now the miners start scavenging for this rock. In doing so, they forget their quest for the real gold. Except, maybe, a few grumpy miners who found the first gold.

Back in the '80s, after Steve Jobs brought about the personal computer revolution, computer engineers began running around like miners in a gold rush. But there is one other endeavor I want to point out today: the quest for artificial intelligence, the quest to duplicate the human mind, or, if I may, the quest for real artificial intelligence.

Douglas Hofstadter, who earned a Ph.D. in physics from the University of Oregon, won the Pulitzer Prize in 1980 for his seminal book Gödel, Escher, Bach: An Eternal Golden Braid. The book, also known as GEB, sprang from his summer excursion in 1972. While camping in forests and beside lakes, he thought about thinking itself. He thought about the brain and its wondrous ability to create an abstract concept called thought. He aligned his reasoning with self-evaluating mathematical systems and realized that an answer lay in Kurt Gödel's 1931 proof that mathematical systems could generate statements not just about numbers but about themselves. Hofstadter's was arguably the first quest for real artificial intelligence.

GEB introduced a generation of readers to the field of artificial intelligence, an interdisciplinary study of logic, math, cognition, and neuroscience. The miners had started looking for gold, digging furiously.

IBM was one such miner. Relentless in its search for artificial intelligence, it had the advantage of a stronghold in the computer industry, since only machines of that caliber could perform the mathematical calculations the quest demanded.

In 1988, IBM undertook a project called Candide. Candide began by conceding defeat on the path Hofstadter had set forward, deeming the problem of constructing mathematical systems around the constructs of language, semantics, and symmetry too complex. Instead, they found a rock that looked just like gold. They called it machine learning.

Machine learning is so similar to the way a child learns that it is a surprise they don't call it human learning. The underlying concept is learning by analogy: if a mathematical system is trained on huge amounts of data, it learns the patterns in that data and predicts outcomes based on the patterns it has seen. This is exactly how a baby learns a language.
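To make the idea concrete, here is a minimal, hypothetical sketch in Python of learning by analogy: a bigram model that counts which word tends to follow which in a training corpus, then predicts the most common continuation. The toy corpus and word choices are invented for illustration; the statistical models behind a project like Candide were far more sophisticated.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration: the "huge amounts of data"
# here are just a few repeated phrases.
corpus = (
    "the baby hears the language and the baby learns the language "
    "the baby repeats the words the baby hears"
).split()

# Learn the patterns: count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None  # no analogy available: this pattern was never observed
    return counts.most_common(1)[0][0]

print(predict_next("the"))      # -> "baby", the most frequent continuation
print(predict_next("gravity"))  # -> None: the model has no experience to draw on
```

Everything such a model "knows" is a tally of what it has already seen; ask it about a word outside its training data and it has nothing to say, which is the limitation the next paragraphs turn to.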

But here's why it is not the real deal. A fully grown human brain can do more than a baby's brain. "Boil things down to their fundamental truths and reason up from there," Elon Musk, the founder of SpaceX and Tesla Motors, said in his TED Talk. "That is the physics approach. That is the exact opposite of reasoning by analogy."

The problem with analogies is that they are limiting. Analogies can only help one hop about horizontally; diving deeper, vertically, requires reasoning. A system that works by analogy, whether trained on huge amounts of data or evaluating every permutation and combination, will be able to predict that Obama is an important person, or that I use "one" in a plural sense in my writing. But it will not be able to discover gravity, nor general relativity.

Yes, IBM marketed it, and the world liked it! Much of the swagger the information technology world carries today is credited to this move by IBM. It enables weather forecasting, auto spell-check, search engines, Siri, and Deep Blue, the IBM computer that beat Garry Kasparov in the famous chess match of 1997.
