Is There a Clear Path to General AI? – CMSWire


People frequently mix up two pairs of terms when talking about artificial intelligence: Strong vs. Weak AI, and General vs. Narrow AI. The key to understanding the difference lies in which perspective we want to take: are we aiming for a holy grail that, once found, will mean solving one of mankind's biggest questions, or are we merely aiming to build a tool to make us more efficient at a task?

The Strong vs. Weak AI dichotomy is largely a philosophical one, made prominent in 1980 by American philosopher John Searle. Philosophers like Searle are looking to answer the question of whether we can theoretically and practically build machines that truly think and experience cognitive states, such as understanding, believing, wanting, hoping. As part of that endeavor, some of them examine the relationship between these states and any possibly corresponding physical states in the observable world of the human body: when we are in the state of believing something, how does that physically manifest itself in the brain or elsewhere?

Searle concedes that computers, the most prominent form of such machines in our current times, are powerful tools that can help us study certain aspects of human thought processes. However, he calls that "Weak AI," as it's not the real thing. He contrasts it with "Strong AI" as follows: "But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."

While this philosophical perspective is fascinating in and of itself, it remains largely elusive to modern day practical efforts in the field of AI. Philosophers are thinkers, meant to raise the right questions at the right time to help us think through the implications of our doings. They are rarely builders. The builders among us, the engineers, seek to solve practical problems in the physical world. Note that this is not a question of whose aims are more noble, but merely a question of perspective.

Engineers seeking to build systems that are of practical use today are more interested in the distinction of General vs. Narrow AI. That distinction is one of the applicability of a system at hand. We call something Narrow AI if it is built to perform one function, or a set of functions in a particular domain, and that alone. In reality, that is the only form of AI we have at our disposal today. All of the currently available systems are built for one task alone.

The biggest revelation for any non-expert here is that an AI system's performance on one task does not generalize. If you've built a system that has learned to play chess, your system cannot play the ancient Chinese game of Go, not even with some additional modifications. And if you have a system that plays Go better than any human, no matter how hard that task seemed before such a program was finally built in 2017, that system will NOT generalize to any other task. Just because a system performs one task well does not mean it will "soon" (a favorite word of people writing and talking about technology) perform seemingly related tasks well, too. Each new task that is different in nature (and there are many such different natures) means tedious, laborious work for the engineers and designers who build these systems.

So if the opposite of Narrow AI is General AI, you're essentially talking about a system that can perform any task you throw at it. The original idea behind General AI was to build a system that could learn any kind of task through self-training, without requiring examples pre-labeled by humans. (Note that this is still different from Searle's notion of Strong AI, in that you could theoretically build General AI without building true thinking; it could still just be a simulation of the real thing.)


Let's do a thought experiment (a common tool of any philosopher who wants to think through an idea or theory). What if we interconnected each and every narrow AI solution ever built on planet Earth? What if we essentially built an IoA, an Internet of AIs? Companies around the world have built narrow AI solutions for machine translation, for playing chess and Go, and for countless other individual tasks.

If we standardized the interfaces for all of these solutions, and those for the hundreds and thousands of other tasks we face in our lives, wouldn't we then essentially have built General AI? One AI system of systems that can solve whatever you throw at it?
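To make the thought experiment concrete, here is a toy sketch of what such a standardized interface and "system of systems" might look like. Everything here (the `NarrowAI` interface, the `Dispatcher`, the toy translator) is invented for illustration; no real product works this way.

```python
# Hypothetical "Internet of AIs": every narrow system adopts one
# standardized interface, and a dispatcher routes tasks by domain.
from dataclasses import dataclass


@dataclass
class Task:
    domain: str   # e.g. "translation", "chess"
    payload: str


class NarrowAI:
    """The standardized interface every narrow system would have to adopt."""
    domain = "undefined"

    def run(self, task: Task) -> str:
        raise NotImplementedError


class ToyTranslator(NarrowAI):
    """Stand-in for a real translation engine (word-by-word glossary lookup)."""
    domain = "translation"

    def run(self, task: Task) -> str:
        glossary = {"hallo": "hello", "welt": "world"}
        return " ".join(glossary.get(word, word) for word in task.payload.split())


class Dispatcher:
    """The 'system of systems': routes each task to the matching narrow AI."""

    def __init__(self):
        self.systems = {}

    def register(self, system: NarrowAI):
        self.systems[system.domain] = system

    def solve(self, task: Task) -> str:
        if task.domain not in self.systems:
            return f"no narrow AI registered for '{task.domain}'"
        return self.systems[task.domain].run(task)


dispatcher = Dispatcher()
dispatcher.register(ToyTranslator())
print(dispatcher.solve(Task("translation", "hallo welt")))  # hello world
print(dispatcher.solve(Task("chess", "e2e4")))  # no narrow AI registered for 'chess'
```

From the outside, the dispatcher looks omnipotent; on the inside, every capability still lives in a separate, hand-built subsystem, and nothing general has been learned.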

Certainly not. A hodgepodge of backend systems that each accomplish one task in a proprietary way is certainly not the same as one system that is equipped with general learning capabilities and can thus self-teach any skill needed. It is also far from being the sort of Strong AI that philosophers have in mind, as humans are definitely not a conglomerate of differently built subcomponents for each and every task we can conduct.

But then again, does it matter? Wouldn't such a readily available system of systems essentially give us an omnipotent tool to help us with any imaginable task we face? It certainly would! And to someone oblivious to its inner structure, it would even appear to be that long-sought magical AI we've been shown in books and movies for decades.

The problem is this: such an Internet of AIs will never become reality. Our world's capitalist nature essentially prohibits the sharing of intellectual property at the scale needed for such an endeavor. For any of the systems mentioned above, there are probably dozens of firms out there that make money by re-solving the same problem over and over again. Google's translation engine does a fine job, but so too does Facebook's, Microsoft's, IBM's, DeepL's, SysTran's, Yandex's, Babylon's, Apertium's ... some of them use a common foundation that academic circles have produced over the years, but many don't. Humans are not wired to combine their forces toward a common greater good of such majestic proportions; we are observing that fateful trait of ours in matters both short-term (coronavirus) and long-term (global warming).

So until our very DNA changes, which would in turn further a change of our societal systems, we are stuck with Narrow AI. It will continue to bring meaningful innovation and make us more efficient over time in each of the domains it tackles, but the holy grails of Strong or General AI will remain a dream.

Tobias Goebel is a conversational technologist and evangelist with over 15 years of experience in the customer service and contact center technology space. He has held roles spanning engineering, consulting, pre-sales, product management, and product marketing, and is a frequent blogger and speaker on Customer Experience topics.
