No AI Overlords?: What Is Larson Arguing and Why Does It Matter?

Yesterday, we were looking at the significance of AI researcher Erik J. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, contrasting it with claims that AI will merge with or replace us. Some such claims are made by industry insiders like Ray Kurzweil. But more often we hear them from science celebs like the late Stephen Hawking and Richard Dawkins, who, on these topics, are more known than knowledgeable.

So why does Larson think they are wrong? He offers two arguments. The first, as information theorist William Dembski explains, is that there are some kinds of thinking that, by their nature, computers don't do:

With regard to inference, he shows that a form of reasoning known as abductive inference, or inference to the best explanation, is for now without any adequate computational representation or implementation. To be sure, computer scientists are aware of their need to corral abductive inference if they are to succeed in producing an artificial general intelligence.

True, they've made some stabs at it, but those stabs come from forming a hybrid of deductive and inductive inference. Yet as Larson shows, the problem is that neither deduction, nor induction, nor their combination is adequate to reconstruct abduction. Abductive inference requires identifying hypotheses that explain certain facts or states of affairs in need of explanation. The problem with such hypothetical or conjectural reasoning is that the range of hypotheses is virtually infinite. Human intelligence can, somehow, sift through these hypotheses and identify those that are relevant. Larson's point, and one he convincingly establishes, is that we don't have a clue how to do this computationally.

Abduction? Here's an example, one of a series:

Example #1: Suppose you have two friends, David and Matt, who recently had a fight that ended their friendship.

Shortly afterwards, someone tells you that they saw David and Matt together at the movies. The best explanation for what they just told you is that David and Matt made peace and are friends again.

In all the examples presented, the conclusions do not logically derive from the premises.

In example 1, about David and Matt, even if we accept that both premises are true, it could be that the two ex-friends were simply seen at the cinema together by chance. In addition, we have no statistics on fights or friendships.

The conclusion that they are friends again does not follow logically, but it is the best possible explanation for the fact that they were seen together. The same applies to all the other cases. – What is an abductive argument? (With examples), Life Persona

Abduction is often called an inference to the best explanation. Computers have difficulty with this type of decision-making, probably because it is not strictly computational. There is nothing to compute. A different sort of thought process is at work.
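To make the contrast concrete, here is a minimal Python sketch; it is our illustration, not anything from Larson's book, and the rules, hypothesis list, and plausibility scores are all invented for the purpose. The telling detail is that the abduce function only works because a human hand-supplied the candidate hypotheses; nothing in the code generates the relevant hypotheses from the virtually infinite space, which is exactly the step Larson says we have no idea how to compute.

    # Deduction: the conclusion follows mechanically from given rules and facts.
    def deduce(rules, facts):
        derived = set(facts)
        for premise, conclusion in rules:  # each rule: premise -> conclusion
            if premise in derived:
                derived.add(conclusion)
        return derived

    # Induction: generalize from observed frequencies.
    def induce(observations):
        return sum(observations) / len(observations)  # share of past cases

    # Abduction: pick the hypothesis that best explains the facts. The catch:
    # in real life the hypothesis space is not a given finite list, and no
    # known procedure generates or ranks the relevant candidates.
    def abduce(facts, hypotheses, explains, plausibility):
        candidates = [h for h in hypotheses if explains(h, facts)]
        return max(candidates, key=plausibility, default=None)

    # Toy run: the David-and-Matt case, with hypotheses supplied by hand.
    facts = {"David and Matt were seen together at the movies"}
    hypotheses = ["they made peace", "chance encounter", "mistaken identity"]
    explains = lambda h, f: True  # stipulated for the toy example
    plausibility = {"they made peace": 0.7, "chance encounter": 0.2,
                    "mistaken identity": 0.1}.get
    print(abduce(facts, hypotheses, explains, plausibility))  # they made peace

The deduce and induce steps reduce to mechanical operations over inputs given in advance; the abductive step smuggles the hard part in through its arguments.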

Dembski continues,

His other argument for why an artificial general intelligence is nowhere near lift-off concerns human language. Our ability to use human language is only in part a matter of syntactics (how letters and words may be fit together). It also depends on semantics (what the words mean, not only individually, but also in context, and how words may change meaning depending on context) as well as on pragmatics (what the intent of the speaker is in influencing the hearer by the use of language).

Larson argues that we have, for now, no way to computationally represent the knowledge on which the semantics and pragmatics of language depend. As a consequence, linguistic puzzles that are easily understood by humans, and which were identified over fifty years ago as beyond the comprehension of computers, are still beyond their power of comprehension. Thus, for instance, single-sentence Winograd schemas, in which a pronoun could refer to one of two antecedents, and where the right antecedent is easily identified by humans, remain to this day opaque to machines: machines do no better than chance in guessing the right antecedents. That's one reason Siri and Alexa are such poor conversation partners.

Here's an example of a Winograd schema:

[F]rom AI pioneer Terry Winograd: The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. There's a verb choice quiz embedded in the sentence, and the task for System A is to select the right one. If System A has common sense, the answer is obvious enough.

Strangely, not only squirrels with superhuman memories but also advanced AI systems running on IBM Blue Gene supercomputers (which might play fabulous chess) hit brick walls with such questions. The quiz, as originally put by Winograd, so flummoxes modern AI that another AI pioneer, the University of Toronto's Hector Levesque, and colleague Ernest Davis devised a test for AI based on such sentences: the Winograd Schema, as it came to be called. The focus is on the pronouns in a sentence, for example, "they." Thus the updated question reads:

The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?

Readers find it easy to supply the right noun or noun phrase, the city councilmen. It's obvious (it's just common sense): who else would fear violence?

But now change the verb to "advocated": the common sense stays, but the answer changes (the demonstrators). Winograd Schema quizzes are small potatoes to almost any native speaker of English past the age of (what?) five? ten? But they repeatedly flummox any and all of the AI systems purported to be charging inexorably toward superintelligence. It seems there's a small problem with the logic here if such systems fail on easy language questions, and they do. – Analysis, Superintelligent AI is still a myth, at Mind Matters News
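For readers who want to see the shape of the test, here is a minimal Python sketch of a Winograd schema as a data structure; the field names and the chance_baseline function are our own invention, not Levesque and Davis's actual harness. The coin-flip guesser mirrors the chance-level performance machines reportedly achieve.

    import random

    # One Winograd schema: a template with a verb slot, two candidate
    # antecedents for the pronoun "they", and the correct answer per verb.
    schema = {
        "template": ("The city councilmen refused the demonstrators "
                     "a permit because they {verb} violence."),
        "candidates": ["the city councilmen", "the demonstrators"],
        "answers": {"feared": "the city councilmen",
                    "advocated": "the demonstrators"},
    }

    def chance_baseline(schema):
        # Guess an antecedent at random: roughly the level machines hit.
        return random.choice(schema["candidates"])

    for verb, correct in schema["answers"].items():
        sentence = schema["template"].format(verb=verb)
        guess = chance_baseline(schema)
        print(f"{sentence}\n  guess: {guess} | correct: {correct}")

Flipping the verb flips the correct antecedent while leaving the sentence's surface statistics almost unchanged, which is what makes the format so resistant to pattern-matching shortcuts.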

The contest was abandoned in 2016 when the Google Brain team's computing power got nowhere with this type of problem. Larson recounts some misadventures this type of deficit has generated: Microsoft's trashmouth chatbot Tay and the University of Reading's smartmouth chatbot Eugene Goostman were fun. Mistaking a school bus for a punching bag (which happened in a demonstration) would not be so funny.

There are workarounds for these problems, to be sure. But they are not problems that bigger computers and better programming can simply solve. Some of the thought processes needed are just not computations, period.

Dembski concludes, "After reading this book, believe if you like that the singularity is right around the corner, that humans will soon be pets of machines, that benign or malevolent machine overlords are about to become our masters. But after reading this book, know that such a belief is unsubstantiated and that neither science nor philosophy backs it up."

Next: Why did a prominent science writer come to doubt the AI takeover?

You may also wish to read the first part: New book massively debunks our AI overlords: Ain't gonna happen. AI researcher and tech entrepreneur Erik J. Larson expertly dissects the AI doomsday scenarios. Many thinkers in recent years have tried to stem the tide of hype but, as Dembski points out, no one has done it so well.
