The Famous AI Turing Test Put In Reverse And Upside-Down, Plus Implications For Self-Driving Cars – Forbes

AI and the Turing Test, turned round and round.

How will we know when the world has arrived at AI?

To clarify, there are lots of claims these days about computers that embody AI, implying that the machine is the equivalent of human intelligence, but you need to be wary of those rather brash and outright disingenuous assertions.

The goal of those who develop AI is to one day have a computer-based system that can exhibit human intelligence, doing so in the widest and deepest ways that human intelligence exists and showcases itself.

No such AI has yet been devised.

The confusion over this matter has gotten so out-of-hand that the field of AI has been forced into coming up with a new moniker to express the outsized revered goal of AI, proclaiming now that the goal is to arrive at Artificial General Intelligence (AGI).

This is being done in hopes of emphasizing to laymen and the public-at-large that the vaunted and desired AI would include common-sense reasoning and a slew of other intelligence-like capacities that humans have (for details about the notion of Strong AI versus Weak AI, along with Narrow AI too, see my explanation at this link here).

Since there is quite some muddling going on about what constitutes AI and what does not, you might wonder how we will ultimately be able to ascertain whether AI has been unequivocally attained.

We rightfully should insist on having something more than a mere provocateur proclamation and we ought to remain skeptical about anyone that holds forth an AI system that they declare is the real deal.

Looks alone would be insufficient to attest to the arrival.

There are plenty of parlor stunts in the AI bag-of-tricks that can readily fool many into believing that they are witnessing an AI of amazing human-like qualities (see my coverage of such trickery at this link here).

No, just taking someone's word for AI having been accomplished, or simply kicking the tires of the AI to feebly gauge its merits, is insufficient and inarguably will not do.

There must be a better way.

Those within the AI field have tended to consider a type of test known as the Turing Test to be the gold standard for seeking to certify AI as being the venerated AI or semantically the AGI.

Named after its author, Alan Turing, the well-known mathematician and early pioneer of computer science, the Turing Test was devised in 1950 and remains pertinent still today (here's a link to the original paper).

Parsimoniously, the Turing Test is relatively easy to describe and indubitably straightforward to envision (for my deeper analysis on this, see the link here).

Here's a quick rundown about the nature of the Turing Test.

Imagine that we had a human hidden behind a curtain, and a computer hidden behind a second curtain, such that you could not by sight alone discern what or who is residing behind the two curtains.

The human and the computer are considered contestants in a contest that will be used to try and figure out whether AI has been reached.

Some prefer to call them subjects rather than contestants, due to the notion that this is perhaps more of an experiment than it is a game show, but the point is that they are participants in a form of challenge or contest involving wits and intelligence.

No arm wrestling is involved, nor any other physical acts.

The testing process is entirely about intellectual acumen.

A moderator serves as an interrogator (also referred to as a judge because of the designated deciding role in this matter) and proceeds to ask questions of the two participants that are hidden behind the curtains.

Based on the answers provided to the questions, the moderator will attempt to indicate which curtain hides the human and which curtain hides the computer. This is a crucial judging aspect. Simply stated, if the moderator is unable to distinguish between the two contestants as to which is the human and which is the computer, presumably the computer has sufficiently proven that it is the equivalent of human intelligence.

Turing originally coined this the imitation game since it involves the AI trying to imitate the intelligence of humans. Note that the AI does not necessarily have to be crafted in the same manner as humans, and thus there is no requirement that the AI has a brain or uses neurons and such. Thus, those devising AI are welcome to use Legos and duct tape if that will do the job to achieve the equivalence of human intelligence.

To successfully pass the Turing Test, the computer embodying AI will have had to answer the posed questions with the same semblance of intelligence as a human. An unsuccessful passing of the Turing Test would occur if the moderator was able to announce which curtain housed the computer, thus implying that there was some kind of telltale clue that gave away the AI.
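The curtain-and-moderator setup can be sketched as a small simulation. Everything below (the function names, the transcript format, the "A"/"B" curtain labels) is my own illustrative framing, not a standard harness or API.

```python
import random

def run_turing_test(questions, human_answer, machine_answer, judge, rng=random):
    """One session of the imitation game (illustrative sketch, not a standard API).

    human_answer / machine_answer: callables mapping a question to a reply.
    judge: callable given the two transcripts, returning "A" or "B" as its
           guess for the curtain hiding the HUMAN.
    Returns True when the machine fooled the judge (i.e., it "passed").
    """
    # Randomly assign the two contestants to curtains A and B.
    labels = ["A", "B"]
    rng.shuffle(labels)
    curtains = {labels[0]: human_answer, labels[1]: machine_answer}

    # Every question goes to both contestants; the judge sees only the answers.
    transcripts = {label: [(q, answer(q)) for q in questions]
                   for label, answer in curtains.items()}

    guess = judge(transcripts["A"], transcripts["B"])
    # The machine passes when the judge points at its curtain as the human's.
    return curtains[guess] is machine_answer
```

Note that the judge never sees the contestants themselves, only the transcripts, which captures the point of the curtains: answers are the sole evidence of intelligence.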

Overall, this seems to be a rather helpful and effective way to ferret out AI that is the aspirational AGI versus AI that is something less so.

Of course, like most things in life, there are some potential gotchas and twists to this matter.

Imagine we have set up a stage with two curtains and a podium for the moderator. The contestants are completely hidden from view.

The moderator steps up to the podium and asks one of the contestants how to make a bean burrito, and then asks the other contestant how to make a bologna sandwich. Let's assume that the answers are apt and properly describe the effort involved in making a bean burrito and a bologna sandwich, respectively.

The moderator decides to stop asking any further questions.

Voila, the moderator announces, the AI is indistinguishable from human intelligence and therefore this AI is declared forthwith as having reached the pinnacle of AI, the long sought after AGI.

Should we accept this decree?

I don't think so.

This highlights an important element of the Turing Test, namely that the moderator needs to ask a sufficient range and depth of questions that will help root out the embodiment of intelligence. When the questions are shallow or insufficient, any conclusion reached is spurious at best.

Please know too that there is not a specified set of questions that have been vetted and agreed upon as the right ones to be asked during a Turing Test. Sure, some researchers have tried to propose the types of questions that ought to be asked, but this is an ongoing debate and to some extent illuminates that we are still not even quite sure of what intelligence per se consists of (it is hard to identify metrics and measures for that which is relatively ill-defined and ontologically squishy).

Another issue exists about the contestants and their behavior.

For example, suppose the moderator asks each of the contestants whether they are human.

The human can presumably answer yes, doing so honestly. The AI could say that it is not a human, opting to be honest, but then this decidedly ruins the test and seemingly undermines the spirit of the Turing Test.

Perhaps the AI should lie and say that it is the human. There are ethicists, though, who would decry such a response and argue that we do not want AI to be a liar, and therefore no AI should ever be allowed to lie.

Of course, the human might lie and deny that they are the human in this contest. If we are seeking to make AI that is the equivalent of human intelligence, and if humans lie, which we all know they certainly do from time to time, shouldn't the AI also be allowed to lie?

Anyway, the point is that the contestants can either strive to aid the Turing Test or can try to undermine or distort it, which some say is fine, and that it is up to the moderator to figure out what to do.

All's fair in love and war, as they say.

How tricky do we want the moderator to be?

Suppose the moderator asks each of the contestants to calculate the answer to a complex mathematical equation. The AI can speedily arrive at a precise answer of 8.27689459, while the human struggles to do the math by hand and comes up with an incorrect answer of 9.

Aha, the moderator has fooled the AI into revealing itself, and likewise the human into revealing that they are a human, doing so by asking a question that the computer-based AI readily could answer and that a human would have a difficult time answering.

Believe it or not, for this very reason, AI researchers have proposed the introduction of what some describe as Artificial Stupidity (for detailed facets of this topic, see my coverage here). The idea is that the AI will purposely attempt to be stupid by sharing answers as though they were prepared by a human. In this instance, the AI might report that the answer is 8, thus the response is a lot like the one by the human.
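A minimal sketch of that ploy might look like the following. The function name, the rounding rule, and the 20% error rate are all my own assumptions for illustration, not a documented technique from the literature.

```python
import random

def humanize_answer(precise_value, error_rate=0.2, rng=random):
    """Illustrative 'artificial stupidity' filter: degrade a machine-precise
    number so it resembles a human's rough mental arithmetic.
    The name and the error_rate default are assumptions, not a standard API.
    """
    rounded = round(precise_value)        # a human rounds: 8.27689459 -> 8
    if rng.random() < error_rate:         # occasionally be outright wrong, like 9
        rounded += rng.choice([-1, 1])
    return rounded
```

The design point is that the degradation must be plausible, not random noise: humans round and occasionally miscalculate, so the filter mimics exactly those failure modes.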

You can imagine that having AI purposely try to make mistakes or falter (this is coined as the Dimwit ploy by AI, see my explanation at this link here), seems distasteful, disturbing, and not something that everyone necessarily agrees is a good thing.

We do allow for humans to make gaffes, but having AI that does so, especially when it knows better, would seem like a dangerous and undesirable slippery slope.

The Reverse Turing Test Rears Its Head

I've now described for you the overall semblance of the Turing Test.

Next, lets consider a variation that some like to call a Reverse Turing Test.

Heres how that works.

The human contestant decides they are going to pretend that they are the AI. As such, they will attempt to provide answers that are indistinguishable from the AI's type of answers.

Recall that the AI in the conventional Turing Test is trying to seem indistinguishable from a human. In the Reverse Turing Test, the human contestant is trying to reverse the notion and act as though they were the AI and therefore indistinguishable from the AI.

Well, that seems mildly interesting, but why would the human do this?

This might be done for fun, kind of for laughs among people who enjoy developing AI systems. It could also be done as a challenge, trying to mimic or imitate an AI system and betting on whether you can do so successfully.

Another reason and one that seems to have more chops or merit consists of doing what is known as a Wizard of Oz.

When a programmer is developing software, they will sometimes pretend that they are the program, using a facade front-end or interface to have people interact with the budding system, though those users do not know that the programmer is watching their interaction and is ready to step in (doing so secretly from behind the screen and without revealing their presence).

Doing this type of development can reveal where the end-users are having difficulties using the software, and meanwhile they remain within the flow of the software because the programmer quietly intervened to overcome any computer system deficiencies that might have disrupted the effort.

Perhaps this makes clear why it is often referred to as a Wizard of Oz, involving the human staying in-the-loop and secretly playing the role of Oz.
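As a rough sketch of that pattern (the names here are hypothetical, not from any particular toolkit): the user always talks to one interface, while the hidden operator silently covers for the system whenever it falls short.

```python
def wizard_of_oz_reply(user_input, system_reply, operator_reply):
    """Wizard of Oz prototyping sketch: the end-user sees a single interface,
    but a hidden human 'wizard' quietly answers whenever the budding system
    cannot. All names here are illustrative.
    """
    reply = system_reply(user_input)
    if reply is None:                       # the system has no good answer yet
        reply = operator_reply(user_input)  # the hidden human steps in, unseen
    return reply
```

From the user's perspective the interaction never breaks, which is exactly what makes the technique useful for testing an unfinished system.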

Getting back to the Reverse Turing Test, the human contestant might be pretending to be the AI to figure out where the AI is lacking, and thus be better able to enhance the AI and continue on the quest toward AGI.

In that manner, a Reverse Turing Test can be used for perhaps both fun and profit.

Turing Test Upside-Down And Right Side Up

Some believe that we might ultimately be headed toward what is sometimes called the Upside-Down Turing Test.

Yes, thats right, this is yet another variant.

In the Upside-Down Turing Test, replace the moderator with AI.

Say what?

This less discussed variant involves having AI be the judge or interrogator, rather than a human doing so. The AI asks questions of the two contestants, still consisting of an AI and a human, and then renders an opinion about which is which.

Your first concern might be that the AI seems to have two seats in this game, and as such, it is either cheating or simply a nonsensical arrangement. Those who postulate this variant are quick to point out that the original Turing Test has a human as a moderator and a human as a contestant, so why not allow the AI to do the same?

The instant retort is that humans are different from each other, while AI is presumably the same thing and not differentiable.

That's where those interested in the Upside-Down Turing Test would say you are wrong in that assumption. They contend that we are going to have multitudes of AI, each of which will be its own differentiable instance, akin to how humans are each distinctive instances (in brief, the argument is that AI will be polylithic and heterogeneous, rather than monolithic or homogeneous).

The counterargument is that the AI is presumably going to be merely some kind of software and a machine, all of which can be readily combined into other software and machines, but that you cannot readily combine humans and their brains. We each have a brain intact within our skulls, and there are no known means to directly combine them or mesh them with others.

Anyway, this back-and-forth continues, each side proffering a rejoinder, and it is not readily apparent that the Upside-Down variant can be discarded as a worthwhile possibility.

As you might imagine, there is an Upside-Down Turing Test and also an Upside-Down Reverse Turing Test, mirroring the aspect of the conventional Turing Test and its counterpart the Reverse Turing Test (some, by the way, do not like the use of Upside-Down and instead insist that this added variant is merely another offshoot of the Reverse Turing Test).

You might begrudgingly agree to let the AI be in two places at once, and have one AI as the interrogator and one as a contestant.

What good does that do anyway?

One thought is that it helps to further showcase whether AI is intelligent, which might be evident in the questioning and in how the AI digests the answers being provided, illustrating the AI's capacity as the equivalent of a human judge or interrogator.

That's the mundane or humdrum explanation.

Are you ready for the scary version?

It has to do with intelligence, as I'll describe next.

Some believe that AI will eventually exceed human intelligence, arriving at Artificial Super Intelligence (ASI).

The word super is not meant to imply superman or superwoman kinds of powers; instead, it means that the intelligence of the AI is beyond our human intelligence, though not necessarily able to leap tall buildings or move faster than a speeding bullet.

Nobody can say what this ASI or superintelligence might be able to think of, and perhaps we as humans are so limited in our intelligence that we cannot see beyond our limits. As such, the ASI might be intelligent in ways that we cannot foresee.

That's why some consider AI or AGI to be a potential existential threat to humanity (this is something that, for example, Elon Musk has continued to evoke, see my coverage at this link here), and ASI is presumed to be even more of a potential menace.

If you are interested in this existential threat argument, as I've pointed out repeatedly (see the link here), there are just as many ways to conjure that AI or AGI or ASI will help mankind and aid us in flourishing as there are doomsday scenarios of our being squashed like a bug. Also, there is fortunately a rising tide of interest in AI Ethics, which might aid in coping with, avoiding, or mitigating the coming AI calamities (for more on AI Ethics, see my discussion at this link here).

That being said, it certainly makes sense to be prepared for the doom-and-gloom scenario, due to the rather obvious discomfort and sad result that would accrue going down that path. I presume that none of us want to be summarily crushed out of existence like some annoying and readily dispatched pests.

Returning to the Upside-Down Turing Test, it could be that an ASI would sit in the moderator's seat and be judging whether conventional AI has yet reached the aspirational level of AI that renders it able to pass the Turing Test and be considered indistinguishable from human intelligence.

Depending on how far down the rabbit hole you want to go on this, at some point the Turing Test might have two seats for the ASI, and one seat for AI. This means that the moderator would be an ASI, while there is conventional AI as a contestant and another ASI as the other contestant.

Notice that there is not a human involved at all.

Maybe we ought to call this the Takeover Turing Test.

No humans needed; no humans allowed.

Conclusion

It is unlikely that AI is going to be crafted simply for the sake of making AI, and instead, there will be a purpose-driven rationale for why humans opt to create AI.

One such purpose involves the desire to have self-driving cars.

A true self-driving car is one that has AI driving the car and there is no need for a human driver. The only role of a human would be as a passenger, but not at all as a driver.

A vexing question right now is what level or degree of AI is needed to achieve self-driving cars.

Some believe that until AI has arrived at the aspirational AGI, we will not have true self-driving cars. Indeed, those with such an opinion would likely say that the AI has to achieve sentience, perhaps doing so in a moment of switchover from automation into a spark of being that is called the moment of singularity (for more on this, see my analysis at this link here).

Hogwash, some counter, and insist that we can get AI that is not necessarily Turing Test worthy but that can nonetheless safely and properly drive cars.

To be clear, right now there is not any kind of AI self-driving car that approaches anything like AGI, and so for the moment, we are faced with trying to decide if plain vanilla AI can be sufficient to drive a car. Quick aside, for those interested in AI, some refer to any symbolic approach to AI as GOFAI or Good Old-Fashioned Artificial Intelligence, which is both endearing and to some degree a backhanded slight, all at the same time (see more at my explanation here).
