On The Beguiling Question Of Whether AI Can Form Intent, Including The Case Of Self-Driving Cars – Forbes

Posted: June 7, 2020 at 9:45 am

Can AI have intent, and if so, how will we know?

Let's start with a bit of a game or a puzzle, if you will.

These remarks all have something in common:

The devil made me do it

I didn't mean to be mean to you

Something just came over me

I wanted to do it

You got what was coming to you

My motives were pure

What's that all about?

I'll wait a moment for you to mull over that question and form a potential answer.

Okay, now that you gave that some thought, you could answer that those are all various ways in which someone might express their intent or intentions.

In some instances, the person is seemingly expressing their intent directly, while in other cases they appear to be avoiding being pinned down on their own intentions and are trying to toss the intent onto the shoulders of someone or something else.

When we express our intent, there is no particular reason to believe that it is true.

A person can tell you their intentions and yet be lying through their teeth.

Or, a person can offer their intentions and genuinely believe that they are being forthcoming, and yet the stated intent might be entirely fabricated, a kind of rationalization concocted after the fact.

Consider too that a person might be offering acrid cynical remarks, for which their intention is buried or hidden within their words, and you accordingly need to somehow decipher or tease out the real meaning of their quips.

There is also the straightforward possibility that the person is utterly clueless about their own intention, and thus is unable to precisely state what their intent is.

And so on.

This naturally leads us to contemplate what intent or intention purports to consist of.

The common definition of intent or intention is that it involves the act of determining something that you want and plan to do, with an emphasis on mentally resolving upon some action or result.

By referring to the mind or mental processing, the word intent opens quite a Pandora's box.

Simply stated, there is no ironclad way to know what someone's mind contains or did contain.

We do not have any means to directly and fully interrogate the brain and have it showcase to us the origins of thoughts and how they came to exist. Our brains and our minds are locked away in our skulls, and the only path to figuring out what is going on consists of poking around from the outside or marginally so from the inside.

Now, yes, you can use an MRI and other techniques to gauge the electromagnetic or biochemical activity of the brain, but be clear that this is a far cry from being able to connect the dots directly and definitively indicate that this thought or that thought was derived from these neurons and those neurons.

We have not yet reverse-engineered the brain sufficiently to make those kinds of uncontestable proclamations.

Overall, one could even argue that the whole concept of intent and intentions is somewhat obtuse and perhaps a construct of what we want to believe about our actions. Some would say that we want to believe that we do things for a reason, and therefore we offer that there is this thing called intent and thus it offers a rational explanation for what otherwise might be nothing of the kind.

For those who relish debating the topic of free will, perhaps none of us has any capability of intent and we are all pre-programmed to carry out acts, none of which relates to any personal intent; we are simply acting as puppets on a string (for more on my remarks about AI and free will, see this link here).

I don't want to go too far off the rails here, but I did want to mention the philosophical viewpoint that intent might not exist in any ordinary manner and that we cannot simply assume that it does.

Since we are on a roll here about thinking widely, there is a handy catchphrase about intent from George Bernard Shaw that offers additional food for thought: We know there is intention and purpose in the universe, because there is intention and purpose in us.

Notice that this is quite reassuring, namely that since we generally believe that there is intention within us, ergo this somehow implies that there is an intention in the universe, and therefore we are able to remain sanguine and be comforted that everything has a meaning and intention (though some might counter-argue that the universe and we are all completely random and purposeless).

While we are teetering on the edge of this precipice, let's keep going.

Maybe intent and intention are really a cover-up for the acts of humanity.

If you do something adverse, the stated intent might be a means to placate others about your dastardly deed and act as a distraction from the act committed.

On the other hand, maybe your act was well-intended, yet it led to something adverse, inadvertently and not by design, therefore your intention ought to be given due weight and consideration.

Time to quote another riveting insight about intent, this one from the revered George Washington: A man's intentions should be allowed in some respects to plead for his actions.

Note that Washington's quote refers to a man's intentions, but we can reasonably allow the meaning to include all of humankind, making the quote encompass both men and women, restated as: a person's intentions should be allowed in some respects to plead for their actions.

Overall, mankind certainly seems to have accepted the stark and generally unchallenged belief that there are intentions and that those intentions are crucial to the acts we undertake.

That being the case, what else has intentions?

Does your beloved pet dog or cat have intentions?

Do all animals have intentions of one kind or another?

There is an acrimonious debate about the idea that animals can form intentions.

Some say that it is obviously the case that they do, while others contend that they quite obviously cannot do so. The usual basis for arguing that animals cannot have intentions is that they are mentally too limited and that only humans have the mental capacity to form intent or intentions. Be careful making that brash claim to any dog or cat lover.

Can a toaster have an intention?

I ask because the other day, my toaster burnt my toast.

Did the toaster do so intentionally, or was it an unintentional act?

You might be irked at such a question and immediately recoil that the toaster obviously lacks any semblance of intent. It is merely a mindless machine that makes toast.

There isn't any there, there.

Without the ingredient or essential component of mental processing, you would seem to be hard-pressed to ascribe intent to something so ordinary and mechanical.

This brings us to a most intriguing twist and the intended focus of this discussion, namely, where does AI fit into this murky matter of intent and intention?

AI systems are increasingly becoming a vital part of our lives.

There are AI systems that do life-impacting diagnoses of X-ray images and seem to discern whether disease is present. There are AI systems that decide whether you can get the car loan you wanted to obtain. And so on.

Is AI more akin to humans and therefore able to form intent, or is AI more similar to a toaster and unable to have any substance of intent?

Lest you think this is an entirely abstract point and not worthy of real-world attention, consider the legal ramifications of whether AI is able to form intent and whether this is noteworthy or not.

In our approach to jurisprudence, we give a tremendous amount of importance to intent, sometimes referred to as scienter in legal circles, and in criminal law we make use of intent to ascertain the nature of the crime that can be charged and the penalty that might ride with it.

As such, this AI-related intent insight by a legal research scholar seems especially apt here: Because intent tests often serve as a gatekeeper, limiting the scope of claims, they may entirely prevent certain claims or legal challenges from being raised when AI is involved.

And, after providing examples of AI used by the government and AI used by financial planning systems, the researcher offers these sobering thoughts: All of these problems threaten to leave AI unregulated either because defendants that use AI may never be held liable (e.g., the governments use of AI may prevent a showing of discriminatory intent) or claimants that rely on AI may be left without redress (e.g., because a plaintiff that uses AI to make investment decisions is unable to show reliance).

A toaster that goes awry will hopefully be a mildly adverse consequence (I can choose to eat the burnt toast or toss it into the trash), while if an AI system that is able to drive a car goes awry, the result can be catastrophic.

Using AI for the driving of cars is a life-or-death instance of AI that is emerging for use in our daily lives.

When you see a car going down the street and there isn't a human driver at the wheel, you are tacitly accepting the belief that the AI is able to drive the car and will not suddenly veer into a crowd of pedestrians or plow into a car ahead of it.

You might counter-argue that the same can be said of human drivers: when a human driver is at the wheel, you likewise are accepting the belief that the human will not suddenly ram into pedestrians or into other cars.

If the human did so, we'd all be quickly looking for intent.

Can we do the same for AI driving systems in terms of the actions that they undertake, and does it make sense to even try to ascertain such AI-based intent?

Today's question then is this: As an example of AI and intent, do we expect AI-based true self-driving cars to embody intention, and if so, what does it consist of, and how would we know that it exists?

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
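To make that taxonomy a bit more concrete, here is a minimal illustrative sketch in Python; the enum names and the helper function are my own shorthand for the levels just described, not part of any official SAE tooling.

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """Illustrative stand-in for the driving automation levels discussed above."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1       # a single ADAS feature, e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # semi-autonomous; human must supervise at all times
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous; human must be ready to take over
    HIGH_AUTOMATION = 4         # true self-driving within a limited operating domain
    FULL_AUTOMATION = 5         # true self-driving everywhere; not yet achieved

def human_driver_required(level: DrivingAutomationLevel) -> bool:
    """Levels 0 through 3 need a human driver at the wheel; Levels 4 and 5 do not."""
    return level <= DrivingAutomationLevel.CONDITIONAL_AUTOMATION

if __name__ == "__main__":
    for level in DrivingAutomationLevel:
        print(level.name, "-> human driver required:", human_driver_required(level))
```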

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Intent

For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Let's return to the discussion about intent.

Is the AI that can perform self-driving the same as a toaster?

Intuitively, we might right away proffer that the AI is not at all like a toaster and that making such a callous suggestion undercuts what the AI is accomplishing in being able to drive a car.

Before we dig further into this aspect, I'd like to set the record straight about the AI that is able to drive a car.

Some assume that the AI needed to drive a car must be sentient, able to think and perform mental processing on an equivalent basis to humans. So far, that's not the case, and it seems that we'll be able to have AI-based self-driving cars without crossing over into the vaunted singularity (the singularity is considered the moment or occurrence of AI transforming from being everyday computational to becoming sentient, having the same unspecified and ill-understood spark that humankind seems to have; for more on this topic see my analysis here).

For the moment, remove sentience from this discussion as to the capabilities of AI, and assume that the AI being depicted is computer-based and has not yet achieved human-like equivalency of intelligence. If AI does someday arrive at the singularity, presumably we would need to have an altogether new dialogue about intent, since at that point the AI would apparently be the same as human intelligence in one manner or another, and the role of intent in its actions would rightfully come onto the table, for sure.

Consider then these forms of intent:

1. Inscrutable Intent

2. Explicated Intent

3. AI Developer Intent

4. Inserted Intent

5. Induced Intent

6. Emergent Intent

Each of these forms of intent is significant in its own right, and they are not mutually exclusive; indeed, they overlap and at times are closely interrelated.

Understanding AI Intent

Let's start with the notion of inscrutable intent.

It could be that the AI system has an intent and yet we have no means to figure out what the intent is.

For example, Machine Learning (ML) and Deep Learning (DL) often rely on large-scale artificial neural networks (ANNs), which are essentially computer-based simulations somewhat along the lines of what we believe brains do, though the ML/DL of today is extremely simplistic in comparison and not at all akin to the complexities of the human brain.

In any case, the ML/DL is essentially a mathematical model that is computationally performed, out of which there is not necessarily any logical basis to explain the inner workings. There are just calculations and arithmetic taking place. As such, it is generally considered inscrutable when there is no ready means to translate this into something meaningful, in words and sentences, that would constitute an articulated indication of intent.
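To illustrate why this is called inscrutable, here is a tiny sketch in Python (using numpy, with made-up dimensions and randomly initialized weights purely for demonstration): the network's entire "knowledge" is a pile of numbers, and nothing in those numbers reads out as a statement of intent.

```python
import numpy as np

# A toy two-layer network mapping sensor-like inputs to a steering-like output.
# The point is not the task but the representation: whatever the network "knows"
# lives entirely in these weight matrices.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # hidden-layer weights (illustrative, untrained)
W2 = rng.normal(size=(4, 1))   # output-layer weights

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)   # nothing but arithmetic at each step
    return hidden @ W2

x = rng.normal(size=(1, 8))    # a stand-in for one sensor reading
print("output:", forward(x))
print("first row of W1:", W1[0])  # raw numbers; no sentence-like 'intent' to read off
```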

Next, consider explicated intent.

Some believe that we might be able to do a type of translation of what is happening inside the AI system, and as such, there is a rising call for XAI, known as explainable AI. This is AI that in one fashion or another has been designed and developed to provide an explanation for what it is doing, and thus one might say it could showcase explicated intent.
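As a rough illustration of what an explicated-intent mechanism might look like, here is a speculative Python sketch of a perturbation-style attribution. It is not any particular XAI library or product, and the model and feature names are invented for the example, but it shows the flavor of turning a model's numeric behavior into a human-readable account of what it relied upon.

```python
import numpy as np

def explain_by_perturbation(model, x, feature_names):
    """Crude XAI-style attribution: zero out each input feature and measure how
    much the model's output moves. Larger shifts suggest heavier reliance."""
    baseline = model(x)
    scores = {}
    for i, name in enumerate(feature_names):
        perturbed = x.copy()
        perturbed[i] = 0.0
        scores[name] = float(abs(model(perturbed) - baseline))
    return scores

# A stand-in 'model': a simple weighted sum over braking-relevant inputs
# (weights and feature names are hypothetical, for illustration only).
weights = np.array([0.7, 0.2, 0.1])
model = lambda x: float(x @ weights)

x = np.array([1.0, 0.5, 0.9])  # e.g., pedestrian_proximity, speed, road_wetness
print(explain_by_perturbation(model, x, ["pedestrian_proximity", "speed", "road_wetness"]))
```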

Many argue that you can just drop the whole worry about AI intention and look instead at the AI developer who crafted the AI.

Since AI is a human-created effort, the human or humans that put it together are the intenders, and therefore the intention of the AI is found within the intentions of those humans.
