Elon Musk Seemingly Used The Superhuman False Narrative In Advancing Tesla's Self-Driving Car Ambitions – Forbes

Elon Musk sent a tweet this week referring to Tesla's self-driving tech as potentially being "superhuman," which raises interesting questions about AI.

Superhuman.

What does that mean?

What does that mean to you?

Well, Elon Musk has suggested that Tesla cars outfitted with self-driving tech can definitely be superhuman (in his tweet on April 7, 2020), which invokes the superhuman moniker and raises questions about what exactly the notion of being superhuman portends.

Regrettably, he is joined by a slew of others, both outside the field of AI and even many within it, who continue to proudly and with apparent abandon bandy around the superhuman label.

The problem is that superhuman is lousy terminology: it allows inflated allusions to what AI is today and stokes excessive, over-the-top hype, an outright misnomer that spreads marketing blarney more than it offers bona fide substance.

Some might say that those with a bitter distaste for the use of superhuman are overly tightly wound and should just loosen up about the matter.

No big deal, it would seem.

The counterargument is that in light of the heaps upon heaps of hyperbole going on about AI, there has to be somebody, someplace, at some point in time, with the willingness and verve to start drawing a line in the sand (see my remarks about the dangers and qualms of the superhuman trope at this link here).

One such line would be at the shameless and mindless invoking of the superhuman imagery.

Why pick on superhuman as the straw that breaks the camel's back?

Because it has a visceral stickiness that is going to keep it in use and likely expand its usage over time.

In short, it sounds nice and catches the imagination, and akin to a veritable snowball, it just keeps rolling ahead, becoming bigger and bigger in popularity as it lumbers down the AI hysteria mountain.

Other ways of hyping AI are often more scientific-sounding and less catchy for the general public.

The super part in superhuman dovetails with our fascination and beloved adulation of the vaunted Superman and Superwoman comic books, movies, merchandising, etc., which have become a kind of general lore in our contemporary society (the character of Superman was first showcased on April 18, 1938, in Action Comics #1).

Let's tackle what superhuman even seems to mean.

Suppose someone creates a checkers playing computer program, using AI, and it is able to beat all comers of a human variety.

In 1994, human player Marion Tinsley, the reigning checkers world champion, lost his title to a checkers-playing program called Chinook in a closely watched and highly publicized match, a moment that some assert was the point at which computers exceeded humans at the game of checkers.

It has been said that AI checkers-playing programs have become superhuman.

Really?

Are we really willing to ascribe the notion of being superhuman simply because a computer program was able to best a top-ranked human checkers player?

By the way, many of the games played were draws.

Does that change your opinion about the superhuman capability of the checkers program?

If it was so superhuman, why didn't it whip the human in each and every game played, knocking the human player for a loop and showcasing how really super it is?

Anyway, the key point is that flinging around the superhuman catchphrase can be done by anyone, for whatever reason they might arbitrarily choose.

You see, there isn't a formal definition per se of superhuman.

At least not a definition that all have agreed upon, nor one that all have agreed to reserve for use in only proper settings (kind of like a break-glass-only-when-superhuman-is-truly-warranted rule).

This brings up another facet.

Checkers is an interesting game, but it certainly isn't the most challenging of games (oops, sorry to you checkers fans, please don't go berserk; it's a great game, but you have to admit it is not as complex as, say, Go, chess, and the like).

Does being superhuman count when the underlying task itself is not the topmost of challenges per se?

Suppose an AI system is able to cook a soufflé and the resulting delicacy receives raves as the best ever made by anyone, human hands included.

Superhuman!

Superhuman?

Okay, you might say, let's make the stakes higher and use something that humanity has mentally strained to do well for eons, such as the playing of chess.

Chess is a tough game.

We marvel at those human players who can play chess in ways that are a beauty to behold.

In 1997, IBM's chess-playing program running on the Deep Blue supercomputer was able to win against world chess champion Garry Kasparov.

Was that program something we can rightfully refer to as superhuman?

Chess is something that most humans don't do well, and thus it would seem that the program was pretty impressive, having beaten the player considered our best at the game.

Keep in mind that the only thing the program could do is play chess.

It couldn't write a song, it couldn't carry on an open-ended Socratic dialogue with you, and it relied on various programming tricks, such as keeping in computer memory tons and tons of prior chess positions that it could rapidly search and make use of.

This doesn't seem to be especially super, nor superhuman.

Don't misunderstand or misinterpret such a condemnation: this does not imply that those superb chess-playing programs and checkers-playing programs aren't tremendous accomplishments.

They are!

And each instance whereby the use of AI techniques brings further progress toward (eventually) achieving true AI is something worthy of applause and some kind of trophy or recognition for those triumphs.

But using a medal or crown that implies being capable of human efforts, and indeed implies the ability to go beyond human efforts, presumably far beyond human efforts as a result of being super, is not an appropriate way to offer praise.

Consider too the role of common-sense reasoning.

Humans have common-sense reasoning.

As an aside, I realize some might chuckle and say that they know some people who lack common sense, but, putting aside such snickering, there is something called common sense that humans do undeniably seem to have overall (see my analysis of common-sense reasoning at this link here).

There isn't any AI system today that has anything close to what human common-sense reasoning seems to entail.

So, if an AI system is superhuman, does it count that the AI lacks a core aspect of human capability, namely common-sense reasoning?

Wouldn't you tend to assume that something of a superhuman caliber ought to be able to do everything that a human can do, and on top of that, go beyond human reach and be super?

That just seems logical.

Again, it might appear that this is blowing the misuse of superhuman as a descriptor for AI systems out of proportion, yet do realize that many aren't aware of the true limitations and narrowness of the AI systems that some are calling superhuman.

The subtle attachment of superhuman to an AI system provides a glow of incredible capability, and inch by inch it convinces the public that AI can do wondrous things of a superhuman nature, all of which creates outsized expectations and sets people up to be misled and less wary about what AI is actually able to do today.

Take another consideration: brittleness.

Many of the Machine Learning (ML) and Deep Learning (DL) systems that are being deployed today are brittle at the edges of what they do.

A facial recognition system developed using ML/DL could be really good at detecting people by their faces, and yet it can also fail to do so when a face is partially obscured or in other circumstances in which, by the way, humans might not falter.
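To make the brittleness point concrete, here is a deliberately toy sketch in Python; the function name and threshold are entirely hypothetical and not drawn from any real facial recognition library. It simply shows how a matcher that leans on one fragile signal can succeed on clean images and then fail outright on an occluded face that a human would still recognize.

    # Toy illustration of brittleness. All names and numbers are made up.
    VISIBLE_LANDMARKS_NEEDED = 60  # hypothetical threshold out of 68 facial landmarks

    def toy_face_match(visible_landmarks: int) -> bool:
        """Declare a match only when enough facial landmarks are visible."""
        return visible_landmarks >= VISIBLE_LANDMARKS_NEEDED

    # Clean, frontal face: plenty of landmarks visible, so the matcher succeeds.
    print(toy_face_match(visible_landmarks=66))   # True

    # Same person wearing a scarf or mask: many landmarks are occluded,
    # and the matcher fails, even though a human would still recognize the face.
    print(toy_face_match(visible_landmarks=30))   # False

The sketch is simplistic on purpose: the point is that performance at the edges of a system's training and design can drop off a cliff in ways that a human would shrug off.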

Does that facial recognition deserve the superhuman label?

You might say that it does because in some respects it exceeds human ability to recognize faces, but at the same time, this hides the fact that AI-based facial recognition is actually worse than human capability in many ways.

Plus, as mentioned about common-sense reasoning, the AI facial recognition has no "there" there in terms of understanding that the face so recognized belongs to a human being and what a human being is or does. For the AI system, the face is a mathematical construct, no more significant than counting beans.

If something is superhuman, it seems like it ought to be super in all respects, and not brittle or weak in ways that undermine the super part of what it is getting as accolades.

With all of that as background, now let's turn our attention to true self-driving cars.

Here's the question for today: Do AI-based true self-driving cars deserve to get the superhuman tribute, and if so, when or how will we know that it is appropriate and fair to do so?

That's a great question.

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
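For readers who think in code, here is a minimal sketch in Python of how those commonly cited levels of driving automation are often summarized; the class and function names are my own illustrative choices, not any official SAE or Tesla software, and the one-line descriptions are simplified.

    from enum import IntEnum

    class DrivingAutomationLevel(IntEnum):
        """Illustrative, simplified summary of the commonly cited automation levels."""
        NO_AUTOMATION = 0           # Human does all of the driving
        DRIVER_ASSISTANCE = 1       # A single assist feature (e.g., adaptive cruise)
        PARTIAL_AUTOMATION = 2      # ADAS steers and accelerates; human must stay attentive
        CONDITIONAL_AUTOMATION = 3  # System drives in some conditions; human on standby
        HIGH_AUTOMATION = 4         # No human driver needed within a limited operating domain
        FULL_AUTOMATION = 5         # No human driver needed anywhere a person could drive

    def requires_human_driver(level: DrivingAutomationLevel) -> bool:
        # Levels 0 through 3 keep a responsible human in the loop; Levels 4 and 5 do not.
        return level <= DrivingAutomationLevel.CONDITIONAL_AUTOMATION

    print(requires_human_driver(DrivingAutomationLevel.PARTIAL_AUTOMATION))  # True
    print(requires_human_driver(DrivingAutomationLevel.HIGH_AUTOMATION))     # False

The key takeaway the sketch is meant to reinforce: only at Level 4 and Level 5 does the human driver drop out of the picture, which matters for everything that follows.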

There is not yet a true self-driving car at Level 5, and we don't yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And Pondering Superhuman

For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Existing Teslas are not Level 4, nor are they Level 5.

Most would classify them as Level 2 today.

What difference does that make?

Well, if you have a true self-driving car (Level 4 and Level 5), one that is being driven solely by the AI, there is no need for a human driver and indeed no interaction between the AI and a human driver.

For a Level 2 car, the human driver is still in the driver's seat.

Furthermore, the human driver is considered the responsible party for driving that car.

The twist that's going to mess everyone up is that the AI might seem able to drive the Level 2 car when, in fact, it cannot, and thus the human driver still must be attentive and act as though they are driving the car.

With that as a crucial backdrop, here's the tweet that Elon Musk sent on April 7, 2020: "Humans drive using 2 cameras on a slow gimbal & are often distracted. A Tesla with 8 cameras, radar, sonar & always being alert can definitely be superhuman."

The first part of his tweet makes a physics-clever reference to human eyes, saying that they are like two cameras, and that our two eyes and head are mounted on our necks, akin to a slow gimbal that lets us look back and forth while driving a car (for my indication of how Elon Musk is shaped by his physics mindset and how that plays out in his actions as a leader and executive, take a look at this link here).

In terms of human drivers being distracted while driving, this indeed is a serious and quite troubling problem, along with drivers being intoxicated and otherwise succumbing to a host of human foibles while at the wheel of a car.

Sadly, in the United States alone, there are about 40,000 deaths each year due to car crashes, and an estimated 2.5 million injuries annually.

The hope is that true self-driving cars will avoid incurring as many of those deaths and injuries as possible.

Some believe that we are going to have zero deaths, but this doesn't make logical sense, since there will still be some deaths from car crashes even if we somehow magically had only self-driving cars on our roadways (for why zero fatalities has zero chance, see my analysis at this link here).

Suppose that true self-driving cars are able to reduce the number of car-related deaths and injuries; does that mean the AI and the self-driving car are superhuman?

It is tempting to perhaps give the AI such a prize, especially since the task at hand involves life-or-death stakes.

A checkers- or chess-playing AI system is obviously not involved in life-or-death circumstances (unless, perhaps, there's a duel-to-the-death on the line as part of the match, something we don't do anymore).

In short, the AI for a self-driving car has a lot going for it in terms of possibly being a candidate to get the honor of being considered superhuman.

It involves the complexities of driving a car, it entails life-or-death matters, and if it can drive more reliably than humans, then it would seem able to drive better than humans do.

Still, does that attain a superhuman quality?

Essentially, the AI is driving as well as humans, minus the foibles of humans.
