Tesla's AI Chips Are Rolling Out, But They Aren't A Self-Driving Panacea – Forbes

Tesla has opted to design and deploy their own AI chips, a strategy to achieve true self-driving car capabilities, but questions still remain.

According to several media reports, the new AI chips Tesla devised to achieve true self-driving car status have begun rolling out to older Tesla models that require retrofitting to replace the prior on-board processors.

Unfortunately, there has been some misleading reporting about those chips, a special type of AI computer processor that extensively supports Artificial Neural Networks (ANN), commonly referred to as Machine Learning (ML) or Deep Learning (DL).

Before I explore the over-hyped reporting, let me clarify that these custom-developed AI chips devised by Tesla engineers are certainly admirable, and the computer hardware design team deserves to be proud of what they have done. Kudos for their impressive work.

But such an acknowledgement does not imply that they have somehow achieved a singularity marvel in AI, nor does it mean they have miraculously solved the real-world problem of how to attain a true self-driving driverless car.

Not by a long shot.

And yet many in the media seem to think so, and at times have implied in a wide-eyed, overzealous way that Tesla's new computer processors have seemingly reached a nirvana of finally getting us to fully autonomous cars.

That's just not the case.

Time to unpack the matter.

Important Context About AI Chips

First, let's clarify what an AI chip consists of.

A conventional computer contains a core processor or chip that does the system's work when you invoke your word processor or spreadsheet, or load and run an app of some kind.

In addition, most modern computers also have GPUs, Graphical Processing Units, an additional set of processors or chips that aid the core processor by taking on the task of displaying visual graphics and animation that you might see on the screen of your device such as on the display of a desktop PC, a laptop or a smartphone.

To use computers for Machine Learning or Deep Learning, it was realized that rather than relying on the normal core processors of a computer, the GPUs actually tended to be better suited for ML or DL tasks.

This is because, by and large, the implementation of Artificial Neural Networks on today's computers is really a massive numerical and linear algebra affair. GPUs are generally structured and devised for exactly that kind of number crunching.
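To make the point concrete, a neural network "layer" boils down to one big matrix multiply followed by a trivial element-wise function, which is exactly the kind of bulk arithmetic GPUs excel at. A minimal sketch in Python (the layer sizes are arbitrary, chosen purely for illustration):

```python
import numpy as np

# A single dense neural-network layer is just:
#   outputs = activation(inputs @ weights + bias)
# The sizes below are arbitrary, picked only to show the shape of the work.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 512))    # a batch of 64 input vectors
weights = rng.standard_normal((512, 256))  # the layer's learned parameters
bias = rng.standard_normal(256)

# One matrix multiply: 64 x 512 x 256 multiply-adds in a single call,
# all independent of one another -- ideal for massively parallel hardware.
pre_activation = inputs @ weights + bias

# The nonlinearity (ReLU here) is a cheap element-wise pass over the result.
outputs = np.maximum(pre_activation, 0.0)

print(outputs.shape)  # (64, 256)
```

Stack dozens of such layers and you see why hardware built for fast matrix arithmetic, rather than general-purpose logic, wins at this workload.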

AI developers who rely upon ML/DL computer-based neural networks fell in love with GPUs, utilizing them for something not originally envisioned but that happens to be a good marriage anyway.

Once it became apparent that souped-up GPUs would help advance today's kind of AI, the chip developers realized there was huge market potential for their processors, and that it merited tweaking GPU designs to more closely fit the ML/DL task.

Tesla had initially opted to use off-the-shelf specialized GPU chips made by NVIDIA for the in-car on-board processing of the Tesla version of ADAS (Advanced Driver-Assistance System), including and especially their so-called Tesla Autopilot (a naming that has generated controversy for being misleading about the actual driverless functionality available to-date in their so-equipped FSD or Full Self-Driving cars).

In April of this year, Elon Musk and his team unveiled a set of proprietary AI chips that were secretly developed in-house by Tesla (rumors about the effort had been floating for quite a while), and the idea was that the new chips would replace the use of the in-car NVIDIA processors.

The unveiling of the new AI chips was a key portion of the Investor Autonomy Day event that Tesla used as a forum to announce the future plans of their hoped-for self-driving driverless capability.

Subsequently, in late August, a presentation was made by Tesla engineers depicting additional details about their custom-designed AI chips, doing so at the annual Hot Chips conference sponsored by the IEEE that focuses on high performance computer processors.

Overall media interest about the Tesla AI chips was reinvigorated by the presentation and likewise further stoked by the roll-out that has apparently now gotten underway.

One additional important point: most people refer to these kinds of processors as AI chips, which I'll do likewise for ease of discussion herein, but please do not be lulled into believing that these specialized processors are actually fulfilling the long-sought goal of having full Artificial Intelligence in all of its intended facets.

At best, these chips or processors are simulating relatively shallow, mathematically inspired aspects of what might be called neural networks, but it isn't anything akin to a human brain. There isn't any human-like reasoning or common-sense capability involved in these chips. They are merely computationally enhanced numeric calculating devices.

Brouhaha About Tesla's New Chips

In quick recap, Tesla opted to replace the NVIDIA chips and did so by designing and now deploying their own Tesla-designed chips (the chips are being manufactured for Tesla by Samsung).

Lets consider vital questions about the matter.

Did it make sense for Tesla to go its own way and make specialized chips, or would they have been better off continuing to use someone else's off-the-shelf specialized chips?

On a comparative basis, how are the Tesla custom chips different from or similar to off-the-shelf specialized chips that do roughly the same thing?

What do the AI chips achieve in terms of aiming for becoming true self-driving cars?

And so on.

Here are some key thoughts on these matters:

Hardware-Only Focus

It is crucial to realize that discussing these AI chips is only a small part of a bigger picture, since the chips are a hardware-only focused element.

You need software, really good software, in order to arrive at a true self-driving car.

As an analogy, suppose someone comes out with a new smartphone that is incompatible with the thousands upon thousands of apps in the marketplace. Even if the smartphone is super-fast, you have the rather more daunting issue that there arent any apps for the new hardware.

Media salivating over the Tesla AI chips are missing the boat by not asking about the software needed to arrive at driverless capabilities.

I'm not saying that having good hardware is not important, it is, but I think we all now know that hardware is only part of the battle.

The software to do true AI self-driving is the 800-pound gorilla.

There has yet to be any publicly revealed indication that the software for achieving true self-driving by Tesla has been crafted.

As I previously reported, the AI team at Tesla has been restructured and revamped, presumably in an effort to gain added traction towards the goal of having a driverless car, but so far no new indication has demonstrated that the vaunted aim is imminent.

Force-fit Of Design

If you were going to design a new AI chip, one approach would be to sit down and come up with all of the vital things you'd like the chip to do.

You would blue-sky it, starting with a blank sheet, aiming to stretch the AI boundaries as much as feasible.

For Tesla, the hardware engineers were actually handed a circumstance that imposed a lot of severe constraints on what they could devise.

They had to keep electrical power consumption within a boundary dictated by the prior designs of the Tesla cars; otherwise, the Teslas already in the marketplace would have to undergo a major retrofit to allow for a more power-hungry set of processors. That would be costly and economically infeasible. Thus, right away, the new AI chip would be hampered by how much power it could consume.

The new processors would have to fit into the physical space as already set aside on existing Tesla cars, meaning that the size and shape of the on-board system boards and computer box would have to abide by a strict form factor.

And so on.

This is oftentimes the downside of being a first-mover into a market.

You come out with a product when few others have something similar, it gains some success, and then you need to advance the product as the marketplace evolves, yet you are also trapped by needing to be backward-compatible with what you already did.

Those who come along after your product is underway have the latitude of not being ensnared by what came before, sometimes allowing them to out-perform you by having a clean slate to work with.

An example of latecomers overstepping first movers is the rapid success of Uber and Lyft and the ridesharing phenomenon. The newer entrants ignored the existing constraints faced by taxis and cabs, allowing the brazen upstarts to eclipse those hampered by the past (rightly or wrongly so).

Being first in something is not necessarily always the best, and sometimes those that come along later on can move in a more agile way.

Don't misinterpret my remarks to imply that for self-driving cars you can wildly design AI chips in whatever manner you fancy. Obviously, there are going to be size, weight, power consumption, cooling, cost, and other factors that limit what can sensibly fit into a driverless car.

Improper Comparisons

One of my biggest beefs about the media reporting has been the willingness to fall into a misleading and improper comparison of the Tesla AI chips to other chips.

Comparing the new with the old is not especially helpful, though it sounds exciting when you do so; instead, the comparison should be with what else currently exists in the marketplace.

Heres what I mean.

Most keep saying that the Tesla AI chips are many times faster than the NVIDIA chips Tesla previously used (they ought to be comparing against NVIDIA's newer chips), implying that Tesla made a breathtaking breakthrough in this kind of technology, often quoting the number of trillions of operations per second, known as TOPS.

I won't inundate you with the details herein, but suffice it to say that the Tesla AI chips' TOPS performance is either on par with other alternatives in the marketplace, in some ways less, and in selective other ways somewhat better, but it is not a hit-it-out-of-the-ballpark revelation.
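For readers who want a feel for what a raw TOPS rating does and doesn't tell you, here is a back-of-envelope sketch. Every number below is hypothetical, chosen only to show the arithmetic; none come from Tesla's or NVIDIA's disclosures:

```python
# Hypothetical back-of-envelope: how many camera frames per second could a chip
# sustain, given its peak TOPS rating and a network's per-frame compute cost?
# All figures here are made up purely for illustration.

chip_tops = 50.0        # hypothetical chip rated at 50 trillion ops/second (peak)
ops_per_frame = 100e9   # hypothetical network needing 100 billion ops per frame
utilization = 0.5       # real workloads rarely achieve the peak marketing number

usable_ops_per_sec = chip_tops * 1e12 * utilization
frames_per_second = usable_ops_per_sec / ops_per_frame

print(round(frames_per_second))  # 250 frames/sec under these assumptions
```

Notice that halving the assumed utilization, or doubling the network size, halves the result, which is why a headline TOPS figure in isolation, without the workload and achievable utilization alongside it, says little about real-world capability.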

Bottom-line: I ask that the media stop making inappropriate comparisons between the Tesla AI chips and the NVIDIA chips Tesla previously used. It just doesn't make sense, it is misleading to the public, it is unfair, and it shows ignorance about the topic.

Another pet peeve is the tossing around of big numbers to impress the uninitiated, such as touting that the Tesla AI chips consist of 6 billion transistors.

Oh my gosh, 6 billion seems like such a large number and implies something gargantuan.

Well, there are GPUs that already have 20 billion transistors.

I'm not denigrating the 6 billion figure, only trying to point out that those quoting it do so without offering any viable context, and therefore imply something that isn't really the case.

For those readers who are hardware types, I know and you know that trying to make a comparison by the number of transistors is a rather problematic exercise anyway, since it can be an apples-to-apples or an apples-to-oranges kind of comparison, depending upon what the chip is designed to do.

First Gen Is Dicey

Anybody who knows anything about chip design can tell you that the first generation of a newly devised chip is oftentimes a rocky road.

There can be a slew of latent errors or bugs (if you prefer, we can be gentler in our terminology and refer to those aspects as quirks or the proverbial tongue-in-cheek hidden features).

Like the first version of any new product, the odds are that it will take a shakeout period to ferret out what might be amiss.

In the case of chips, since the design is etched in silicon and not readily changeable, software patches are sometimes used to deal with hardware issues, and then in later versions of the chip you might make the needed hardware alterations and improvements.

This brings up the point that by choosing to make its own AI chips, rather than using an off-the-shelf approach, Tesla puts itself into the unenviable position of having a first gen and needing to figure out on its own whatever gaffes those new chips might have.

Typically, an off-the-shelf commercially available chip is going to have not just the original maker looking at it, but will also have those that are buying and incorporating the processor into their systems looking at it too. The more eyes, the better.

The Tesla proprietary chips are presumably only being scrutinized and tested by Tesla alone.

Proprietary Chip Woes

Using your own self-designed chips has a lot of other considerations worth noting.

At Tesla, significant cost and attention would have been devoted toward devising the AI chips.

Was that cost worth it?

Was the attention diverted from other matters an opportunity cost?

Plus, Tesla not only had to bear the original design cost, they will have to endure the ongoing cost to upgrade and improve the chips over time.

This is not a one-time only kind of matter.

It would seem unlikely and unwise for Tesla to sit on this chip and not advance it.

Advances in AI chips are moving at a lightning-like pace.

There are also labor pool considerations.

Having a proprietary chip usually means that you have to grow your own specialists to be able to develop the specialized software for it. You cannot readily find those specialists in the marketplace per se, since they won't know your proprietary stuff, whereas when you use a commercial off-the-shelf chip, the odds are that you can find expert labor for it, since there is an ecosystem surrounding the off-the-shelf processor.

I am not saying that Tesla was mistaken per se to go the proprietary route; only time will tell whether it was a worthwhile bet.

By having their own chip, they can potentially control their own destiny, not being dependent upon an off-the-shelf chip made by someone else, nor forced onto the path of the off-the-shelf chip maker. The other side of that coin is that they now find themselves squarely in the chip design and upgrade business, in addition to the car-making business.

Its a calculated gamble and a trade-off.

From a cost perspective, it might or might not be a sensible approach, and those that keep trying to imply that the proprietary chip is a lesser cost strategy are likely not including the full set of costs involved.

Be wary of those that make such off-the-cuff cost claims.

Redundancy Assertions

There has been media excitement about how the Tesla AI chips supposedly have a robust redundancy capability, which certainly is essential for a real-time system that involves the life-and-death aspects of driving a car.

So far, the scant details revealed seem to be that there are two identical AI chips running in parallel, and if one chip disagrees with the other, the current assessment of the driving situation and planned next step is discarded, allowing the next frame to be captured and analyzed.

On the surface, this might seem dandy to those who haven't developed fault-tolerant real-time systems before.

There are serious and somber issues to consider.

Presumably, on the good side, if one of the chips experiences a foul hiccup, it will be in disagreement with its identical twin, and because the two chips don't agree, the system potentially avoids undertaking an inappropriate action.

But, realize that the ball is simply being punted further down-the-field, so to speak.

This has downsides.

Suppose the oddball quirk isn't just a single momentary fluke, and instead recurs, over and over.

Does this mean that both chips are going to continually disagree and therefore presumably keep postponing the act of making a driving decision?
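The compare-and-discard scheme described above, and its stalling downside, can be sketched in a few lines. This is a simplified hypothetical, not Tesla's actual logic, which has not been published:

```python
# Simplified sketch of a dual-chip compare-and-discard redundancy scheme.
# Everything here is hypothetical and for illustration only.

def plan_from_chip_a(frame):
    # Stand-in for chip A's independent assessment of the driving scene.
    return frame["plan"]

def plan_from_chip_b(frame):
    # Stand-in for chip B; a faulty chip might produce a different plan.
    return frame.get("plan_b", frame["plan"])

def decide(frame):
    """Return the agreed-upon plan, or None to discard and await the next frame."""
    a = plan_from_chip_a(frame)
    b = plan_from_chip_b(frame)
    return a if a == b else None  # disagreement: punt to the next frame

# The downside raised above: if the fault recurs rather than being a one-off
# fluke, the chips keep disagreeing and the car keeps postponing any decision.
frames = [{"plan": "brake", "plan_b": "steer-left"}] * 3  # persistent fault
decisions = [decide(f) for f in frames]
print(decisions)  # [None, None, None] -- no driving decision ever gets made
```

In other words, pairwise comparison detects a disagreement but cannot tell which chip is right, which is why safety-critical designs often go further, for instance with a third voting unit or an independent fallback path.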
