Nvidia will dominate this crucial part of the AI market for at least the next two years

Posted: December 3, 2019 at 12:48 am

The principal tasks of artificial intelligence (AI) are training and inferencing. The former is a data-intensive process to prepare AI models for production applications. Training an AI model ensures that it can perform its designated inferencing task, such as recognizing faces or understanding human speech, accurately and in an automated fashion.
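
To make the distinction concrete, here is a minimal sketch in PyTorch (the tiny model, data, and hyperparameters are illustrative assumptions, not details from the article): training iteratively adjusts the model's weights against labeled examples, while inferencing simply applies the finished model to new inputs.

```python
import torch
import torch.nn as nn

# Illustrative toy model: 4 input features, 2 output classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: repeatedly adjust weights to fit labeled data.
x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
model.train()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # compute gradients
    optimizer.step()  # update weights

# Inferencing: apply the trained model to a new input; no gradients needed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 4)).argmax(dim=1)
print(prediction)
```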

Inferencing is big business and is set to become the biggest driver of growth in AI. McKinsey has predicted that the opportunity for AI inferencing hardware in the data center will be twice that of AI training hardware by 2025 ($9 billion to $10 billion by then, vs. $4 billion to $5 billion today). In edge-device deployments, the market for inferencing will be three times as large as for training by that same year.

Across the AI market overall, the deep-learning chipset market will increase from $1.6 billion in 2017 to $66.3 billion by 2025, according to Tractica forecasts.

I believe Nvidia will realize better-than-expected growth due to its early lead in AI inferencing hardware accelerator chips. That lead should last for at least the next two years, given industry growth and the company's current product mix and positioning.

In most server- and cloud-based applications of machine learning, deep learning and natural language processing, the graphics processing unit, or GPU, is the predominant chip architecture used for both training and inferencing. A GPU is a programmable processor designed to quickly render high-resolution images and video, originally used for gaming.

Nvidia's biggest strength, and arguably its largest competitive vulnerability, lies in its core chipset technology. Its GPUs have been optimized primarily for high-volume, high-speed training of AI models, though they also are used for inferencing in most server-based machine learning applications. Today, that GPU technology is a significant competitive differentiator in the AI inferencing market.

Liftr Cloud Insights has estimated that the top four clouds in May 2019 deployed Nvidia GPUs in 97.4% of their infrastructure-as-a-service compute instance types with dedicated accelerators.

While GPUs have a stronghold on training and much of server-based inference, CPUs rule edge-based inferencing.

What's the difference between GPUs and CPUs? In simple terms, a CPU is the brains of the computer and a GPU acts as a specialized microprocessor. A CPU can handle many different tasks, while a GPU can handle a few tasks very quickly. CPUs currently dominate in adoption. In fact, McKinsey projects that CPUs will account for 50% of AI inferencing demand in 2025, with ASICs, which are custom chips designed for specific activities, at 40% and GPUs and other architectures picking up the rest.
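
A rough way to see that difference in practice, assuming PyTorch is installed (the matrix sizes are illustrative, and actual timings depend entirely on the hardware): a large matrix multiply decomposes into many independent dot products, exactly the kind of work a GPU's thousands of cores can run in parallel.

```python
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# CPU: the multiply runs across a handful of general-purpose cores.
start = time.perf_counter()
_ = a @ b
print(f"CPU: {time.perf_counter() - start:.3f}s")

# GPU (if present): the same work spread across thousands of simpler cores.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()      # make sure the copies have finished
    start = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()      # GPU kernels launch asynchronously
    print(f"GPU: {time.perf_counter() - start:.3f}s")
```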

The challenge: While Nvidia's GPUs are extremely capable of handling AI's most resource-intensive inferencing tasks on cloud and server platforms, GPUs are not as cost-effective for automating inferencing within mobile, IoT, and other edge computing uses.

Various non-GPU technologies, including CPUs, ASICs, FPGAs, and neural-network processing units, have performance, cost, and power-efficiency advantages over GPUs in many edge-based inferencing scenarios, such as autonomous vehicles and robotics.
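
One example of the kind of optimization those edge-oriented alternatives lean on is post-training quantization, which trades a little precision for smaller, faster models. Below is a minimal sketch using PyTorch's dynamic-quantization API on an illustrative model; it stores linear-layer weights as 8-bit integers so inference runs more efficiently on CPU-class edge hardware.

```python
import torch
import torch.nn as nn

# Illustrative model standing in for a real trained network.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Convert linear-layer weights from 32-bit floats to 8-bit integers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 128))  # int8 weights, float activations
print(out.shape)
```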

The opportunity: The company no doubt recognizes that the much larger opportunity resides in inferencing chips and other components optimized for deployment in edge devices. But it has its work cut out for it to enhance or augment its current offerings with lower-cost, specialty AI chips to address that important part of the market.

Nvidia continues to enhance its GPU technology to close the performance gap vis-à-vis other chip architectures. One notable milestone was the recent release of AI industry benchmarks that show Nvidia technology setting new records in both training and inferencing performance. The company's forthcoming AI-optimized Jetson Xavier NX hardware module promises server-class performance in a small footprint, at low cost and low power, with flexible deployment for edge applications.

With an annual revenue run rate nearing $12 billion, Nvidia retains a formidable lead over other AI-accelerator chip manufacturers, especially AMD and Intel.

Intel, however, has upped its game in AI inference with the release of multiple specialty AI chips and the recent announcement that Ponte Vecchio, the company's first discrete GPU, should hit the market in 2021. A range of cloud, analytics, and development-tool vendors has also flocked into the AI space over the past several years.

Nvidia's early lead can be attributed to the company's focus, as well as the deep software integration that enables developers to rapidly develop and scale models on its hardware. This is why many of the hyperscalers (Alphabet's Google Cloud, Microsoft's Azure, Amazon's AWS) also deliver AI inference capabilities on their infrastructure based upon Nvidia technology.

In edge-based inferencing, where AI executes directly on mobile, embedded, and IoT devices, no one hardware/software vendor is expected to dominate, though Nvidia stands a very good chance of pacing the field. However, competition is intensifying from many directions. In edge-based AI inferencing hardware alone, Nvidia faces competition from dozens of vendors that either now provide or are developing AI inferencing hardware accelerators. Nvidia's direct rivals, who are backing diverse AI inferencing chipset technologies, include hyperscale cloud providers AWS, Microsoft, Google, Alibaba and IBM; consumer cloud providers Apple, Facebook and Baidu; semiconductor manufacturers Intel, AMD, Arm, Samsung, Qualcomm, Xilinx and LG; and a staggering number of China-based startups and technology companies such as Huawei.

The significant opportunities tied to the growth of AI inferencing will drive innovation and competition to develop more powerful and affordable solutions to leverage AI. With the deep resources and capabilities of most of the aforementioned competitors, there is certainly a possibility of a breakthrough that could rapidly shift the power positions in AI inferencing. However, at the moment, Nvidia is the company to beat, and I believe this strong market position will continue for at least the next 24 months.

Beyond its increased focus on low-cost edge-based inferencing accelerators and high-performance hardware for all AI workloads, Nvidia provides widely adopted algorithm libraries, APIs and ancillary software products designed for the full range of AI challenges. Any competitor would need to do all of this better than Nvidia. That would be a tall task, but certainly not insurmountable.

Daniel Newman is the principal analyst at Futurum Research. Follow him on Twitter @danielnewmanUV. Futurum Research, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the tech and digital industries. Neither he nor his firm holds any equity positions with any companies cited.
