Nvidia’s Next Big Thing: The HGX-1 AI Platform

Over the past three months, Nvidia's (NASDAQ:NVDA) stock has been upgraded by several financial services firms, including Goldman Sachs (NYSE:GS), Citigroup (NYSE:C) and Bernstein, while others, such as Pacific Crest, have downgraded it. In an article published in December last year, I said Nvidia's stock could scale new highs if the company's revenue continues to grow at a 20%-plus CAGR for the foreseeable future. At that time, the stock made a new high around $120 before correcting almost 20%.

I also cautioned investors that the stock could go through spine-chilling volatility, and that's exactly what is happening now. Commentary from several sell-side analyst firms has amplified that volatility well beyond what happens under normal circumstances. My medium-term target for the stock is $200+, albeit with continued volatility. The next catalyst will be the availability of the HGX-1 platform in the hyperscale space.

Nvidia: Revisiting the Bull Thesis

My investment thesis was based on the expectation that Nvidia's revenue will grow at a 20%-plus CAGR over the next three years. The primary growth driver will be the expanding application of artificial intelligence (AI) and deep learning (DL) across various industries. Let's focus on Nvidia's competitive advantage in AI/DL.

Nvidia's DGX-1 platform is its trump card for leading the next AI wave, leveraging the TensorFlow software library originally developed by Google (NASDAQ:GOOGL) (NASDAQ:GOOG). Nvidia recently launched a partner program with ODMs serving the hyperscale space, such as Foxconn, Inventec, Quanta and Wistron, for its HGX-1 AI reference platform. What's the difference between DGX-1 and HGX-1? Simply put, HGX-1 is the hyperscale version of the DGX-1 platform. The launch of the partner program should have a strong and sustainable impact on the HPC (high-performance computing) space via hyperscale players.

Investors need to understand how the advantages of TensorFlow, coupled with Nvidia's HGX-1 reference architecture, can boost revenue in the foreseeable future; at today's price of around $150, the stock hasn't fully factored this in.

CUDA, HGX and TensorFlow

TensorFlow has become extremely popular since it was released by the Google Brain team in late 2015. Its appeal lies in its use of data flow graphs to express the complex mathematics behind building efficient neural networks, combined with support for multiple GPUs and CPUs through a single API. This is the secret sauce behind TensorFlow's immense success in out-competing other AI software libraries, such as Torch, Caffe and Theano, at speeding up neural network training.
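To make the "data flow graph plus single API" idea concrete, here is a minimal sketch in the TensorFlow 1.x Python API of that era. The matrix sizes and device strings are illustrative choices of mine, not anything from Google's or Nvidia's documentation:

    import tensorflow as tf

    # Build a data flow graph: nodes are operations, edges carry tensors.
    # Pinning operations to devices is how one graph spans CPUs and GPUs.
    with tf.device('/cpu:0'):
        a = tf.random_normal([1024, 1024])   # inputs prepared on the CPU
        b = tf.random_normal([1024, 1024])

    with tf.device('/gpu:0'):                # a CUDA-backed GPU, if present
        c = tf.matmul(a, b)                  # the heavy math runs on the GPU

    # Nothing has executed yet; the session runs the graph on the chosen devices.
    config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
    with tf.Session(config=config) as sess:
        print(sess.run(c).shape)             # (1024, 1024)

The same graph runs on one CPU, one GPU or many GPUs; only the device strings change, which is exactly the single-API convenience described above.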

Engineers at Nvidia were quick to understand TensorFlow's competitive advantage over other software libraries. While Intel (NASDAQ:INTC) was busy making its MKL (Math Kernel Library) more versatile by incorporating the Neon deep learning framework from Nervana, the startup Intel acquired almost a year ago, Nvidia quietly made sure its CUDA parallel computing platform worked seamlessly with TensorFlow.
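A quick way to see that CUDA sits underneath TensorFlow is to list the devices a GPU build can reach. This is a hedged sketch using TensorFlow 1.x utilities; the output naturally depends on the machine it runs on:

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # CUDA-backed GPUs show up with device_type == 'GPU' and the card's name
    # (e.g. a Tesla P100) in physical_device_desc.
    for dev in device_lib.list_local_devices():
        print(dev.device_type, dev.name, dev.physical_device_desc)

    # Returns True only when the CUDA/cuDNN build actually finds a usable GPU.
    print(tf.test.is_gpu_available())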

The DGX-1, basically a parallel processing hardware accelerator built around eight Nvidia Tesla P100 GPUs, is the platform that gave birth to HGX-1. Both DGX-1 and HGX-1 connect their eight Tesla P100s via the NVLink interconnect. By introducing HGX-1 earlier this year and then making it available to end users through different hyperscale vendors, Nvidia has opened the door to running many kinds of AI/DL workloads for various industries.
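To see why an eight-GPU box matters for training, here is a bare-bones data-parallel sketch in the TensorFlow 1.x style: the batch is split into one "tower" per GPU and the gradients are averaged before the weights are updated. The toy one-layer model and tensor shapes are stand-ins of my own, not Nvidia's reference code:

    import tensorflow as tf

    NUM_GPUS = 8  # a DGX-1/HGX-1 class system exposes eight Tesla GPUs

    def tower_loss(x, y):
        # Stand-in model: one fully connected layer; real workloads use deep nets.
        w = tf.get_variable('w', [784, 10])
        b = tf.get_variable('b', [10])
        logits = tf.matmul(x, w) + b
        return tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))

    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.int32, [None])
    x_splits = tf.split(x, NUM_GPUS)   # one slice of the batch per GPU
    y_splits = tf.split(y, NUM_GPUS)

    opt = tf.train.GradientDescentOptimizer(0.01)
    tower_grads = []
    with tf.variable_scope(tf.get_variable_scope()):
        for i in range(NUM_GPUS):
            with tf.device('/gpu:%d' % i):          # one tower per GPU
                loss = tower_loss(x_splits[i], y_splits[i])
                tower_grads.append(opt.compute_gradients(loss))
                tf.get_variable_scope().reuse_variables()  # towers share weights

    # Average each variable's gradient across the eight towers, then update once.
    avg_grads = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars]
        avg_grads.append((tf.reduce_mean(tf.stack(grads), axis=0),
                          grads_and_vars[0][1]))
    train_op = opt.apply_gradients(avg_grads)

The fast NVLink interconnect between the eight GPUs is what keeps this kind of gradient exchange from becoming the bottleneck.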

How would HGX-1 expand the scope of DGX-1 by taking it to the hyperscale space? Before addressing that question, let's delve a bit deeper into how Nvidia's upcoming Tesla V100, i.e. Tesla powered by the Volta GPU architecture, will help DGX-1 deliver much faster performance than other GPU-based systems. The Volta architecture adds dedicated Tensor Cores for AI/DL matrix operations, which implies HGX-1 will deliver the same Tensor Core performance boost in the hyperscale space.

HGX At Hyperscale: What Does It Really Mean?

What does HGX in the hyperscale space mean for investors? Simple: more revenue for Nvidia. Since hyperscale computing lets datacenters scale a distributed computing environment from a handful of servers to many thousands across the globe, Nvidia's GPUs should see a steady rise in adoption. However, that doesn't necessarily mean every datacenter in the world will need to deploy the high-end Tesla P100 or the upcoming Tesla V100 GPUs.

Only the hyperscale datacenters will need them. For traditional datacenters, Nvidia's Pascal-powered midrange Quadro workstation GPUs will be enough. This essentially means HGX has the ability to boost overall demand for Nvidia's server-grade GPUs. Nvidia's decision to launch the HGX partner program should help it grab a significant share of the AI/DL market from archrivals Intel and AMD (NASDAQ:AMD).

Intel is also trying to make MKL compatible with TensorFlow, but that work isn't complete yet. I believe Intel has the potential to compete with Nvidia in the parallel processing space. However, given an accelerator strategy that leans heavily on FPGAs and its late entrance into the AI/DL marketplace, it won't become a real competitor to Nvidia just yet.

Unfortunately for Nvidia, though, AMD has made significant progress in the last six months. AMD, with its upcoming Radeon Instinct line of GPUs and its upcoming Zen-based Naples CPUs (bundled with SenseMI technology), could become Nvidia's real competitor, but for that to happen TensorFlow needs to gain solid support for OpenCL (Open Computing Language), the framework that is the CUDA equivalent for AMD and, to some extent, Intel. Please refer to my article mentioned at the beginning to learn more about CUDA and OpenCL.

Valuation

Nvidia's trailing 12-month revenue is $7.5 billion. A 20% CAGR, which is quite possible given its AI/DL leadership as analyzed above, would catapult revenue to roughly $13 billion by 2020. And since such growth looks all but given, I won't be surprised if the Street awards the stock a P/S multiple of 17-18x in the next six months. A year out, Nvidia's revenue will be around $9 billion at that growth rate, and at a ~15x P/S multiple the stock would get close to, or even slightly beyond, the $200 level. So there is a strong possibility that the stock crosses the $200 mark in the next six to twelve months.
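For reference, the arithmetic behind those figures, using only the assumptions stated above (the 20% growth rate and the P/S multiple); converting the implied market cap into a per-share price would additionally require the diluted share count, which is left out here:

    # Revenue projection at a 20% CAGR from $7.5B of trailing-twelve-month revenue.
    ttm_revenue = 7.5  # $ billions

    for years in (1, 2, 3):
        projected = ttm_revenue * 1.20 ** years
        print("Year %d revenue: $%.1fB" % (years, projected))
    # Year 1: ~$9.0B, Year 2: ~$10.8B, Year 3: ~$13.0B

    # Implied market cap at the assumed ~15x price-to-sales multiple one year out.
    ps_multiple = 15
    print("Implied market cap: $%.0fB" % (ttm_revenue * 1.20 * ps_multiple))  # ~$135B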

(Chart: NVDA Revenue (TTM), data by YCharts)

Final Words

Nvidia is a secular bull story, but you need nerves of steel to be part of it. AI/DL was just a concept even a few years ago and is only now starting to take the shape of an industry. I would recommend that risk-tolerant investors hold the stock as a long-term investment.

Disclosure: I am/we are long NVDA, GOOGL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
