{"id":1028002,"date":"2024-02-27T02:39:20","date_gmt":"2024-02-27T07:39:20","guid":{"rendered":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/analysis-how-nvidia-surpassed-intel-in-annual-revenue-and-won-the-ai-crown-crn.php"},"modified":"2024-02-27T02:39:20","modified_gmt":"2024-02-27T07:39:20","slug":"analysis-how-nvidia-surpassed-intel-in-annual-revenue-and-won-the-ai-crown-crn","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/analysis-how-nvidia-surpassed-intel-in-annual-revenue-and-won-the-ai-crown-crn.php","title":{"rendered":"Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown &#8211; CRN"},"content":{"rendered":"<p><p>    A deep-dive analysis into the market dynamics that allowed    Nvidia to take the AI crown and surpass Intel in annual    revenue. CRN also looks at what the x86 processor giant could    do to fight back in a deeply competitive environment.  <\/p>\n<\/p>\n<p>    Several months after Pat Gelsinger became Intels CEO in 2021,    he told me that his biggest concern in the data center wasnt    Arm, the British chip designer that is enabling a new wave of    competition against the semiconductor giants Xeon server CPUs.  <\/p>\n<p>    Instead, the Intel veteran saw a bigger threat in Nvidia and    its     uncontested hold over the AI computing space and said his    company would give its all to challenge the GPU designer.  <\/p>\n<p>    [Related:     The ChatGPT-Fueled AI Gold Rush: How Solution Providers Are    Cashing In]  <\/p>\n<p>    Well, theyre going to get contested going forward, because    were bringing leadership products into that segment,    Gelsinger told me for a     CRN magazine cover story.  
<\/p>\n<p>More than three years later, Nvidia's latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giant's dominance, and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.<\/p>\n<p>When Nvidia's fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.<\/p>\n<p>The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent, or more than double, from the previous year, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.<\/p>\n<p>Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. This fiscal year ran concurrent with the calendar year, from January to December.<\/p>\n<p>While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.<\/p>\n<p>Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing, with a major emphasis on data centers, cloud computing and edge computing, and then found itself last year at the center of a massive demand cycle due to hype around generative AI.
<\/p>\n<p>This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.<\/p>\n<p>Despite any shortcomings, the tech industry found more promise than concern in the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips can compute such large amounts of data far faster than CPUs ever could.<\/p>\n<p>The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies like Amazon Web Services and Meta.<\/p>\n<p>By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as much as six times compared with the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.<\/p>\n<p>Among the transformer models that benefited from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.<\/p>\n<p>But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year.
While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.<\/p>\n<p>At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had been traditionally designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software that benefits from accelerated computing.<\/p>\n<p>Over the last several years, Nvidia's stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.<\/p>\n<p>While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.<\/p>\n<p>All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.
<\/p>\n<p>This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.<\/p>\n<p>Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with cloud service providers representing roughly half of that revenue in the third quarter and more than half in the fourth. Nvidia also cited strong demand from businesses outside of those two groups, though not as consistently.<\/p>\n<p>In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industry's continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.<\/p>\n<p>\"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time,\" he said.<\/p>\n<p>Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.<\/p>\n<p>Not only was this less than a third of what Nvidia earned in total data center revenue for the 12-month period ending in late January, it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.
<\/p>\n<p>The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.<\/p>\n<p>As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.<\/p>\n<p>This created multiple problems for the company.<\/p>\n<p>While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices of such components are far lower than those of Nvidia's most powerful GPUs. And these kinds of servers often contain four or eight GPUs but only two CPUs, another way GPUs enable far greater revenue growth than CPUs.<\/p>\n<p>In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were eating into the company's data center CPU revenue, saying that its GPU competitors seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more for cloud service providers.<\/p>\n<p>One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.<\/p>\n<p>Then there was the issue of long lead times for Nvidia's GPUs, which were caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.
<\/p>\n<p>Intel's CPU business also took a hit due to competition from AMD, which grew x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared with the same period a year earlier, according to Mercury Research.<\/p>\n<p>The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.<\/p>\n<p>All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intel's data center business.<\/p>\n<p>Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.<\/p>\n<p>The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high-core-count processors, but that wasn't enough to offset the plunge in sales volume.<\/p>\n<p>While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super-powerful and expensive processors at its center.<\/p>\n<p>Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)<\/p>\n<p>But the semiconductor giant has so far only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter.
This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.<\/p>\n<p>Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year, and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecast that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.<\/p>\n<p>Then there's the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted it to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.<\/p>\n<p>There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.<\/p>\n<p>Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fall, the H200, and will continue with the B100 this year.<\/p>\n<p>Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads.
This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.<\/p>\n<p>For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company is the growing number of price-performance advantages found by third parties like AWS and Databricks, as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.<\/p>\n<p>The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership, with four times the processing power and double the networking bandwidth of its predecessor.<\/p>\n<p>But the company is taking a broader view of the AI computing market and hopes to come out on top with its \"AI everywhere\" strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, which the company believes is more economical and pragmatic for a broader constituency of organizations.<\/p>\n<p>Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, come with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple with its Mac computers and in-house chip designs.<\/p>\n<p>Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.
<\/p>\n<p>Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.<\/p>\n<p>Called Intel Foundry, the business's lofty 2030 goal means it hopes to generate more revenue than South Korea's Samsung in only six years. This would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year, thanks in large part to large manufacturing orders from the likes of Nvidia and Apple.<\/p>\n<p>All of this requires Intel to execute at a high level across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.<\/p>\n<p>At Intel Foundry's launch last week, Gelsinger made that clear.<\/p>\n<p>\"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry,\" he said.<\/p>\n<p>More:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.crn.com\/news\/components-peripherals\/2024\/analysis-how-nvidia-surpassed-intel-in-annual-revenue-and-won-the-ai-crown\" title=\"Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown - CRN\">Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown - CRN<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. 
CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/analysis-how-nvidia-surpassed-intel-in-annual-revenue-and-won-the-ai-crown-crn.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-1028002","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1028002"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=1028002"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/1028002\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=1028002"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=1028002"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=1028002"}],"curies":
[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}