
Nvidia revenue grows 265 percent as H200 debut nears

Nvidia CEO Jensen Huang has claimed responsibility for hyperscalers’ decisions to extend the operating life of their server fleets, and suggested they’ve done so because they can’t improve performance by persisting with general-purpose computing and must instead adopt accelerated machines.

As the boss of the planet’s leading purveyor of accelerated computing, he would say that. But on Wednesday’s Q4 earnings call he and CFO Colette Kress pointed to Microsoft and ServiceNow both achieving record adoption of new products that include Nvidia-powered AI as evidence that accelerated computing is the new must-have.

Nvidia’s numbers also lend credence to Huang’s belief that accelerated computing is booming – big time – because CPU-centric architectures can’t run the AI workloads the world wants.

We reconfigured our products for China in a way that is not software hackable

Q4 FY2024 revenue came in at $22.1 billion – up 265 percent year-over-year. Full-year revenue of $60.9 billion was up 126 percent on FY2023. Datacenter revenue was the big driver, growing 409 percent year-over-year to $18.4 billion for the quarter and rising 217 percent to a full-year total of $47.5 billion. That last figure means Nvidia is a bigger enterprise player than HPE, and also bigger than IBM’s hardware biz.
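For a sense of scale, here is a quick back-of-the-envelope check of that growth arithmetic – a minimal Python sketch in which the implied year-ago baselines are derived from the reported figures, not numbers Nvidia quoted:

def implied_year_ago(current_usd_bn, growth_pct):
    # current = prior * (1 + growth/100), so prior = current / (1 + growth/100)
    return round(current_usd_bn / (1 + growth_pct / 100), 2)

print(implied_year_ago(22.1, 265))   # total revenue a year earlier: ~6.05 ($bn)
print(implied_year_ago(60.9, 126))   # FY2023 total revenue: ~26.95
print(implied_year_ago(18.4, 409))   # datacenter revenue a year earlier: ~3.61
print(implied_year_ago(47.5, 217))   # FY2023 datacenter revenue: ~14.98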

Kress revealed that a likely driver of future growth – the forthcoming H200 accelerator – will debut in Q2, and forecast strong demand because it nearly doubles inference performance compared to Nvidia’s current champion card, the H100.

Huang and Kress both warned that demand will initially exceed supply for the H200, and the CFO added that next-generation Hopper products will be supply-constrained. Huang explained that the complexity of Nvidia products means it’s not immediately possible to step up production to levels that meet demand.

“Whenever we do new products, it ramps from zero to a very large number and you can’t do that overnight,” he explained, pointing out that Hopper products have 35,000 components.

He later acknowledged buyers’ concerns about shortages in the context of purchases of hundreds of thousands of H100s by hyperscalers. “We allocate fairly to ensure use,” he claimed, explaining that Nvidia won’t let its hyperscale customers get their hands on hardware before they have datacenters ready to run it.

China delivered a “mid-single digit percentage” of datacenter revenue, and was forecast to do the same in Q1 of FY2025.

Huang conceded this quarter’s result represented a significant decline in the Middle Kingdom, caused by a company-wide pause in sales sparked by stricter US sanctions on exports of AI tech introduced in October 2023.

Huang said Nvidia stopped shipping kit to China in response and has since developed products it believes will qualify for an export license.

“We reconfigured our products in a way that is not software hackable in any way,” he claimed. “Now we are sampling for customers in China and we will do our best to succeed there.”

He added his belief that “The US government would like us to be as successful in China as possible.”

Networking revenue reached a $13 billion annual run rate and grew 5x year-over-year. CFO Kress predicted further networking growth, helped by the debut this quarter of Nvidia’s Spectrum-X Ethernet range. Spectrum-X is Nvidia’s set of tweaks to Ethernet to make it better able to handle AI workloads. Huang described Spectrum-X as “AI-optimized” in contrast to InfiniBand’s status as “AI-dedicated.”

Enterprise software has become a billion-dollar business for Nvidia, and Huang forecast that it would grow substantially, led by the Nvidia AI Enterprise suite.

Huang revealed Nvidia has substantial teams who work closely with cloud service providers to help them run apps on accelerators, but noted that hands-on approach won’t scale to every app and every user. As generative AI becomes a common enterprise workload, Nvidia is building libraries to ensure common software can also get the best out of its accelerators – at a price of $4,500 per GPU per year for its Nvidia AI Enterprise offering.
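Since the license is charged per GPU per year, the bill scales linearly with fleet size. A minimal sketch of that arithmetic – the fleet sizes below are hypothetical examples, not figures from the call:

PRICE_PER_GPU_PER_YEAR = 4_500  # USD, the Nvidia AI Enterprise list price cited above

# Hypothetical fleet sizes, purely for illustration
for gpus in (8, 256, 10_000):
    print(f"{gpus:>6} GPUs -> ${gpus * PRICE_PER_GPU_PER_YEAR:,} per year")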

“My guess is that every enterprise in the world will run on Nvidia AI Enterprise,” he predicted. “This is likely going to be a very significant business. We are really just getting started.”

Gaming is Nvidia’s OG biz, and grew 15 percent year-over-year to reach $10.45 billion in full-year revenue.

Q1 revenue was forecast at $24 billion – growth rather less steep than Nvidia has reported in some recent quarters. A seasonal slowdown in gaming revenue was cited as one reason for the forecast. That said, Nvidia’s Q4 result beat guidance by a couple of billion dollars.

Investors clearly liked what they heard: Nvidia’s share price ended the trading day at $675.00 and sped to $743.98 after the bell before settling at $730.50. ®

Don’t miss The Next Platform’s commentary and analysis here.
