December 26, 2024

Nvidia’s historic gains were driven by its data center business, which grew 427% in the latest quarter as customers continued to snap up its artificial intelligence processors.

Now Nvidia is signaling to investors that customers who spend billions on its chips will also be able to make money from artificial intelligence. This has been a long-standing concern: companies can only burn so much cash on infrastructure before they need to see profits.

If Nvidia’s chips can deliver strong, sustainable returns on investment, the artificial intelligence boom may have room to grow beyond its early stages as companies plan longer-term projects.

The most important customers for Nvidia’s graphics processing units are the large cloud providers: Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud. They accounted for a “mid-40s” percentage of Nvidia’s $22.56 billion in data center sales in the April quarter, the company said.

There is also a new crop of dedicated GPU data center startups that buy Nvidia’s GPUs, install them in server racks, load them into data centers, connect them to the Internet, and rent them out to customers by the hour.

For example, GPU cloud CoreWeave currently quotes $4.25 per hour to rent an Nvidia H100. This server time is critical for training large language models like OpenAI’s GPT, and it is how many AI developers end up accessing Nvidia hardware.

After Nvidia posted a better-than-expected earnings report on Wednesday, finance chief Colette Kress told investors that cloud providers see “immediate and strong returns on investment.” For every $1 a cloud provider spends on Nvidia hardware, it can make $5 renting it out over the next four years, she said.

Kress also said that newer Nvidia hardware will have an even stronger return on investment, citing the company’s HGX H200 product, which combines eight GPUs to provide access to Meta’s Llama AI model rather than raw access to cloud computers.

“This means that for every $1 spent on an Nvidia HGX H200 server at current prices, an API provider serving Llama 3 tokens could generate $7 in revenue over four years,” Kress said.

Part of the calculation depends on how the chips are utilized: whether they run 24 hours a day or less often.
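To see why utilization matters so much to these figures, here is a rough back-of-envelope sketch using the $4.25-per-hour CoreWeave H100 rate quoted above. The utilization levels are purely illustrative assumptions, not figures from Nvidia or CoreWeave:

```python
# Back-of-envelope four-year rental revenue for a single H100 GPU,
# using the $4.25/hour CoreWeave rate cited in the article.
# Utilization levels below are illustrative assumptions only.

HOURLY_RATE = 4.25          # USD per GPU-hour (CoreWeave quote)
HOURS_PER_YEAR = 24 * 365   # ignoring leap days
YEARS = 4

def four_year_revenue(utilization: float) -> float:
    """Gross revenue from renting one GPU for four years at a given utilization (0-1)."""
    return HOURLY_RATE * HOURS_PER_YEAR * YEARS * utilization

for util in (1.0, 0.75, 0.5):
    print(f"{util:.0%} utilization: ${four_year_revenue(util):,.0f}")
```

At full utilization a single H100 would gross roughly $149,000 over four years at that rate; at half utilization, about $74,000. This is the kind of hardware-cost-multiple arithmetic behind Kress’s $1-in, $5-out claim.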

Nvidia CEO Jensen Huang told analysts on an earnings call that OpenAI, Google, Anthropic and as many as 20,000 generative AI startups are lining up to buy every GPU that cloud providers can bring online.

“All the work [cloud service providers] are doing is consuming every GPU available,” Huang said. “Customers put a lot of pressure on us to deliver the systems and get them operational as quickly as possible.”

Huang said that Meta has announced its intention to spend billions of dollars on 350,000 Nvidia chips, even though the company is not a cloud provider. Facebook parent Meta will likely have to monetize its investment through its advertising business or by building chatbots into its existing apps.

Huang said Meta’s server clusters are an example of “the basic infrastructure for artificial intelligence production,” or “what we call an artificial intelligence factory.”

Nvidia also surprised analysts with an aggressive timetable for its next-generation GPU, called Blackwell, which it said will be available in data centers in the fiscal fourth quarter. Those comments eased concerns of a slowdown as companies wait for the latest technology.

The first customers of the new chip include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla and Elon Musk’s xAI, Huang said.

Nvidia shares rose 6% in after-hours trading, topping $1,000 for the first time. Alongside its earnings, Nvidia announced a 10-for-1 stock split after the company’s stock price soared 25-fold over the past five years.
