Microsoft Leads as Nvidia’s Largest AI Accelerator Customer with 485,000 Hopper GPUs Purchased in 2024

Microsoft is by far Nvidia’s largest customer for AI accelerators. According to an estimate by market research firm Omdia, reported by the Financial Times, the company purchased 485,000 Hopper GPUs, specifically the H100 and H200 models, in 2024. It is unclear how many of the chips Microsoft uses itself and how many are intended for its partner, OpenAI.

Assuming an average price of $30,000 per GPU, this purchase would amount to nearly $15 billion. Nvidia’s annual revenue from server products is expected to exceed $100 billion this year; that figure also includes Arm-based processors, network cards, and interconnect technology.
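As a rough back-of-the-envelope check, the arithmetic behind that figure looks like this (the $30,000 unit price is an assumed average, not a confirmed list price):

```python
# Rough estimate of Microsoft's 2024 Hopper spend, based on
# Omdia's reported unit count and an assumed average price.
units = 485_000          # Hopper GPUs (H100/H200), per Omdia's estimate
avg_price_usd = 30_000   # assumed average price per GPU

total_usd = units * avg_price_usd
print(f"Estimated spend: ${total_usd / 1e9:.2f} billion")  # ~$14.55 billion
```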

Other buyers trail by a wide margin. In second and third place among the largest Hopper customers are the Chinese companies ByteDance and Tencent, each purchasing around 230,000 Hopper GPUs. Because of US export restrictions, they buy the scaled-down H800 and H20 variants: the H800 initially shipped with a slower NVLink interconnect, and Nvidia introduced the H20 with roughly halved computing power after the export rules were tightened.

ByteDance uses its AI data centers for the AI algorithms behind TikTok. Tencent owns numerous Chinese and international companies and operates WeChat in China, which integrates AI agents. Meta reportedly purchased 224,000 Hopper GPUs plus 173,000 AMD Instinct MI300 accelerators, which would make it AMD’s largest customer. Microsoft is estimated to have acquired 96,000 Instinct MI300s.

xAI, Amazon, and Google round out the top five Nvidia Hopper customers, each with 150,000 to 200,000 units.

Amazon, Google, and Meta are the furthest along in deploying their own AI accelerators. Google and Meta are each said to have deployed around 1.5 million of their own chips; Google calls them Tensor Processing Units (TPUs), while Meta’s are the Meta Training and Inference Accelerators (MTIA). Amazon has deployed 1.3 million chips of its Trainium and Inferentia families. Microsoft lags behind with around 200,000 of its own Maia accelerators.

These in-house designs are slower per chip than Nvidia’s Hopper GPUs. Nvidia, meanwhile, is ramping up production of its new Blackwell generation, which is significantly faster but also more expensive.
