Artificial Intelligence

DeepSeek’s Rise Lays Bare Nvidia Weakness

Author: Asa Fitch | Source: WSJ
January 29, 2025 at 14:10
Jensen Huang, CEO of Nvidia, at a trade show in Taiwan last year. PHOTO: RITCHIE B TONGO/EPA/SHUTTERSTOCK

A new Chinese AI model threatens to diminish the need for Nvidia’s most-expensive chips, but some say the concerns are overblown


DeepSeek’s dramatic rise exposes the greatest risk facing Nvidia: that the intense demand for its advanced chips could wane.

Nvidia’s success as the computational arms dealer for the artificial-intelligence boom has made it a target for a host of rivals seeking to diminish its dominance. That list includes other chip makers and customers that are developing their own AI chips to reduce their spending. But few in the tech world saw a nimble Chinese AI model as a risk to Nvidia’s business, even if the final effect is unclear.   

Nvidia has been perhaps the greatest beneficiary of the AI boom, with profit exceeding $63 billion in the last four quarters alone. Its shares have surged eightfold since the end of 2022.

With the excitement around DeepSeek emerging over the weekend, Nvidia’s shares plunged around 17% Monday. The stock stabilized Tuesday, closing up 8.9% to $128.99. 

Nvidia cast the DeepSeek jolt in a positive light. In a statement Monday, it said DeepSeek’s advance was an excellent illustration of new ways of operating AI models. Doing the work of serving up such AI models to users required large numbers of Nvidia’s chips, it said.

The concern for investors is that DeepSeek’s more efficient way of developing AI could upend a status quo where the most sophisticated AI models require the largest number of Nvidia’s AI chips to train. AI development has thus far led to insatiable demand and supply shortages for Nvidia’s most advanced chips as big tech pours cash into AI data centers.

Some of Nvidia’s biggest customers, including Amazon.com, Google and Microsoft, have also been developing their own in-house chip designs to support their AI platforms. Rivals Intel and Advanced Micro Devices are pushing their own AI-specific chips to challenge Nvidia.

 

Amazon’s in-house chips lab in Austin, Texas. PHOTO: JORDAN VONDERHAAR FOR WSJ

 

Now DeepSeek, a Chinese startup, has unveiled a low-cost AI model on par with some of the most advanced in the world but purportedly trained with far fewer Nvidia chips. Concern about its advances reverberated across Wall Street and in Washington, where it was seen as a threat to the U.S.’s technological lead over China.

DeepSeek’s rise could translate into renewed scrutiny of Nvidia’s sales of AI chips to China. The U.S. in recent years has tightened restrictions on shipments of chips to China, aiming to limit its rival’s development of the most advanced AI tools. Nvidia responded by producing less-powerful chips for the Chinese market that met Washington’s restrictions, some of which DeepSeek said it used to develop its latest “R1” model.

Morgan Stanley analysts estimated in a note Tuesday that Nvidia sold more than $10 billion of China-specific chips over the past 12 months, and called tighter export controls one potential ramification of DeepSeek’s emergence. But previous controls hadn’t kept China from producing AI models, they said, and more controls could further spur Chinese chip development.

Some chip-industry executives believe DeepSeek’s advances could end up being more of a benefit than a curse for Nvidia as the AI industry adopts its more efficient approach. Jonathan Ross, chief executive of AI chip startup Groq, said that when the cost of operating a model, a process called “inference,” falls, the capabilities of AI writ large are enhanced.

“If the cost of inference is too high, people aren’t going to use it as much,” he said, but higher usage leads developers to invest in training better models. “And so there’s a virtuous cycle between training and inference.”

Tech-industry insiders also cite Jevons paradox, the notion in economics that technological advancements that make resources more efficient to use often lead to greater consumption of those resources.

“There is latent demand for AI capabilities across a number of domains but where those tasks are not worth the cost of inference being charged by leading AI labs,” said Akash Bajwa, a principal at Earlybird Venture Capital in London. Lower prices from DeepSeek could help realize that demand, he said.
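As a purely illustrative sketch of that argument, the hypothetical numbers below show how a steep drop in per-query inference cost can still increase total spending on compute if cheaper queries unlock enough latent demand; none of the figures come from DeepSeek, Nvidia or the analysts quoted here.

```python
# Hypothetical illustration of Jevons paradox applied to AI inference.
# All numbers are invented for the sake of the example.

def total_spend(cost_per_query: float, queries_per_day: float) -> float:
    """Total daily spending on inference compute."""
    return cost_per_query * queries_per_day

# Before an efficiency breakthrough: $0.01 per query, 1 billion queries a day.
baseline_cost = 0.01
baseline_queries = 1e9
before = total_spend(baseline_cost, baseline_queries)

# After: cost per query falls 10x. If cheaper queries unlock enough latent
# demand that usage grows more than 10x (say 20x), total compute spending
# rises even though each query is far cheaper -- the Jevons-paradox outcome.
new_cost = baseline_cost / 10
new_queries = baseline_queries * 20
after = total_spend(new_cost, new_queries)

print(f"Daily inference spend before: ${before:,.0f}")  # $10,000,000
print(f"Daily inference spend after:  ${after:,.0f}")   # $20,000,000
```

Whether usage actually grows enough to offset the lower unit cost is the open question dividing bulls and bears on Nvidia.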

 

Nvidia’s offices in Shanghai. PHOTO: QILAI SHEN/BLOOMBERG NEWS

 

The way DeepSeek’s model works may also point to a continued need for large numbers of Nvidia’s chips, analysts say. The company’s R1 model uses engineering tricks that require significant computing power as it produces multistep answers to users’ queries.

There is also skepticism around DeepSeek’s claims, and some American AI industry leaders said Chinese researchers may have stockpiled leading-edge Nvidia chips before the U.S. restrictions took effect, or used workarounds such as accessing Nvidia-powered computing from countries outside the U.S. and China. The Biden administration in its final days implemented new rules to address such blind spots.

AI, meanwhile, has already begun to shift focus from training large models to deploying them for use: Nvidia said last year that more than 40% of its data-center revenue was coming from inference. Being able to serve up a popular AI model smoothly to a user base across the globe—something DeepSeek has struggled to do as its app shot up the charts—requires a lot of computing power on its own.

“I get the nervousness that it could reduce demand or cause customers to rethink near-term purchases,” said Stacy Rasgon, a semiconductor analyst at Bernstein Research. “But even with that, in the near term spending is going up.”

DeepSeek’s emergence as a threat to AI came just days after Meta Platforms Chief Executive Mark Zuckerberg said his company would spend up to $65 billion on AI and a large new data center this year. That would be an increase of as much as 70% for the company, which reports its quarterly earnings on Wednesday. Earlier this month, Microsoft said it plans to invest $80 billion in data centers for AI in the fiscal year ending in June.

Microsoft also reports earnings Wednesday along with Tesla, another heavy AI investor. 

In addition, last week, tech leaders pledged to put as much as $500 billion into AI infrastructure in the U.S., an initiative announced at the White House after President Trump began his second term.

Write to Asa Fitch at asa.fitch@wsj.com
