DeepSeek ‘punctures’ tech spending plans, and what analysts are saying

Chinese AI firm DeepSeek has emerged as a potential challenger to U.S. AI leaders, demonstrating breakthrough models that the company says deliver performance comparable to leading chatbots at a fraction of the cost. The company’s mobile app, released in early January, has also topped iPhone charts in major markets including the U.S., the UK, and China.

Founded in 2023 by Liang Wenfeng, former chief of AI-driven quant hedge fund High-Flyer, DeepSeek makes its models open-source and incorporates a reasoning feature that articulates its thinking before providing responses.

Wall Street’s reaction has been mixed. While Jefferies warns that DeepSeek’s efficient approach “punctures some of the capex euphoria” following recent spending commitments from Meta and Microsoft — each exceeding $60 billion this year — Citi questions whether such results were achieved without advanced GPUs. Goldman Sachs sees broader implications, suggesting the development could reshape competition between established tech giants and startups by lowering barriers to entry.

Here’s how Wall Street analysts are reacting to DeepSeek, in their own words (emphasis mine):

Jefferies

DeepSeek’s power implications for AI training punctures some of the capex euphoria which followed major commitments from Stargate and Meta last week. With DeepSeek delivering performance comparable to GPT-4o for a fraction of the computing power, there are potential negative implications for the builders, as pressure on AI players to justify ever increasing capex plans could ultimately lead to a lower trajectory for data center revenue and profit growth.

If smaller models can work well, it is potentially positive for smartphone. We are bearish on AI smartphone as AI has gained no traction with consumers. More hardware upgrade (adv pkg+fast DRAM) is needed to run bigger models on the phone, which will raise costs. AAPL’s model is in fact based on MoE, but 3bn data parameters are still too small to make the services useful to consumers. Hence DeepSeek’s success offers some hope but there is no impact on AI smartphone’s near-term outlook.

China is the only market that pursues LLM efficiency owing to chip constraint. Trump/Musk likely recognize the risk of further restrictions is to force China to innovate faster. Therefore, we think it likely Trump will relax the AI Diffusion policy.

Citi

While DeepSeek’s achievement could be groundbreaking, we question the notion that its feats were done without the use of advanced GPUs to fine tune it and/or build the underlying LLMs the final model is based on through the Distillation technique. While the dominance of the US companies on the most advanced AI models could be potentially challenged, that said, we estimate that in an inevitably more restrictive environment, US’ access to more advanced chips is an advantage. Thus, we don’t expect leading AI companies would move away from more advanced GPUs which provide more attractive $/TFLOPs at scale. We see the recent AI capex announcements like Stargate as a nod to the need for advanced chips.

Bernstein

In short, we believe that 1) DeepSeek DID NOT “build OpenAI for $5M”; 2) the models look fantastic but we don’t think they are miracles; and 3) the resulting Twitterverse panic over the weekend seems overblown.

Our own initial reaction does not include panic (far from it). If we acknowledge that DeepSeek may have reduced costs of achieving equivalent model performance by, say, 10x, we also note that current model cost trajectories are increasing by about that much every year anyway (the infamous “scaling laws…”) which can’t continue forever. In that context, we NEED innovations like this (MoE, distillation, mixed precision etc) if AI is to continue progressing. And for those looking for AI adoption, as semi analysts we are firm believers in the Jevons paradox (i.e. that efficiency gains generate a net increase in demand), and believe any new compute capacity unlocked is far more likely to get absorbed due to usage and demand increase vs impacting long term spending outlook at this point, as we do not believe compute needs are anywhere close to reaching their limit in AI. It also seems like a stretch to think the innovations being deployed by DeepSeek are completely unknown by the vast number of top tier AI researchers at the world’s other numerous AI labs (frankly we don’t know what the large closed labs have been using to develop and deploy their own models, but we just can’t believe that they have not considered or even perhaps used similar strategies themselves).

Morgan Stanley

We have not confirmed the veracity of these reports, but if they are accurate, and advanced LLM are indeed able to be developed for a fraction of previous investment, we could see generative AI run eventually on smaller and smaller computers (downsizing from supercomputers to workstations, office computers, and finally personal computers) and the SPE industry could benefit from the accompanying increase in demand for related products (chips and SPE) as demand for generative AI spreads.

Goldman Sachs

With the latest developments, we also see 1) potential competition between capital-rich internet giants vs. start-ups, given lowering barriers to entry, especially with recent new models developed at a fraction of the cost of existing ones; 2) from training to more inferencing, with increased emphasis on post-training (including reasoning capabilities and reinforcement capabilities) that requires significantly lower computational resources vs. pre-training; and 3) the potential for further global expansion for Chinese players, given their performance and cost/price competitiveness.

We continue to expect the race for AI application/AI agents to continue in China, especially amongst To-C applications, where China companies have been pioneers in mobile applications in the internet era, e.g., Tencent’s creation of the Weixin (WeChat) super-app. Amongst To-C applications, ByteDance has been leading the way by launching 32 AI applications over the past year. Amongst them, Doubao has been the most popular AI Chatbot thus far in China with the highest MAU (c.70mn), which has recently been upgraded with its Doubao 1.5 Pro model. We believe incremental revenue streams (subscription, advertising) and eventual/sustainable path to monetization/positive unit economics amongst applications/agents will be key.

For the infrastructure layer, investor focus has centered around whether there will be a near-term mismatch between market expectations on AI capex and computing demand, in the event of significant improvements in cost/model computing efficiencies. For Chinese cloud/data center players, we continue to believe the focus for 2025 will center around chip availability and the ability of CSP (cloud service providers) to deliver improving revenue contribution from AI-driven cloud revenue growth, and beyond infrastructure/GPU renting, how AI workloads & AI related services could contribute to growth and margins going forward. We remain positive on long-term AI computing demand growth as a further lowering of computing/training/inference costs could drive higher AI adoption. See also Theme #5 of our key themes report for our base/bear scenarios for BBAT capex estimates depending on chip availability, where we expect aggregate capex growth of BBAT to continue in 2025E in our base case (GSe: +38% yoy) albeit at a slightly more moderate pace vs. a strong 2024 (GSe: +61% yoy), driven by ongoing investment into AI infrastructure.

J.P. Morgan

Above all, much is made of DeepSeek’s research papers, and of their models’ efficiency. It’s unclear to what extent DeepSeek is leveraging High-Flyer’s ~50k hopper GPUs (similar in size to the cluster on which OpenAI is believed to be training GPT-5), but what seems likely is that they’re dramatically reducing costs (inference costs for their V2 model, for example, are claimed to be 1/7 that of GPT-4 Turbo). Their subversive (though not new) claim – that started to hit the US AI names this week – is that “more investments do not equal more innovation.” Liang: “Right now I don’t see any new approaches, but big firms do not have a clear upper hand. Big firms have existing customers, but their cash-flow businesses are also their burden, and this makes them vulnerable to disruption at any time.” And when asked about the fact that GPT5 has still not been released: “OpenAI is not a god, they won’t necessarily always be at the forefront.”

UBS

Throughout 2024, the first year we saw massive AI training workload in China, more than 80-90% IDC demand was driven by AI training and concentrated in 1-2 hyperscaler customers, which translated to wholesale hyperscale IDC demand in relatively remote area (as power-consuming AI training is sensitive to utility cost rather than user latency).

If AI training and inference cost is significantly lower, we would expect more end users would leverage AI to improve their business or develop new use cases, especially retail customers. Such IDC demand means more focus on location (as user latency is more important than utility cost), and thus greater pricing power for IDC operators that have abundant resources in tier 1 and satellite cities. Meanwhile, a more diversified customer portfolio would also imply greater pricing power.

We’ll update the story as more analysts react.




