Nvidia Faces Sudden Setback!

The recent market situation surrounding Nvidia, a major player in the artificial intelligence (AI) chip sector, has stirred considerable attention and speculation. In a surprising move, Morgan Stanley has substantially revised its forecast for Nvidia's GB200 shipment volumes for 2025, reflecting broader concerns about the company's positioning amid a tumultuous market landscape.

In its latest research report, Morgan Stanley cut its anticipated shipment figures for Nvidia's GB200 chipset from an earlier estimate of 30,000–35,000 units down to a new range of 20,000–25,000 units. In some scenarios, it foresees shipments dipping below the 20,000 mark. This downward revision has significant implications, potentially triggering a ripple effect throughout the GB200 supply chain, with estimated market impacts ranging from $30 billion to $35 billion. Such developments portend challenging times ahead for associated semiconductor firms.

The rationale behind this reduction appears to stem from Nvidia's key customer, Microsoft, experiencing a deceleration in capital expenditure growth, which negatively affects the overall supply chain.

Additionally, the nascent ecosystem surrounding cloud computing and AI infrastructure has yet to mature, particularly in areas critical for GB200 deployment, such as networking and power infrastructure, thus exacerbating supply chain bottlenecks.

Moreover, Morgan Stanley cites ongoing debates over large language model (LLM) efficiency, particularly between leading firms like DeepSeek and Microsoft. These disagreements are expected to persist through 2025, complicating market efforts to reassess stock valuations. Furthermore, historical perspective suggests that capital spending within the cloud computing industry often evolves in cyclical patterns, typically a surge followed by a contraction phase.

According to historical data compiled by Morgan Stanley, the cloud computing sector typically undergoes growth cycles lasting two to three years, followed by a decline lasting two to four quarters.

The downturn is often characterized by a slowdown in year-over-year growth among major American cloud service providers.

Despite these gloomy forecasts, cloud computing capital expenditures have rebounded since the second and third quarters of 2023, buoyed by an uptick in market investment in GPU servers. The figures indicate 62% year-over-year growth in Q3 2024, up from a previous 59%, signaling robust momentum in the sector.

Nevertheless, Morgan Stanley asserts that if historical trends play out as anticipated, this growth cycle might extend only until the first half of 2025, with a slowdown expected in Q4. Investor expectations regarding growth in the GB200 supply chain may be overzealous, potentially leading to downward pressure on high P/E cloud computing stocks.

As Nvidia prepares to release financial results on February 26, investors and market analysts alike will scrutinize the company's performance metrics and guidance closely.

Questions surrounding Nvidia's operational performance and valuation assessment will be paramount, especially given its staggering loss of $65 billion in market capitalization.

As one of the world's foremost AI chip producers, Nvidia has found itself amidst significant market volatility, having seen its stock price drop by 2.84% on February 3, closing at $116.66—a total decline of 23.8% from its January highs. This sharp downturn has erased approximately $893 billion in total market value, a staggering figure by any measure.

Notably, analysts suggest that Nvidia's recent troubles could be closely linked to the advent of DeepSeek's newly released open-source model, R1. The rising star in the AI space has garnered significant recognition, drawing labels such as "the Nvidia bear" and "AI's Sputnik moment," further complicating Nvidia's dominance in the AI arena.

DeepSeek has made headlines predominantly for its cost and efficiency breakthroughs.

Publicly available information suggests that its V3 open-source foundational model, launched in late 2024, measures up against GPT-4o while having been trained on just 2,048 Nvidia H800 GPUs, at a cost of approximately $5.576 million. In stark contrast, Meta's Llama 3.1 was trained on 16,384 Nvidia H100 GPUs, and the training cost of GPT-4o is pegged at around $100 million, suggesting a significant divergence in operational expenses.

Historically, training sophisticated models has required thousands of GPUs, yet DeepSeek's R1 is poised to challenge this norm. Its training costs remain unverified, but its API pricing undercuts OpenAI's significantly, charging 1–4 RMB per million input tokens and 16 RMB per million output tokens. This rapid emergence of competitive offerings has introduced unforeseen challenges to the dominant position OpenAI established over the preceding two years.
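To make the pricing concrete, the per-million-token rates cited above can be turned into a simple cost estimate. This is an illustrative sketch using only the figures quoted in this article (1–4 RMB per million input tokens, 16 RMB per million output tokens); the function name and example workload are hypothetical, and actual billing may differ from these assumptions.

```python
# Per-million-token rates as cited in the article (RMB).
RMB_PER_M_INPUT_LOW = 1.0    # low end of the quoted input range
RMB_PER_M_INPUT_HIGH = 4.0   # high end of the quoted input range
RMB_PER_M_OUTPUT = 16.0

def estimate_cost_rmb(input_tokens: int, output_tokens: int,
                      input_rate: float = RMB_PER_M_INPUT_HIGH) -> float:
    """Estimate API cost in RMB for a workload, given per-million-token rates."""
    return (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * RMB_PER_M_OUTPUT

# Hypothetical workload: 10M input tokens at the high input rate, 2M output tokens.
cost = estimate_cost_rmb(10_000_000, 2_000_000)
print(f"{cost:.1f} RMB")  # 10*4 + 2*16 = 72.0 RMB
```

At these rates, even a multi-million-token workload costs tens of RMB, which is the scale of undercutting the article refers to.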

Following the latest announcement of DeepSeek's Janus-Pro model, OpenAI CEO Sam Altman publicly acknowledged the impressive capabilities of DeepSeek's R1, fueling speculation about forthcoming competitive models from his own organization.

In a social media post, Altman remarked on the model's performance in light of its affordability and the overall excitement generated by the arrival of new competitors in the market.

In drawing a comparative analysis of large models, it's clear that the power of open source is gaining traction in the AI domain. By 2024, DeepSeek had established itself as a formidable player through its high-performance V2 open-source model, compelling Chinese companies to follow suit with price reductions. By 2025, this competitive fervor is anticipated to escalate on a global scale.

Greater concern now looms over how DeepSeek's success could diminish the projected demand for AI chips from established players like Nvidia. DeepSeek, thriving on lower-cost computing solutions, stands poised to disrupt the status quo without relying on cutting-edge hardware, raising profound questions about the necessity of substantial capital investments in computing resources.

This landscape shift prompts a deeper inquiry into what once-high computing costs imply, as the structure and significance of processing power undergo transformation.

In an era increasingly defined by inference, the architectural balance of heterogeneous computing is likely to evolve, impacting the market valuation of not only Nvidia but also its competitors, including Broadcom and AMD.

The turbulence surrounding Nvidia's stock price reflects a confluence of factors—overall market corrections, decreasing profit acceleration, concerns over lofty valuations, challenges with the GB200 deliveries, and climbing competition, all exacerbated by antitrust scrutiny from various jurisdictions.

Nonetheless, Nvidia's co-founder and CEO Jensen Huang maintains an optimistic outlook, reaffirming robust demand for products built on its Blackwell platform, alongside projected growth in data centers. As the quarterly report slated for late February draws near, Nvidia is expected to address numerous issues and clarify its strategic vision, albeit in a landscape still prone to short-term pressures.

The core essence remains that while challenges abound, the value of computing power must not be underestimated.

Significant developments in AI applications will continue to necessitate solid computational backing. In the ongoing competition among American tech giants, initiatives like the recently announced "Stargate" AI infrastructure project—a collaboration between companies such as OpenAI, SoftBank, and Oracle—represent concerted efforts to construct vast data centers over the next four years, with ambitions of raising $500 billion in financing.

Microsoft has earmarked an investment of $80 billion in AI infrastructure by 2025, while Mark Zuckerberg recently indicated that Meta plans to channel between $60 billion and $65 billion in capital expenditures toward its AI strategy over the same timeframe.

Looking forward, the challenge of optimizing computing utilization and cultivating more sophisticated computational networks will be the next key hurdle for stakeholders in the sector.