Wall Street braces for Nvidia’s most-watched earnings as the AI buildout faces its next test

[Note: Nvidia reports after the bell on Wednesday, August 27, 2025. This article previews the event using company guidance, prior results, and current street expectations.]
Nvidia (NASDAQ: NVDA) heads into its fiscal second-quarter report with the weight of the AI boom squarely on its shoulders. The chipmaker’s results and guidance will ripple through the entire technology stack—from hyperscale cloud operators and server makers to memory vendors and advanced packaging foundries—because Nvidia remains the central supplier to the world’s AI factories.
Options markets are braced for a big swing. Implied moves of roughly 6.5%–7.5% suggest that some $285–$330 billion in market value could change hands within hours of the print. At roughly $4.4 trillion in market capitalization heading into the week, Nvidia’s guidance on the Blackwell ramp, gross-margin trajectory, and hyperscaler demand will set the tone for the remainder of 2025’s AI trade.
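That dollar figure is simply the implied-move percentage applied to the market cap. A minimal sketch of the arithmetic, using only the round numbers quoted above (illustrative; live options pricing will differ):

```python
# Back-of-the-envelope arithmetic behind an options-implied earnings swing.
# Both inputs are the round figures quoted in the article, not live data.
market_cap = 4.4e12                  # ~$4.4 trillion market capitalization
move_low, move_high = 0.065, 0.075   # implied earnings move of ~6.5%-7.5%

# Dollar value at stake = implied percentage move times market capitalization.
dollar_swing_low = market_cap * move_low
dollar_swing_high = market_cap * move_high

print(f"Implied one-day swing: ${dollar_swing_low / 1e9:.0f}B-"
      f"${dollar_swing_high / 1e9:.0f}B")
# → Implied one-day swing: $286B-$330B
```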
What’s happening and why it matters
Nvidia is scheduled to report Q2 FY2026 (quarter ended July 27, 2025) after the close on Wednesday, with an earnings call at 5:00 p.m. ET. Management guided to approximately $45.0 billion in revenue, plus or minus 2%, with GAAP and non-GAAP gross margins of about 71.8% and 72.0% (±50 bps). Street estimates are clustered around $46–$46.5 billion in revenue and roughly $1.00–$1.02 in non-GAAP EPS, with some upside scenarios approaching $48 billion and $1.06.
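The guided bands translate into concrete bounds, and the gap between guidance and consensus defines the implied beat. A quick sketch of that arithmetic, using only the figures quoted above:

```python
# Translating Nvidia's stated guidance bands into dollar bounds.
# All inputs come straight from the article; the arithmetic is illustrative.
guide_mid = 45.0e9                   # ~$45.0B revenue guide midpoint
guide_band = 0.02                    # plus or minus 2%
rev_low = guide_mid * (1 - guide_band)
rev_high = guide_mid * (1 + guide_band)
print(f"Revenue guide range: ${rev_low / 1e9:.1f}B-${rev_high / 1e9:.1f}B")

gm_mid = 0.720                       # ~72.0% non-GAAP gross margin
gm_band = 0.005                      # plus or minus 50 basis points
print(f"Non-GAAP gross margin band: {gm_mid - gm_band:.1%}-{gm_mid + gm_band:.1%}")

# Street consensus near $46B implies a beat over the guide midpoint.
consensus = 46.0e9
print(f"Implied beat vs. midpoint: {consensus / guide_mid - 1:.1%}")
```

Running this shows the guide spans roughly $44.1B–$45.9B, so even the low end of consensus sits above the top of Nvidia’s own range.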
The stakes are high because this quarter should show whether Nvidia’s next-gen Blackwell platform is moving from launch to sustained volume. Data Center is expected to contribute the lion’s share—consensus near $41 billion—while investors look for confirmation that supply bottlenecks have eased and that networking, NVLink systems, and Spectrum-X Ethernet are scaling in tandem. After a $4.5 billion Q1 charge tied to export licensing on China-oriented H20 inventory, margins are expected to return toward the low 70s, with management targeting mid-70s later this year.
The lead-up has been extraordinary. In Q1 FY2026, Nvidia posted $44.1 billion in revenue (up 69% year over year) and $39.1 billion from Data Center alone (up 73% y/y). Across the last three reported quarters, Nvidia has reset records on revenue and operating income as hyperscaler AI capex re-accelerated. CEO Jensen Huang has framed Blackwell’s NVL72 “thinking machine” as a rack-scale supercomputer for reasoning, while CFO Colette Kress has emphasized a fast ramp and improving gross margins as supply catches up with demand.
Policy risk remains a subplot. U.S. export controls on advanced AI chips triggered the Q1 charge and reduced near-term sales to China (historically around 13% of revenue). In August, multiple outlets reported an unusual U.S. licensing arrangement allowing sales of certain AI accelerators into China in exchange for a 15% revenue share with the U.S. Treasury. Even if the mechanism is in flux—and even as Beijing agencies reportedly balk at H20-class parts—the possibility of resumed, compliant China shipments could affect second-half guidance and inventory planning.
On the Street, analysts have been busy marking up targets—many in the $200–$225 range on a split-adjusted basis—while flagging one big unknown: can growth be sustained at its torrid 2023–2024 pace as the mix shifts from training to inference and as customers deploy more customized silicon alongside Nvidia?
Who wins and who loses from here
Winners are likely to track Nvidia’s most critical inputs and adjacent platforms:
- Taiwan Semiconductor Manufacturing Co. (NYSE: TSM): Nvidia’s principal foundry and CoWoS advanced-packaging partner, with plans to more than double CoWoS capacity into 2026. Reports suggest Nvidia could command a majority share of global CoWoS demand over the next 18 months.
- SK Hynix (KRX: 000660) and Micron Technology (NASDAQ: MU): Key suppliers of HBM3/HBM3E, with 2025 output effectively sold out and ramping into 2026. Micron has begun volume shipments of HBM3E qualified for Blackwell Ultra.
- ASML (NASDAQ: ASML): The sole EUV tool provider benefits from the industry’s rush to advanced nodes needed for Nvidia-class GPUs and networking silicon.
- Broadcom (NASDAQ: AVGO) and Marvell Technology (NASDAQ: MRVL): Beneficiaries of the AI custom-silicon wave and advanced packaging capacity; both tied to hyperscaler ASIC programs and high-speed networking.
- Oracle (NYSE: ORCL): A marquee buyer and operator of Nvidia GB200 NVL72 systems, with reports of a roughly $40 billion multi-year Blackwell purchase tied to OpenAI’s Abilene, Texas “Stargate” buildout and broader OCI Supercluster deployments.
- Arista Networks (NYSE: ANET), Astera Labs (NASDAQ: ALAB), and Credo Technology (NASDAQ: CRDO): Networking and connectivity names leveraged to AI data center scale-out alongside Nvidia’s own Spectrum-X.
- Dell Technologies (NYSE: DELL) and Hewlett Packard Enterprise (NYSE: HPE): System integrators positioned to capture AI server demand as enterprises and sovereign AI projects standardize on Nvidia reference architectures.
Potential laggards or pressured names include:
- Super Micro Computer (NASDAQ: SMCI): Once a high-beta beneficiary of Nvidia GPU servers, it has contended with shifting order allocations and heightened scrutiny, underscoring concentration and execution risks in the AI server supply chain.
- Samsung Electronics (KRX: 005930): Still vying to win incremental Nvidia HBM3E share amid reports that SK Hynix and Micron hold the near-term lead on qualification and yields.
- Foxconn (TWSE: 2317): Facing uncertainty around China-oriented accelerator builds as licensing and local policy signals ebb and flow.
- Advanced Micro Devices (NASDAQ: AMD) and Intel (NASDAQ: INTC): The read-through is nuanced. A strong Nvidia guide reinforces the scale of AI spend (a rising tide), but continued Nvidia dominance in the highest-value accelerators can make it harder for rivals to dislodge incumbency at top hyperscalers.
What the earnings mean for the AI industry’s trajectory
Nvidia’s print is a proxy for the health of the entire AI infrastructure cycle. Hyperscaler capex is tracking toward an unprecedented $315–$365 billion in 2025, with Wall Street models pointing to further growth into 2026. Company-by-company: Microsoft (NASDAQ: MSFT) is on pace to spend around $80 billion on data centers this year; Amazon (NASDAQ: AMZN) could exceed $100 billion; Alphabet (NASDAQ: GOOGL) lifted its 2025 outlook to roughly $85 billion; and Meta Platforms (NASDAQ: META) guided to $66–$72 billion. Much of this is tied to Nvidia GPUs, systems, and networking—even as each hyperscaler advances its own custom accelerators.
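Summing just the four company figures cited above is a useful sanity check on the industry-wide range. A minimal sketch, using the article’s numbers and the midpoint of Meta’s guide:

```python
# Sanity check: do the four hyperscaler capex figures cited in the article
# land inside the $315B-$365B industry range it quotes? (Illustrative only;
# the industry total also includes other spenders.)
capex_2025 = {
    "Microsoft": 80e9,               # ~$80B on data centers this year
    "Amazon":    100e9,              # could exceed $100B
    "Alphabet":  85e9,               # 2025 outlook lifted to roughly $85B
    "Meta":      (66e9 + 72e9) / 2,  # midpoint of the $66B-$72B guide
}
total = sum(capex_2025.values())
print(f"Four-company total: ${total / 1e9:.0f}B")
# → Four-company total: $334B
```

The four named companies alone account for roughly $334 billion, squarely within the quoted range before counting any other spenders.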
The mix is shifting. Training clusters remain large, but inference demand—driven by a 10x jump in token generation over the past year and the emergence of AI agents—is expanding the total addressable market. That puts a spotlight on Nvidia’s networking business (NVLink, NVSwitch, Spectrum-X, ConnectX-8 SuperNICs), which many analysts see as an underappreciated growth engine.
Regulation threads through the outlook. The reported U.S. export-license revenue-share arrangement for China-bound accelerators, the evolving thresholds that define “advanced” AI chips, and Beijing’s stance toward imported AI hardware all inject policy volatility into Nvidia’s 2025–2026 plan. Historically, export rules trimmed China to around the low teens as a share of revenue; any re-entry via compliant Blackwell derivatives (e.g., rumored B30A/B20A and RTX Pro variants) could change the trajectory—albeit with margins adjusted for licensing.
Historically, this phase resembles other compute buildouts—think the Wintel server era or the early hyperscale cloud land grab—yet the velocity is different. Nvidia’s last three quarters saw revenue step-ups measured in tens of billions, a scale without precedent in semis. The question now is whether Blackwell’s ramp can bridge seamlessly to Rubin in 2026, keeping unit economics and software lock-in (CUDA, NVIDIA AI Enterprise) intact even as alternatives proliferate.
What comes next
Guidance is everything. Some on the Street expect a Q3 FY2026 revenue guide near $55 billion with EPS around $1.25 as Blackwell shipments increase and networking mix rises. Investors will parse commentary on:
- Blackwell Ultra timing in 2H25 and its contribution split between training and inference.
- Supply chain capacity: TSMC CoWoS-L throughput, substrate availability, and the cadence of HBM3E adds from SK Hynix and Micron.
- Hyperscaler ordering patterns into 2026 and the extent to which sovereign AI projects backfill any digestion periods.
- China-compliant Blackwell derivatives, licensing mechanics, and any margin headwinds from the reported 15% revenue-share regime.
Beyond Blackwell, Nvidia has telegraphed Rubin for 2026 (and Rubin Ultra for 2027), with HBM4 and more ambitious multi-reticle packaging. That roadmap matters for customer commitments locking in 12–24 months of capacity—and for competitors plotting MI355/MI400-class responses. Watch also the software layer: enterprise AI toolchains and inference optimizations that could deepen Nvidia’s moat.
The bottom line
Nvidia’s earnings remain the single most consequential data point for the AI economy. Key takeaways heading in: a still-supply-constrained Blackwell ramp, hyperscaler capex that continues to climb, and gross margins normalizing after the Q1 export-control charge. The magnitude and shape of guidance—especially around networking mix and China re-entry mechanics—will determine whether 2025 sustains its breakneck pace or enters a digestion phase.
For investors, the checklist is clear. Track order backlogs, lead times, and cancellation rates; monitor hyperscaler capex updates; watch HBM and CoWoS capacity additions; and listen for signals on inference monetization. Also keep an eye on policy: export-license terms, thresholds that define “advanced” compute, and any feedback from Beijing on compliant Blackwell parts.
However the print lands, Nvidia’s role at the center of AI infrastructure looks intact. The open question is how quickly the ecosystem scales around it—and who captures the next dollar of AI spend as training yields to inference, racks give way to data-center-scale systems, and software becomes as decisive as silicon.