Interview with Jensen Huang

founder & CEO of NVIDIA

by Bg2 Pod, 2025-09-25


In a captivating discussion, Jensen Huang, the visionary CEO of Nvidia, offered a rare glimpse into the future of computing, AI, and even global economics. Sitting down with Bill Gurley and Brad Gerstner of Bg2 Pod, Huang articulated with striking clarity how an overlooked aspect of artificial intelligence is poised to redefine industries and unleash unprecedented growth, challenging conventional Wall Street wisdom along the way.

The Billion-x Boom: Redefining AI's Computational Demands

A year ago, Jensen Huang boldly predicted that AI inference wouldn't just grow 100x or 1,000x, but a staggering billion-x. Revisiting this projection, he admits, "I underestimated. Let me just go on record." His confidence has only soared, fueled by the rapid evolution of AI. What was once seen as a singular "pre-training" scaling law has fractured into three distinct, exponentially growing forces: pre-training, post-training (where AI "practices" skills), and crucially, "thinking" inference.

This "thinking" inference is the game-changer. Unlike the old one-shot inference model, modern AI is now designed to "think before you answer," performing research, checking ground truths, and iterating. This complex cognitive process demands exponentially more compute. As Huang puts it, "AI is no longer a language model and AI is a system of language models and they're all running concurrently maybe using tools... and it's all multimodality." This profound shift means the computational appetite of AI is not just large, but insatiable and rapidly expanding, far beyond what many currently grasp.

Key Insights:

  • AI inference is experiencing a "billion-x" computational increase due to the advent of "thinking" and chain-of-reasoning capabilities.
  • The AI landscape is now governed by three distinct scaling laws: pre-training, post-training (AI practicing), and complex inference.
  • Multi-agent systems, multimodal AI, and extensive tool use dramatically escalate compute requirements beyond simple language models.

OpenAI: The Next Trillion-Dollar Hyperscaler and Nvidia's Strategic Gambit

Central to Huang's vision is Nvidia's strategic partnership with OpenAI, which includes an investment and support for building their own colossal AI infrastructure. He sees OpenAI not just as a customer, but as "likely going to be the next multi-trillion dollar hyperscale company." This bold prediction underpins Nvidia's decision to invest ahead of OpenAI's expected meteoric rise, an opportunity Huang calls "some of the smartest investments we can possibly imagine."

This partnership also signals a significant shift in the AI landscape. OpenAI, traditionally leveraging hyperscalers like Microsoft Azure, is now building its own "self-build AI infrastructure" – effectively becoming a hyperscaler itself. This mirrors the direct relationships Nvidia has with giants like Meta and Google, where they work directly at the chip, software, and system levels. OpenAI is navigating "two exponentials" simultaneously: an exponential growth in customer usage and an exponential increase in computational requirements per use case (due to "thinking" AI). Nvidia's multi-pronged support across Azure, OCI, CoreWeave, and now OpenAI's direct buildout is designed to meet this compounding demand, further solidifying Nvidia's indispensable role.

Key Decisions:

  • Nvidia’s investment in OpenAI is a strategic move, betting on its potential to become a multi-trillion dollar hyperscale entity.
  • Supporting OpenAI in self-building its AI infrastructure fosters direct, full-stack relationships, mirroring Nvidia's engagements with other tech giants.
  • The partnership addresses the compounded challenge of exponentially growing customer adoption and per-user computational demand in AI.

The Unassailable Moat: Extreme Co-Design and Annual Velocity

Wall Street analysts currently forecast Nvidia's growth flatlining around 2027-2030, a perspective Huang finds inconsistent with the underlying shifts. He presents three fundamental points: Firstly, "general purpose computing is over," and the world's trillions of dollars of computing infrastructure must be refreshed with accelerated AI computing. Secondly, existing hyperscale workloads (search, recommender engines) are rapidly migrating from CPUs to GPUs, a "hundreds of billions of dollars" shift. Lastly, and most profoundly, AI will augment human intelligence, impacting 50-65% of global GDP.

To meet the "exponential of exponentials" driven by token generation and customer use, Nvidia has adopted an aggressive annual release cycle for its architectures (Hopper, Blackwell, Rubin, Feynman). With Moore's Law for performance largely dead, Nvidia's competitive edge comes from "extreme co-design." This isn't just about faster chips; it's about optimizing the model, algorithm, system, and chip simultaneously, innovating "outside the box." As Huang explains, this full-stack approach—encompassing CPUs, GPUs, networking chips, NVLink, and Spectrum-X Ethernet—allows Nvidia to achieve performance gains of 30x between generations (like Hopper to Blackwell) that no conventional silicon progress could deliver. This systemic innovation, combined with the sheer scale of investment required from both Nvidia and its customers, creates a formidable moat that is "harder than ever before" for competitors to replicate.

Key Practices:

  • Nvidia maintains an aggressive annual release cycle for its architectures to keep pace with exponential increases in token generation and AI usage.
  • "Extreme co-design" involves simultaneous optimization across the entire AI factory stack: models, algorithms, systems, chips, networking, and software.
  • The company has moved beyond individual chip innovation to building integrated, full-stack AI systems that deliver unprecedented performance gains.
  • The scale of customer deployments (e.g., a gigawatt requiring 500,000 GPUs) and Nvidia's supply chain capacity creates a massive barrier to entry.

Augmenting Humanity: The Trillion-Dollar Economic Shift

The real long-term impact of AI, Huang contends, lies in its ability to augment human intelligence. Drawing an analogy, he posits that just as motors replaced physical labor, "these AI supercomputers, these AI factories that I talk about, they're going to generate tokens to augment human intelligence." With human intelligence accounting for potentially $50 trillion of global GDP, even a modest augmentation—like a $10,000 AI making a $100,000 employee twice as productive—creates an enormous new market.

This augmentation, he believes, could add $10 trillion to global GDP, requiring roughly $5 trillion in annual capital expenditure on AI infrastructure. This isn't about an "air pocket" or "glut"; it's a foundational shift. He dismisses concerns about oversupply, stating, "until we fully convert all general purpose computing to accelerated computing and AI... the chances [of a glut] are extremely low." The demand signals from customers consistently underestimate the actual need, keeping Nvidia in a constant "scramble mode." This "renaissance for the energy industry" and the entire infrastructure ecosystem signifies a global acceleration of GDP, driven by billions of new AI "co-workers."

Key Insights:

  • AI's primary economic impact will be augmenting human intelligence, leading to an acceleration of global GDP growth.
  • The transition from general-purpose to accelerated/AI computing, coupled with the migration of existing hyperscale workloads to AI, ensures continuous demand.
  • Nvidia's supply chain is demand-driven, consistently responding to customer forecasts that routinely understate actual compute needs.

"My only regret is that they invited us to invest early on... and we were so poor, you know, that we didn't invest enough, and I should have given them all my money." - Jensen Huang