Jensen Huang: NVIDIA - The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494

Lex Fridman | 02:25:58 | Mar 27, 2026
Jensen Huang discusses NVIDIA’s central role in the AI revolution and frames this interview as an exploration of the company’s history, leadership, and vision for the future.

Jensen Huang explains NVIDIA’s rise as a full-stack AI factory, the power of extreme co-design, and how CUDA’s install base and ecosystem are NVIDIA’s ultimate moat.

Summary

In this episode, Lex Fridman speaks with Jensen Huang, CEO of NVIDIA, about how the company has evolved from GPU provider to an end-to-end AI infrastructure platform. Huang argues that extreme co-design across CPU, GPU, memory, networking, and data-center architecture is essential when solving problems that outpace linear scaling. He traces NVIDIA's shift from gaming GPUs to a computing architecture built for AI factories, highlighting CUDA's foundational role and how CUDA on GeForce expanded the install base and unlocked the deep-learning revolution. Huang shares his leadership philosophy: shaping belief inside the company long before big announcements, and leveraging a staff that covers memory, optics, and hardware to attack problems collectively rather than in silos. The conversation also delves into scaling laws (pre-training, post-training, test-time, and agentic scaling), the importance of power efficiency, and how agentic systems and OpenClaw could redefine how AI tools work in the real world. Huang reflects on supply chains, space-grade computing, and the future of "token factories" powered by AI, stressing that the real value lies in orchestrating people, processes, and hardware at planetary scale. He also touches on human considerations: education, workforce transformation, and the ethical responsibilities that come with unprecedented technological leverage. The episode closes with a vision of a future where AI amplifies human creativity and where NVIDIA remains at the center of a global AI ecosystem through openness, collaboration, and relentless engineering.

Key Takeaways

  • CUDA’s placement on GeForce expanded NVIDIA’s install base, enabling developers worldwide to adopt GPU-accelerated computing long before cloud computing existed.
  • Extreme co-design optimizes entire stacks (software, chips, memory, networking, power, cooling, and data-center racks) to achieve speeds beyond Moore’s Law.
  • Agentic scaling predicts AI systems spawning sub-agents to execute tasks, creating a feedback loop where pre-training, post-training, and test-time data continually improve.
  • ‘Install base plus velocity’ is NVIDIA’s moat: thousands of developers and hundreds of partners rely on CUDA, making NVIDIA the default platform for AI software.
  • The future of AI infrastructure envisions token factories and pervasive AI workers that augment every profession, with human leadership guiding responsible deployment.
  • Power efficiency (tokens per second per watt) and supply-chain resilience are critical to scaling AI factories to planetary scale.

Who Is This For?

Essential viewing for AI architects, data-center engineers, and developers who want to understand how NVIDIA’s end-to-end AI platforms are built and why CUDA remains a durable moat. Also valuable for leaders exploring extreme co-design and supply-chain strategy in AI infrastructure.

Notable Quotes

""Install base is everything.""
Huang emphasizes install base as the single most important factor for architectural success.
""CUDA on GeForce... put CUDA everywhere; it started the deep learning revolution.""
Explains the strategic decision to expose CUDA to consumer GPUs to grow adoption.
""The install base of CUDA is our moat.""
Summarizes why CUDA and ecosystem matter most for NVIDIA's competitive edge.
""Extreme co-design is necessary because the problem no longer fits inside one computer to be accelerated by one GPU.""
Defines the core approach to building AI infrastructure at scale.
"" token factories" and OpenClaw are the iPhone of tokens in the AI era."
Huang envisions agentic AI as scalable, monetizable tokens in the AI economy.

Questions This Video Answers

  • How did CUDA on GeForce ignite NVIDIA's role in the AI revolution?
  • Why is install base so critical for AI platform ecosystems?
  • What is extreme co-design and how does it differ from traditional system design?
  • How will agentic scaling change the future of AI workloads and hardware requirements?
  • What are token factories in NVIDIA's vision for AI infrastructure?
Tags: Jensen Huang, NVIDIA, CUDA, CUDA on GeForce, Extreme co-design, AI factories, OpenClaw, Agentic scaling, Moore's Law slowdown, Grok/GroK AI initiative
Full Transcript
- The following is a conversation with Jensen Huang, CEO of NVIDIA, one of the most important and influential companies in the history of human civilization. NVIDIA is the engine powering the AI revolution, and a lot of its success can be directly attributed to Jensen's sheer force of will and his many brilliant bets and decisions as a leader, engineer, and innovator. This is the Lex Fridman Podcast. And now, dear friends, here's Jensen Huang. You've propelled NVIDIA into a new era in AI, moving beyond its focus on chip-scale design to now rack-scale design. And I think it's fair to say that winning for NVIDIA for a long time used to be about building the best GPU possible, and you still do, but now you've expanded that to extreme co-design of GPU, CPU, memory, networking, storage, power, cooling, software, the rack itself, the pod that you've announced, and even the data center. So let's talk about extreme co-design. What is the hardest part of co-designing a system with that many complex components and design variables? - Yeah, thanks for that question. So first of all, the reason why extreme co-design is necessary is because the problem no longer fits inside one computer to be accelerated by one GPU. The problem that you're trying to solve is you would like to go faster than the number of computers that you add. So you added, you know, 10,000 computers, but you would like it to go a million times faster. Then all of a sudden you have to take the algorithm, you have to break up the algorithm, you have to refactor it, you have to shard the pipeline, you have to shard the data, you have to shard the model. Now all of a sudden, when you distribute the problem this way, not just scaling up the problem but distributing the problem, then everything gets in the way. This is the Amdahl's Law problem, where the amount of speedup you get for something depends on how much of the total workload it is. And so if computation represents 50% of the problem, and I sped up computation infinitely, like a million times, you know, I only sped up the total workload by a factor of two. Now all of a sudden, not only do you have to distribute the computation, you have to, you know, shard the pipeline somehow. You also have to solve the networking problem, because all of these computers are connected together. And so in distributed computing at the scale that we do it, the CPU is a problem, the GPU is a problem, the networking is a problem, the switching is a problem. And distributing the workload across all these computers is a problem. It's just a massively complex computer science problem. And so we just gotta bring every technology to bear. Otherwise, we scale up linearly, or we scale up based on the capabilities of Moore's Law, which has largely slowed because Dennard scaling has slowed. - I'm sure there's trade-offs there. Plus you have completely disparate disciplines here. I'm sure you have specialists in each one of these: high-bandwidth memory, the network and the NVLink, the NICs, the optics and the copper that you're doing, the power delivery, the cooling, all of that. I mean, there's, like, world experts in each of those. How do you get 'em in a room together to figure out- - That's why my staff is so large. - What's the process? Can you take me through the process of the specialists and the generalists? Like, how do you put together the rack when you know the set of things you have to shove into a rack together? Like, what does that process look like, designing it all together? - Yeah.
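The arithmetic Huang walks through here is Amdahl's Law. A quick illustrative sketch (Python; the 50% figure is his, everything else is made up for illustration):

```python
# Amdahl's Law: overall speedup when only a fraction of the work is accelerated.
def amdahl_speedup(accelerated_fraction: float, acceleration: float) -> float:
    serial_fraction = 1.0 - accelerated_fraction
    return 1.0 / (serial_fraction + accelerated_fraction / acceleration)

# Huang's example: computation is 50% of the workload. Even a million-fold
# speedup on that half only roughly doubles the whole job; the other half dominates.
print(amdahl_speedup(0.5, 1_000_000))  # ~2.0
```

This is why he insists on sharding the pipeline, the data, and the model: any component left serial, from networking to switching, caps the speedup of the entire system.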
There's the first question, which is: what is extreme co-design? We're optimizing across the entire stack, from architectures to chips, to systems, to system software, to the algorithms, to the applications. That's one layer. The second thing that you and I just talked about is it goes beyond CPUs and GPUs and networking chips and scale-up switches and scale-out switches. And then of course, you gotta include power and cooling and all of that because, you know, all these computers are extremely, extremely power hungry. They do a lot of work and they're very energy efficient, but in aggregate they still consume a lot of power. And so that's one. The first question is, what is it? The second question is, why is it? And we just spoke about the reason: you know, you want to distribute the workload so that you can exceed the benefit of just increasing the number of computers. And then the third question is, how is it? How do you do it? And that's kind of the miracle of this company. You know, when you're designing a computer, you have to have an operating system. When you're designing a company, you should first think about what it is that you want the company to produce. You know, I see a lot of companies' organization charts, and they all look the same. Hamburger organization charts, software organization charts, and car company organization charts. They all look the same. And it doesn't make any sense to me. You know, the goal of a company is to be the machinery, the mechanism, the system that produces the output. And that output is the product that we like to create. The architecture of the company should also reflect the environment in which it exists. It almost indirectly says what you should do with the organization. My direct staff is 60 people. You know, I don't have one-on-ones with 'em because it's impossible. You can't have 60 people on your staff if you're, you know, gonna get work done and- - So you still have 60 reports. You still have across- - More, yeah. - More. And most of the staff at least have a foot in engineering. - Almost all of them. There's experts in memory, there's experts in CPUs, there's experts in optical. All, all- - That's incredible. - Yeah, GPUs and architecture, algorithms, design, um- - So you constantly have an eye on the entire stack, and you're having, like, intense discussions about the design of the entire stack? - And no conversation is ever one person. That's why I don't do one-on-ones. We present a problem and all of us attack it. You know, because we're doing extreme co-design. And literally, the company is doing extreme co-design all the time. - So even if you're talking about a particular component, like cooling, networking, everybody's listening in? - Yeah, exactly. - And they can contribute: "Well, this doesn't work for the power distribution. This doesn't-" - Exactly. - "... This doesn't work for the memory. This doesn't work for this." - Exactly. And whoever wants to tune out, tune out. You know what I'm saying? And the reason for that is because the people who are on the staff, they know when to pay attention. Suppose, you know, it's something they could have contributed to and they didn't contribute to: I'm going to call them out. You know? And so, "Hey, come on, let's get in here." - So, as you mentioned, NVIDIA is this company that's adapting to the environment.
So, at which point can you say the environment changed and you began adapting, sort of secretly, in the early days, from GPU for gaming, maybe the early deep learning revolution, to: we're now going to start thinking of it as an AI factory? What does NVIDIA do? It produces AI. Let's build a factory that makes AI. - I could reason through it systematically. We started out as an accelerator company. But the problem with accelerators is that the application domain's too narrow. It has the benefit of being incredibly optimized for the job. You know, any specialist has that benefit. The problem with intense specialization is that, of course, your market reach is narrower, but that's even fine. The problem is, the market size also dictates your R&D capacity. And your R&D capacity ultimately dictates the influence and impact that you can possibly have in computing. And so, when we first started out as an accelerator, a very specific accelerator, we always knew that was going to be our first step. We had to find a way to become accelerated computing. But the problem is, when you become a computing company, it's too general purpose and it takes away from your specialization. I connected two words that actually have fundamental tension. The better computing company we become, the worse we became as a specialist. The more of a specialist, the less capacity we have to do overall computing. And I connected those two words together on purpose: the company has to find that really narrow path, step by step by step, to expand our aperture of computing, but not give up on the most important specialization that we had. Okay, so the first step that we took beyond acceleration was, we invented a programmable pixel shader. So that was the first step towards programmability. You know, it was our first journey towards moving into the world of computing. The second thing that we did was we put FP32 into our shaders. That FP32 step, IEEE-compatible FP32, was a huge step in the direction of computing. It was the reason why all of the people who were working on stream processors and, you know, other types of data flow processors discovered us. And they said, "Hey, all of a sudden, you know, we might be able to use this GPU that's incredibly computationally intensive, and it's now, you know, compliant with IEEE. I can take my software that I was writing, you know, previously on CPUs, and I can, you know, see about using the GPU for that." Which led us to put C on top of FP32, what we call Cg. The Cg path took us eventually to CUDA. CUDA, step by step by step. Well, putting CUDA on GeForce, that was a strategic decision that was very, very hard to do, because it cost the company enormous amounts of our profits, and we couldn't afford it at the time. But we did it anyways because we wanted to be a computing company. A computing company has a computing architecture. A computing architecture has to be compatible across all of the chips that we build. - Can you take me through that decision? So, putting CUDA on GeForce, which you could not afford to do. Why boldly choose to do that anyway? Can you explain that decision? - Yeah, excellent. I would say that that was the first strategic decision that came as close as anything to an existential threat.
- For people who don't know, it turned out to be, spoiler alert, one of the most incredibly brilliant decisions ever made by a company. So CUDA turned out to be an incredible foundation for computation in this AI infrastructure world. So- - Thank you. - ... just setting the context. It turned out to be a good decision. - Yeah, it turned out to have been a good decision. So here's the way it went. We invented this thing called CUDA, and it expanded the aperture of applications that we can accelerate with our accelerator. The question is, how do we attract developers to CUDA? Because a computing platform is all about developers. And developers don't come to a computing platform just because, you know, it could perform something interesting. They come to a computing platform because the install base is large. Because a developer, like anybody else, wants to develop software that reaches a lot of people. So the install base is, in fact, the single most important part of an architecture. The architecture could attract enormous amounts of criticism. For example, no architecture has ever attracted more criticism than the x86, you know, as a less-than-elegant architecture, and yet it is the defining architecture of today. That gives you an example that, in fact, so many RISC architectures, which were beautifully architected, incredibly well-designed by some of the brightest computer scientists in the world, largely failed. And so I've given you two examples where, you know, one is elegant, the other one's barely aesthetic, and yet x86 survived, and the reason for- - Install base is everything. - Install base defines an architecture. Everything else is secondary, okay? And so there were other architectures at the time CUDA came out. OpenCL was here. There were, you know, several other competing architectures. But the decision that we made that was good was we said, "Hey, look, ultimately it's about install base, and what is the best way we could get a new computing architecture into the world?" By that timeframe, GeForce had become successful. We were already selling millions and millions of GeForce GPUs a year, and we said, "You know, we ought to put CUDA on GeForce and put it into every single PC, whether customers use it or not, and use it as a starting point of cultivating our install base." Meanwhile, we'll go and attract developers, and we went to universities and wrote books and taught classes and put CUDA everywhere, and eventually people discover it. And at the time, the PC was the primary computing vehicle. There was no cloud, and we could put a supercomputer in the hands of every researcher in school, every scientist, you know, every engineering school, every student in school, and eventually something amazing will happen. Well, the problem was CUDA increased our cost of that GPU, which is a consumer product, so tremendously, it completely consumed all of the company's gross profit dollars. And so at the time, the company was probably, you know, worth, I don't know... Was it like $8 billion or something? Like six, $7 billion or something like that. After we launched CUDA, I recognized that it was going to add so much cost, but it was something we believed in. You know, our market cap went down to like one and a half billion dollars. And so we were down there for a while, and we clawed our way back slowly, but we carried CUDA on GeForce.
I always say that NVIDIA is the house that GeForce built, because it was GeForce that took CUDA out to everybody. Researchers, scientists, they discovered CUDA on GeForce because, you know, many of 'em were gamers. Many of them built their own PCs anyways. In a university lab, many of them built clusters themselves, you know, using PC components. And so that's kind of how we got going. - And then that became the platform and the foundation for the deep learning revolution. - That was also another great, great observation. Yeah. - That existential moment, do you remember... Like, what were those meetings like? What were those discussions like, deciding as a company, risking everything? - Well, I had to make it clear to the board what we were trying to do, and the management team knew our gross margins were gonna get crushed. So you could imagine a world where GeForce would carry the burden of CUDA and none of the gamers would appreciate it and none of the gamers would pay for it. You know, they only pay a certain price, and it doesn't matter what your cost is. And so, you know, we increased our cost by 50%, and that consumed... And we were a 35% gross margin company, and so it was quite a difficult decision to make. But you could imagine that someday this would go into workstations and it would go into supercomputers, and in those segments, maybe we can capture more margin. So you could reason your way into being able to afford this. But it still took a decade. - But that's more of, like, a conversation with the board, convincing them. But psychologically... NVIDIA's continued to make bold bets that predict the future, and in part, especially now, define the future. So I'm almost looking for wisdom about how you're able to make those decisions, to make leaps like that as a company. - Well, first of all, I'm informed by a lot of curiosity. At some point, there's a reasoning system that convinces me: so clearly this outcome will happen. That this will happen. And so I believe it in my mind, and when I believe it in my mind, you know, you know how it is. You manifest a future, and that future is so convincing, there's no way it won't happen. There's a lot of suffering in between, but you've gotta believe what you believe. - So you envision the future, and you essentially, from a sort of engineering perspective, manifest it? - Yeah. And you reason about how to get there. You reason about why it must exist. And, you know, we all reason about it here. The management team would reason about it. We spend a lot of time reasoning about it. The next part of it is probably a skill thing, which is, you know, oftentimes the leadership stays quiet, or they learn about something, and then they do some manifesto, and it's a brand-new year, and somehow at the end of the year, next year, we're gonna have a brand-new plan. Big huge layoff this way, big huge organization change this way, new mission statement, brand-new logos, you know, that kind of stuff. Um, I never do things that way. When I learn about something and it's starting to influence how I think, I'll make it very clear to everybody near me that, you know, this is interesting. This is going to make a difference. This is going to impact that. And I reason about things step by step by step.
Oftentimes, I've already made up my mind, but I'll take every possible opportunity: external information, new insights, new discoveries, new engineering, you know, revelations, new milestones developed. I'll take those opportunities and I'll use them to shape everybody else's belief system. And I'm doing that literally every single day. I'm doing that with my board, I'm doing that with my management team, I'm doing that with my employees. I'm trying to shape their belief systems such that when the day comes that I say, "Hey, let's buy Mellanox," it's completely obvious to everybody that we absolutely should. On the day that I said, "Hey guys, let's go all in on deep learning, and let me tell you why," I'd already been laying down the bricks to different organizations inside the company. Every organization and everybody, many of the people might have heard everything. Most of the company hears, of course, pieces of it. And on the day that I announce it, everybody's kind of bought in to many pieces of it. And in a lot of ways, I like to announce these things and imagine that the employees are kind of saying, "You know, Jensen, what took you so long?" And in fact, I've been shaping their belief system for some time, and therefore leadership... Sometimes it looks like you're leading from behind, but you've been shaping their beliefs, you know, to the point where on the day that I declared it, 100% buy-in. But that's what you want. You want to bring everybody along. You know, otherwise, we announce something about deep learning and everybody goes, "What are you talking about?" You know, you announce something about, let's go all in on this thing, and your management team, your board, your employees, your customers, they're kind of like, "Where's this coming from? You know, this is insane." And so the GTC effect: if you go back in time and you look at the keynotes, I'm also shaping the belief system of my partners in the industry, and I'm using that to shape, you know, the belief system of my own employees. And so by the time that I announce something, like for example, we just announced Grok, I've been talking about the stepping stones for two and a half years. You just go back and go, "Oh my gosh, they've been talking about it for two and a half years." And so I've been laying the foundation step by step by step, so when the time comes and you announce it, everybody's saying, "You know, what took you so long?" - But it's not just inside the company. You're shaping the landscape, the broader global landscape of innovation. Like, putting those ideas out there, you really are manifesting reality. - We don't build computers. We actually don't build clouds. As it turns out, we're a computing platform company. And so nobody can buy anything from us. That's the weird thing. You know, we vertically design, vertically integrate to design and optimize, but then we open up the entire platform at every single layer to be integrated into other companies' products and services and clouds and supercomputers and OEM computers. And so the amazing thing is, I can't do what I do without having convinced them first. And so most of GTC is about manifesting a future, so that by the time my product is ready, they're going, "What took you so long?" - Yeah. So one of the things you've been a believer in for a long time is scaling laws, broadly defined. So are you still a believer in the scaling laws? - Yeah, yeah. Yeah, we have more scaling laws now.
- So I think you've outlined four of them: pre-training, post-training, test time, and agentic scaling. When you think about the future, the deep future and the near-term future, what are the blockers that you're most concerned about, that keep you up at night, that you have to overcome in order to keep scaling? - Well, we can go back and reflect on what people thought were blockers. So in the beginning, there was the pre-training scaling law. You know, people thought, rightfully so, that the amount of data that we have, the high-quality data that we have, will limit the intelligence that we achieve. And that scaling law was a very important scaling law. The larger the model, with correspondingly more data, the smarter the AI. And so that was pre-training. And Ilya Sutskever, Ilya said, "We're out of data," or something like that. "Pre-training is over," or something like that. The industry panicked, you know, that this is the end of AI. And of course that's obviously not true. We're gonna keep on scaling the amount of data that we have to train with. A lot of that data is probably gonna be synthetic, and that also confused people, you know? And what people don't realize, they've kind of forgotten, is that most of the data that we train with, that we teach each other with, inform each other with, is synthetic. You know, it's synthetic because it didn't come out of nature. You created it. I'm consuming it. I modify it, augment it, I regenerate it, somebody else consumes it. And so we've now reached a level where AI is able to take ground truth, augment it, enhance it, synthetically generate an enormous amount of data. And that part of post-training continues to scale, and so the amount of data that we could use that is human generated will be smaller, and smaller, and smaller. The amount of data that we use to train models is going to continue to scale, to the point where training is no longer limited by data; it's now limited by compute. And the reason for that is most of the data is synthetic. Then the next phase is test time, and I still remember people telling me, "Inference? Oh yeah, that's easy. Pre-training, that's hard. These are giant systems that people are talking about. Inference must be easy. And so inference chips are gonna be little tiny chips, and, you know, they're not like NVIDIA's chips. Oh, those are gonna be complicated and expensive. And in the future, inference is gonna be the biggest market, and it's gonna be easy, and we're gonna commoditize it. You know, everybody can build their own chips." And that was always illogical to me, because inference is thinking, and I think thinking is hard. Thinking is way harder than reading. You know, pre-training is just memorization and generalization, you know, and looking for patterns in relationships. You're reading and reading, versus thinking, reasoning, solving problems, taking unexplored experiences, new experiences, and breaking them down, decomposing them into, you know, solvable pieces that we then go off and solve, either through first-principles reasoning, or, you know, through previous examples, prior experiences, or just, uh, exploration and search and, you know, trying different things. And that whole process of test-time scaling, inference, is really about thinking.
And it's about reasoning, it's about planning, it's about search. And so how could that possibly be compute light? And we were absolutely right about that. You know, so test-time scaling is intensely compute intensive. Then the question is, okay, now we're at inference and we're at test-time scaling, what's beyond that? Well, obviously, we have now created, you know, one agentic person, and that one agentic person has a large language model that we've now, you know, developed. But during test time, that agentic system goes off and does research and bangs on databases, and it goes out and, you know, uses tools, and one of the most important things it does is it spins off and spawns a whole bunch of sub-agents. Which means we're now creating large teams. It's so much easier to scale NVIDIA by hiring more employees than it is to scale myself. And so the next scaling law is the agentic scaling law. It's kind of like multiplying AI. Multiplying AI, we could spin off agents as fast as you want to spin off agents. And so, you know, I have four scaling laws. And as we use the agentic systems, they're gonna create a lot more data, they're gonna create a lot of experiences. Some of it we're gonna say, "Wow, this is really good. We ought to memorize this." That data set then comes all the way back to pre-training. We memorize and generalize it. We then refine it and fine-tune it back into post-training. Then we enhance it even more with test time, you know, and the agentic systems, you know, put it out to the industry. And so this loop, this cycle, is gonna go on and on and on. It kinda comes down to: basically, intelligence is gonna scale by one thing, and that's compute. - But there's a tricky thing there that you have to anticipate and predict, which is that some of these components require a different kind of hardware to really do it optimally. So you have to anticipate where the AI innovation's going to lead. For example, a mixture of- - Perfect. - ... experts with sparsity. - Perfect. - With hardware, you can't just pivot on a week's notice. You have to anticipate what that's going to look like. - So good. - That's so scary and difficult to do, right? - For example, these AI model architectures are being invented about once every six months. Right? And system architectures and hardware architectures, kind of every three years. And so you need to anticipate what is likely going to happen, you know, two, three years from now. And there's a couple ways that you could do that. First of all, we could do research internally ourselves, and that's one of the reasons why we have basic research, we have applied research. We create our own models. And so we have hands-on, live experience right here. This is part of the co-design that I'm talking about. We're also the only AI company in the world that works with literally every AI company in the world. And so to the extent that we can, we try to get a sense of what are the challenges that people are experiencing. - So you're listening to the whispers across the industry, the AI labs. - That's right. You gotta listen and learn from everybody. And then the last part is to have an architecture that's flexible, that can adapt and move with the wind. And one of the benefits of CUDA is that it's, you know, on the one hand, an incredible accelerator. On the other hand, it's really flexible.
And so that balance, that incredible balance between specialization, otherwise we can't accelerate beyond the CPU, versus generalization, so that we can adapt with changing algorithms, that's really, really important. That's the reason why CUDA has been so resilient on the one hand, and yet we continue to enhance it. We're at CUDA 13.2, and we're evolving the architecture so fast that we can stay with, you know, the modern algorithms. For example, when mixtures of experts came out, that's the reason why we had NVLink 72 instead of NVLink 8. We could now take an entire four trillion, 10 trillion parameter model and put it in one computing domain as if it's running on one GPU. Um, people probably didn't notice, I said it, but if you look at the architecture of the Grace Blackwell racks, it was completely focused on doing one thing: processing the LLM. All of a sudden, one year later, you're looking at a Vera Rubin rack. It has storage accelerators. It has this incredible new CPU called Vera. It has Vera Rubin and NVLink 72 to run the LLMs. It also has this new additional rack called Rock. And so this entire rack system is completely different than the previous one, and it's got all these new components in it. And the reason for that is because the last one was designed to run MoE large language models, inference. And this one is to run agents, and agents bang on tools, and- - Obviously, the design of the system had to have been done before Claude Code, Codex, OpenClaw. So you were anticipating the future, essentially. And that comes from what? From the whispers, from understanding what all the state- - No. - ... of the art is about? - No, it's easier than that. You just reason about it. First of all, you just reason: no matter what happens, at some point, in order for that large language model to be a digital worker... Let's just use that metaphor. Let's say that we want the LLM to be a digital worker. What does it have to do? It has to access ground truth. That's our file system. It has to be able to do research. It doesn't know everything. And I don't wanna wait until this AI becomes, you know, universally smart about everything, past, present, and future, before I make it useful. And so therefore, I might as well let it go do research. Obviously, if it wants to help me, it's gotta use my tools. You know, a lot of people would say, "You know, AI is gonna completely destroy software. We don't need software anymore. We don't even need tools anymore." That's ridiculous. Let's use a thought experiment. And you could just sit there, enjoy a glass of whiskey, and think about all these things, and it would become completely obvious. Like, if I were to create the most amazing agent that we can imagine in the next 10 years, let's say it'd be a humanoid robot. If that humanoid robot were to be created, is it more likely that the humanoid robot comes into my house and uses the tools that I have to do the work that it needs to do? Or does its hand turn into a 10-pound hammer in one instance, turn into a scalpel in another instance, and in order to boil water, it beams, you know, microwaves out of its fingers? You know, or is it more likely to just use a microwave, you know? And the first time it goes up to the microwave, it probably doesn't know how to use it. But that's okay. It's connected to the internet. It reads the manual of this microwave, reads it, instantly becomes an expert. And so it uses it.
And so I think I just described, in fact, almost all of the properties of OpenClaw. You know, that it's gonna use tools, that it's gonna access files, that it's gonna be able to do research. It has an I/O subsystem. And when you're done reasoning through it, reasoning about it in that way, then you say, "Oh my gosh, the impact to the future of computing is deeply profound." And the reason for that is, I think we've just reinvented the computer. And then now you say, "Okay, when did we reason about that? When did we reason about OpenClaw?" If you take the OpenClaw schematic that I used at GTC, you'll find it two years ago. Literally, two years ago at GTC, I was talking about agentic systems that exactly reflect OpenClaw today. And, of course, the confluence of many things had to happen. First of all, we needed Claude and GPT and, you know, all of these models to reach a level of capability. So their innovation and their breakthroughs and their continued advances were really important. And then, of course, somebody had to create an open source, you know, project that was sufficiently robust and sufficiently complete that we could all put it to work. And I think OpenClaw did for agentic systems what ChatGPT did for generative systems. And I just think it's a very big deal. - Yeah, it's a really special moment. I'm not exactly sure why it captured so much of the world's attention, but it did, more than Claude Code and Codex and so on. - Because consumers could reach it. - Sure, yeah. But also, so much of this is vibes. And Peter, I had a podcast with him, he's a wonderful human being. So part of it is also the humans that represent the thing. - Yeah, no doubt. - Part of it is memes and the... 'Cause we're all trying to figure it out. There are really serious and complicated security concerns: when you have such powerful technology, how do you hand over your data so it can do useful stuff? But then there are scary things associated with that. And we, as individual people and as a civilization, are figuring out how to find that right balance. - Yeah, we jumped on it right away and we sent a bunch of security experts its way. And we did this thing called OpenShell. It's already been integrated into OpenClaw. - And NVIDIA put forward NemoClaw. - Yep, exactly. - It installs super easy. It makes sure that it's secure. - We give you two out of three rights. Agentic systems can access sensitive information, they can execute code, and they can communicate externally. We could keep things safe if we gave you two out of those three capabilities at any time, but not all three. And on top of those two out of three capabilities, we also give you access control based on whatever rights you're given by the enterprise. And then we connect it to a policy engine that all these enterprises already have. And so we're going to try to do our best to help OpenClaw become a better claw. - So you eloquently explained how we have a long history of blockers that we thought were going to be blockers, and we overcame them. But now, looking into the future, what do you think might be the blockers, now that it's clear that agents will be everywhere? Obviously, we're going to need compute. So what is going to be the blocker for that scaling? - Power is a concern, but it's not the only concern. But that's the reason why we're pushing so hard on extreme co-design, so that we can improve the tokens per second per watt by orders of magnitude every single year.
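The "two out of three rights" rule Huang describes above is simple enough to state as a policy check. A minimal sketch, purely illustrative; the names below are assumptions, not how OpenShell or NemoClaw actually implement it:

```python
# Sketch of the "two out of three" rule for agentic systems: an agent may hold
# at most two of {access sensitive data, execute code, communicate externally}.
from itertools import combinations

RIGHTS = {"access_sensitive_data", "execute_code", "communicate_externally"}

def grant_allowed(requested: set[str]) -> bool:
    """Allow any subset of the three rights except all three at once."""
    return requested <= RIGHTS and len(requested) < 3

# Every pair of rights is permitted; the full triple is refused.
for pair in combinations(sorted(RIGHTS), 2):
    assert grant_allowed(set(pair))
assert not grant_allowed(RIGHTS)
```

The intuition: an agent that can read secrets and run code but not talk to the outside world cannot exfiltrate anything, and one that can run code and communicate but not read secrets has nothing sensitive to leak.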
In the last 10 years, Moore's Law would have progressed computing about 100 times. We progressed and scaled up computing by a million times in the last 10 years. And so we're gonna keep on doing that through extreme co-design. So energy efficiency, perf per watt, completely affects the revenues of a company. It affects the revenues of a factory. And we're just going to push that to the limit so that we can keep on driving token costs down as fast as we can. You know, our computer price is going up, but our token generation effectiveness is going up so much faster that token cost is coming down. It's coming down an order of magnitude every year. - So power, that's an interesting one. So the way to try to get around the power blocker is, with the tokens per second per watt, to try to make it more and more efficient. Of course, there's the question of how do we get more power. - We should also get more power. - That's a really complicated one. You've talked about small modular nuclear power plants. There's all kinds of ideas for energy. How much does it keep you up at night, the bottlenecks in the supply chain of AI, like ASML with EUV lithography machines, TSMC with advanced packaging like CoWoS, and SK Hynix with high-bandwidth memory? - All the time, and we're working on it all the time. No company in history has ever grown at the scale that we're growing at while accelerating that growth. It's incredible. And it's hard for people to even understand this. In the overall world of AI computing, we're increasing share. And so the supply chain, upstream and downstream, is really important to us. I spend a lot of time informing all the CEOs that I work with: what are the dynamics that are going to cause the growth to continue or even accelerate? It's part of the reason why, to the entire right-hand side of me, were CEOs of practically the entire IT industry upstream and practically the entire infrastructure industry downstream. There were several hundred CEOs. And I don't think there have ever been keynotes where several hundred CEOs show up. And part of it is, I'm telling them about our business condition now. I'm telling them about the growth drivers in the very near future and what's happening. And I'm also describing where we are going to go next, so that they can use all of this information and all of the dynamics that are here to inform how they want to invest. And so I inform them that way, like I inform my own employees. And then, of course, I make trips out to them and make sure that, "Hey, listen, I want you to know this quarter, this coming year, this next year, these things are going to happen." If you look at the CEOs of the DRAM industry: the number one DRAM in the world was DDR memory for CPUs in data centers. About three years ago, I was able to convince several of the CEOs that even though at the time HBM memory was used quite scarcely, you know, and barely, by supercomputers, this was going to be a mainstream memory for data centers in the future. And at first it sounded ridiculous, but several of the CEOs believed me and decided to invest in building HBM memories. Another memory that was rather odd to put into a data center is the low-power memory that we use for cell phones. And we wanted them to adapt it for supercomputers in the data center. And they go, "Cell phone memory for supercomputers?" And I explained to them why. Well, look at these two memories, LPDDR5 and HBM4. The volumes are so incredible.
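A back-of-envelope on why perf per watt "completely affects the revenues of a factory," as Huang puts it above: for a power-limited site, tokens per second per watt multiplies directly into output and therefore revenue. All numbers here are hypothetical:

```python
# Hypothetical AI-factory economics: at a fixed power budget, revenue scales
# linearly with tokens/sec/watt, which is why efficiency gains matter so much.
site_power_watts = 1e9            # a 1-gigawatt facility (assumption)
tokens_per_sec_per_watt = 10.0    # whole-system efficiency (assumption)
usd_per_million_tokens = 2.0      # sale price of tokens (assumption)

tokens_per_sec = site_power_watts * tokens_per_sec_per_watt
usd_per_day = tokens_per_sec * 86_400 / 1e6 * usd_per_million_tokens
print(f"${usd_per_day:,.0f} per day")  # doubles if tokens/sec/watt doubles
```

Under these toy numbers that comes to roughly $1.7 billion per day; the point is the linearity, not the forecast. An order-of-magnitude efficiency gain at fixed power is an order of magnitude in output, which is the same lever that lets token prices fall while the hardware gets more expensive.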
All three of them had record years in their history, and these are 45-year-old companies. And so, you know, that's part of my job: to inform and shape and inspire, you know. - So you're not just manifesting the future and maybe inspiring NVIDIA, the different engineers of the company. You're manifesting the supply chain of the future. So you're having conversations with TSMC, with ASML. - Upstream, downstream. - Upstream, downstream. So that's the thing. - GEV, Caterpillar. Yeah, that's downstream from us. Yeah, yeah, there you go. - Yeah, the whole thing. I mean, there's so much incredibly difficult engineering that happens in the entire semiconductor industry, and it just feels scary how intricate the supply chain is, how many components there are, but it works somehow. - Exactly. The deep science, the deep engineering, the incredible manufacturing, and so much of the manufacturing is already robotics. But we have a couple of hundred suppliers that contribute the technology that goes into our 1.3-million-component rack. Each rack is 1.3, one and a half million components. There are 200 suppliers across the Vera Rubin rack. - So it's interesting that you don't list that as the thing that keeps you up at night in the list of blockers. - But I'm doing all the things necessary to- - Okay. - ... yeah, see? I can go to sleep because I checked it off. I go, well, let's see, let's reason about this. What's important for us? Because we changed the system architecture from the original DGX-1 that you remember to NVLink-72 rack-scale computing, what does that mean? What does that mean to software? What does that mean to engineering? What does that mean to how we design and test? And what does that mean to the supply chain? Well, one of the things it meant was we moved supercomputer integration at the data center into supercomputer manufacturing in the supply chain. If you're doing that, you also have to recognize that if the total footprint of whatever data centers you're gonna build, let's say you would like to have, you know, 50 gigawatts of supercomputers running simultaneously, and it takes one year to manufacture that 50 gigawatts of supercomputers, then each week the supply chain is gonna need a gigawatt of power (50 gigawatts over roughly 50 weeks is about a gigawatt a week). And so we're gonna need the supply chain to increase the amount of power it has, to build and test the supercomputers in the supply chain before I ship them. Well, NVLink-72 literally builds supercomputers in the supply chain and ships 'em two, three tons at a time per rack. They used to come in parts, and we used to assemble 'em inside the data center. But that's impossible now, because NVLink-72 is so dense. And so that's an example. And I would have to, you know, fly into the supply chain, go meet my partners, saying, "Hey, guess what? This is the way we used to build our DGXs. Now we're gonna build them this way. This is gonna be so much better, because we're going to need 'em for inference. The market for inference is, you know, coming. The inflection point for inference is coming. It's gonna be a big market."
And so I first explain to them what's going on and why it's gonna happen, and then I ask 'em to make several billion dollars of capital investments each. And because, you know, they trust me and I'm very respectful of 'em, and I give 'em every opportunity to question me, and I spend time to explain things to people and I reason about it. I draw pictures and I reason about it from first principles. And by the time I'm done with them, they know what to do. - So a lot of it is about relationships and building a shared view of the future. But do you worry about certain bottlenecks? I mean, what are the biggest bottlenecks in the supply chain? Are you worried about ASML's EUV tooling? Are you worried about the packaging, the CoWoS packaging of TSMC, about how fast it could scale? Like you said, you're not only growing incredibly fast, you're accelerating your growth. So it feels like everybody in the supply chain, and those are certainly bottlenecks, would have to scale up. Are you having conversations with them, like, how can you scale up faster? Do you worry about it? - No. - Okay. - Because I told 'em what I needed. They understood what I need. They told me what they're gonna go do, and I believe them, what they're going to do. - Interesting. That's great to hear. So maybe if we can just linger on the power for a little bit. What are your hopes for how to solve the energy problem? - One of the areas, Lex, that I would love us to talk about, and just get the message out: you know, our power grid is designed for the worst-case condition with some margin. Well, 99% of the time, we're nowhere near the worst-case condition, because the worst-case condition is a few days in the winter, a few days in the summer, and extreme weather. Most of the time we're nowhere near the worst-case condition, and we're probably running around, call it, 60% of peak. And so 99% of the time, our power grid has excess power, just sitting idle. But it has to be there sitting idle because, just in case, when the time comes, hospitals have to be powered and, you know, infrastructure has to be powered and airports have to run and so on and so forth. And so the question that I have is whether we could go and help them understand, and create contractual agreements, and design computer architectures, systems, data centers, such that when they need the maximum power for infrastructure in society, the data centers would get less. But that's a very rare instance anyway. And during that time, we either have a backup generator for that little part of it, or we just have our computers shift the workload somewhere else, or we have the computers just run slower. You know, we could degrade our performance, reduce our power consumption, and provide a, you know, slightly longer latency response when somebody asks for an answer. And so I think that that way of using computers, of building data centers, instead of expecting 100% uptime and these contracts that are really, really quite rigorous, it's putting a lot of pressure on the grid. Now they're gonna have to increase from their maximum. I just wanna use their excess. It's just sitting there. - Yeah, that's not talked about enough. So what's stopping that? Is it regulation? Is it bureaucracy? - I think it's a three-way problem. It starts with the end customer. The end customer puts requirements on the data centers that they can never not be available, okay?
So the end customer expects perfection. Now, in order to deliver that perfection, you need a combination of backup generators and your grid power supplier to deliver on perfection. And so everybody's gotta have six nines. Well, I think, first of all, right now we ought to have everybody understand that when the customer asks for these things, you have somebody in your data center operations team disconnected from the CEO. I bet the CEO doesn't know this. I'm gonna talk to all the CEOs. The CEOs are probably not paying any attention to the contracts that are being signed, and so everybody wants to sign the best contract, of course. And it goes down to the cloud service providers, and the two contract negotiators... I could just see them now, you know, negotiating these multi-year contracts. Both sides want, you know, the best contract. As a result, the CSPs then have to go down to the utilities, and they expect the six nines. And so I think the first thing is just to make sure that all of the customers, the CEOs and the customers, realize what they're asking for. Now, the second thing is we have to build data centers that gracefully degrade. And so if the utility, if the grid, tells us, "Listen, we're gonna have to back you down to about 80%," we're gonna say, "That's no problem at all." We're just gonna move our workload around. We're gonna make sure that data's never lost, but we can reduce the computing rate and use less energy. The quality of service degrades a little bit. For the critical workloads, I shift them somewhere else right away so I don't have that problem, to, you know, whichever data center still has 100% uptime. - How difficult of an engineering problem is that, that smart, dynamic allocation of power in a data center? - As soon as you can specify it, you can engineer it. - Beautifully put. - So long as it obeys the laws of physics and first principles, I think we're good. - What was the third thing you were mentioning? - So the second thing is the data centers. And the third thing is we need the utilities to also recognize that this is an opportunity. Instead of saying, "Look, it's gonna take me five years to increase my grid capability," they could say, "If you're willing to take power with this level of guarantee, I can make it available for you next month, and at this price." And so if utilities also offered more segments of power delivery promises, then I think everybody will figure out what to do with it. There's just way too much waste in the grid right now. We should go after it. - You've lauded Elon and xAI's accomplishment in Memphis, building the Colossus supercomputer, probably in record time, in just four months. It's now at 200,000 GPUs and growing very quickly. Is there something you could speak to that you understand about his approach that's instructive, broadly, to all the data center builders, that enabled that kind of accomplishment? His approach to engineering, his approach to the whole management of construction, everything? - First of all, Elon is deep in so many different topics, yet he's also a really good systems thinker. And so he's able to think through multiple disciplines, and he obviously pushes things, questions everything: number one, is it necessary? Number two, does it have to be done this way? And then, number three, does it have to take this long?
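Looping back to the graceful-degradation idea above: the mechanics Huang sketches (back down to 80%, keep data safe, migrate critical work, slow the rest) fit in a few lines. A hypothetical policy sketch, not a description of any shipping system:

```python
# Hypothetical graceful-degradation policy for a grid-curtailed data center:
# migrate latency-critical jobs to a site with full power; throttle the rest.
def plan_under_curtailment(power_cap_fraction: float,
                           workloads: list[dict]) -> list[dict]:
    plan = []
    for job in workloads:
        if job["critical"]:
            # Critical work keeps 100% quality of service somewhere else.
            plan.append({**job, "action": "migrate"})
        else:
            # Batch work runs slower: less power, longer latency, no data loss.
            plan.append({**job, "action": "throttle", "rate": power_cap_fraction})
    return plan

jobs = [{"name": "serving", "critical": True},
        {"name": "training", "critical": False}]
print(plan_under_curtailment(0.8, jobs))  # grid asked for an 80% backdown
```

The contractual side is the hard part: customers, CSPs, and utilities all have to agree that this mode exists, which is exactly the three-way problem he describes.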
And so he has the ability to question everything, to the point where everything is down to the minimal amount that's necessary, and you can't take anything else out, and yet the necessary capabilities of the product remain, you know? And so he is as minimalist as you could possibly imagine, and he does it at a system scale. I also love the fact that he is present at the point of action. You know, he'll just go there. If there's a problem, he'll just go there: "Show me the problem." You know, when you do all of this in combination, you overcome a lot of the previous "this is just the way we do it," "I'm waiting for them," you know. I mean, everybody has a lot of excuses. And then the last thing is, when you act personally with so much urgency, it causes everybody else to act with urgency, you know? Every supplier has a lot of customers. Every supplier has a lot of projects going on, and he makes it his business to be the top priority of everybody else's, you know, projects. And he does that by demonstrating it. - Yeah, I've been in a bunch of those meetings. It's fun to watch, 'cause really, not enough people ask the question, like, "Okay, so can this be done a lot faster, and how? Why does it have to take this long?" - Yeah, right. - And that becomes an engineering question often. And yes, I think when you get the ground truth of actually... I remember, one of the times I was hanging out with him, he literally was going through the entire process of how to plug cables into a rack. He was working with an engineer on the ground who was doing that task, and he was just trying to understand what that process looks like so it can be less error-prone. And just building up that intuition from every single task involved in putting together a data center, you start to immediately get a sense, at the detailed scale and at the broad systems scale, of where the inefficiencies are, and so you can make it more and more and more efficient. Plus you have the big hammer of being able to say, "Let's do it totally different-" - Yeah. That's right. - "... and remove all possible blockers." - That's right. - Are there parallels in the NVIDIA extreme systems co-design approach that you see in the way Elon approaches systems engineering? - Well, first of all, co-design is an ultimate systems engineering problem. And so we approach the work that we do from that perspective first. The other thing that we do, and this is a philosophy, a thought, a state of mind, I guess, a method that I started 30 years ago, is called the speed of light. The speed of light is not just about speed. The speed of light is my shorthand for the limit of what physics can do. And so everything that we do is compared against the speed of light. Memory speed, math speed, power, cost, time, effort, number of people, manufacturing cycle time. When you think about latency versus throughput, when you think about cost versus throughput, cost versus capacity, all of these things, you test against the speed of light to achieve all of these different constraints separately.
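"Speed of light" thinking can be made concrete: compute the physical floor for a task and measure how far you are from it. A toy sketch of a memory-bandwidth bound (all numbers are assumptions, not NVIDIA figures):

```python
# "Speed of light" check: compare achieved runtime against the physical floor
# implied by hardware limits (here, memory bandwidth on a bandwidth-bound job).
peak_bandwidth_bytes_per_s = 8e12   # HBM-class peak bandwidth (assumption)
bytes_moved = 4e12                  # data a kernel must read + write (assumption)
measured_runtime_s = 0.75           # what the current implementation takes

speed_of_light_s = bytes_moved / peak_bandwidth_bytes_per_s  # 0.5 s floor
print(f"{speed_of_light_s / measured_runtime_s:.0%} of speed of light")  # 67%
```

The same test applies to cost, cycle time, or headcount: first establish the limit, then treat the gap between the limit and today as the engineering agenda, which is also how he reframes the 74-days-to-six-days example below.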
And then when you consider it together, you know you have to make compromises, because a system that achieves extremely low latency and a system that achieves very high throughput are architected fundamentally differently. But you want to know the speed of light of a system that achieves high throughput and the speed of light of a system that achieves low latency, and then, when you think about the total system, you can make trade-offs. So I force everybody to think about the first principles, the physical limits, for everything before we do anything, and we test everything against that. That's a good frame of mind. I don't love the other method, which is continuous improvement. The problem with continuous improvement is this: first, you should engineer something from first principles, with speed-of-light thinking, limited only by physics. After that, of course you would improve it over time. But I don't like going into a problem where somebody says, "Hey, it takes 74 days to do this today, and we can do it for you in 72 days." I'd rather strip it all back to zero and say, "First of all, explain to me why 74 days in the first place. Let's think about what's possible. If I were to build it completely from scratch, how long would it take?" Oftentimes you'd be surprised. It might come to six days. Now, the rest of the 74 days beyond those six could be very well-reasoned: compromises, cost reductions, all kinds of different things. But at least you know what they are. And once you know that six days is possible, the conversation from 74 to six is, surprisingly, much more effective. - In such incredibly complex systems that you're working with, is simplicity sometimes a good heuristic to reach for? I mean, the Vera Rubin pod that you announced is just incredible. We're talking about seven chip types, five purpose-built rack types, 40 racks, 1.2 quadrillion transistors, nearly 20,000 NVIDIA dies, over 1,100 Rubin GPUs, 60 exaflops, 10 petabytes per second of scale-up bandwidth. That's all just one... - That's just one pod. - Yeah, that's just one pod. And even the NVL72 rack alone is 1.3 million components, 1,300 chips, 4,000 pounds crammed into a single 19-inch-wide rack. - And Lex, we're probably gonna have to crank out about 200 of these pods a week, just to put it in perspective. - Given the sheer number of different components, I suppose simplicity is impossible, but is that a metric that you reach for in trying to design things? - The phrase that I use most often is: we need things to be as complex as necessary, but as simple as possible. So the question is, is all that complexity necessary? We ought to test for that, and we've got to challenge that. Everything above that is gratuitous. - It's still almost incredible. The semiconductor industry broadly, and what NVIDIA is doing in particular, is some of the greatest engineering in history. These systems are truly marvels of engineering. - It is the most complex computer the world has ever made. - Yeah, the engineering teams... It's not a competition, but I don't know.
If there were an Olympics of engineering teams... I mean, TSMC does incredible engineering; so does ASML, at every scale. But NVIDIA is gonna give them a run for their money. Just incredible teams. - Well, it's gold medalists in every single sport, all assembled right here. - And they have to work together, and report directly to you. This is wonderful. You recently traveled to China, so it's interesting to ask you: China has been incredibly successful in building up its technology sector. What do you understand about how China has been able, over the past 10 years, to build so many incredible world-class companies, world-class engineering teams, and this whole technology ecosystem that produces so many incredible products? - A whole bunch of reasons. Well, first of all, let's start with some facts. 50% of the world's AI researchers are Chinese, plus or minus, and they're mostly in China still. We have many of them here, but there are amazing researchers still in China. Their tech industry showed up at precisely the right time. At the time of the mobile-cloud era, their way of contributing was software, and this is a country with incredible science and math and really well-educated kids. Their tech industry was created during the era of software, so they're very comfortable with modern software. China is not one giant economic country. It's got many provinces and cities, with mayors all competing with each other. That's the reason there are so many EV companies, so many AI companies, so many of every company you could imagine. As a result, they have insane competition internally, and what remains is an incredible company. They also have a social culture where it's family first, friends second, and company third. With the amount of conversation that goes back and forth, they're essentially open source all the time. The fact that they contribute more to open source is so sensible, because they're probably thinking, "What are we protecting? My engineers' brothers are in that company, their friends are in that company, and they're all schoolmates." There's the schoolmate concept: a schoolmate is a brother for life. So they share knowledge very, very quickly, and there's no sense keeping technology hidden. You might as well put it on open source, and the open source community then amplifies and accelerates the innovation process. So you get great talent, rapid innovation because of open source and the nature of those friendships, and insane competition among the companies, and what emerges is incredible stuff. This is the fastest-innovating country in the world today, and everything that I've just said is fundamental to how the kids were raised: the fact that they have excellent education, the fact that parents want them to do well in school, the fact that their culture is that way. These are just things about their country, and they showed up at precisely the time when technology is going through this exponential. - Plus, culturally, it's pretty cool to be an engineer. It connects to all the components that you're mentioning... - It's a builder nation. - Yeah, it's a builder nation. Our country's leaders are incredible, but they're mostly lawyers.
Their country's leaders are different: ours are focused on keeping us safe, on rule of law, on governing, while their country was built out of poverty, and so most of their leaders are incredible engineers, some of the brightest minds. - To take a small tangent, because you mentioned open source: I have to go to Perplexity here, which you have been a fan of for a long time. - Love it, yeah. - And thank you for releasing the open source Nemotron 3 Super, which you can also use inside Perplexity to look stuff up, and which is a 120-billion-parameter open-weight MoE model. What's your vision with open source? You mentioned China; with DeepSeek and MiniMax, all these companies are really pushing forward the open source AI movement, and NVIDIA is really leading the way in close-to-state-of-the-art open source LLMs. What's your vision there? - First, if we're gonna be a great AI computing company, we have to understand how AI models are evolving. One of the things that I love about Nemotron 3 is that it's not just a pure transformer model; it's transformer and SSMs. And we were early in developing the conditional GANs, the progressive GANs, which led step by step to diffusion. The fact that we're doing basic research in model architecture and in different domains gives us visibility into what kind of computing systems would do a good job for future models. So it's part of our extreme co-design strategy. Second, I think we rightfully recognize that, on the one hand, we want world-class models as products, and they should be proprietary. On the other hand, we also want AI to diffuse into every industry, every country, every researcher, every student. If everything is proprietary, it's hard to do research and hard to innovate on top of it, around it, with it. Open source is fundamentally necessary for many industries to join the AI revolution. NVIDIA has the skills, the scale, and the motivation to build and keep building these AI models for as long as we shall live, and so we ought to do that. We can activate every industry, every researcher, every country to join the AI revolution. There's a third reason, which is recognizing that AI is not just language. These AIs will likely use tools and models and sub-agents that were trained on other modalities of information. Maybe it's biology or chemistry or the laws of physics or fluids and thermodynamics, and not all of it is in language structure. Somebody has to make sure that weather prediction, AI for biology, physical AI, all of that stuff can be pushed to the limits, pushed to the frontier. We don't build cars, but we wanna make sure every car company has access to great models. We don't discover drugs, but I wanna make sure that Lilly has the world's best biology AI systems, so that they can use them for discovering drugs. So those are the three fundamental reasons: recognizing that AI is not just language, that AI is really broad; wanting to engage everybody in the world of AI; and the co-design of AI. - Well, I have to say, once again, thank you for truly open sourcing Nemotron 3. - Yeah, I appreciate you saying that. We open sourced the models, we open sourced the weights, we open sourced the data, we open sourced how we created it. It's pretty amazing. - It's really incredible.
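For readers who want to try an open-weight model like the one discussed, a minimal sketch using the Hugging Face transformers library might look like the following; the repository name is a placeholder assumption, not a confirmed model ID:

```python
# A sketch of pulling an open-weight model with the Hugging Face
# transformers library. The repo name is a placeholder, not a confirmed ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/nemotron-3-super"  # hypothetical repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard a large MoE across available GPUs
    torch_dtype="auto",  # keep the dtype the weights were published in
)

prompt = "Explain extreme co-design in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```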
You're originally from Taiwan and have a close relationship with TSMC, so I have to ask: TSMC, I think, is also a legendary company in terms of its engineering teams and the incredible engineering work they do. What do you understand about TSMC's culture and approach that explains how they're able to achieve this singular, unmatched success in everything they're doing with semiconductors? - You know, first of all, the deepest misunderstanding about TSMC is that their technology is all they have; that somehow they have a really great transistor, and if somebody shows up with another transistor, game over. And of course, I don't mean just the transistor: the metallization systems, the packaging, the 3D packaging, the silicon photonics, all of the technology that they have. That technology is part of what makes the company special. But so is their ability to orchestrate the dynamic demands of hundreds of companies in the world, as those companies are moving up, shifting out, increasing, decreasing, pushing out, pulling in, changing from customer to customer: wafer starts, wafer stops, emergency wafer starts, all the dynamics of the world's complexity as the world is shape-shifting all the time. Somehow they're running a factory with high throughput, high yields, really great costs, and excellent customer service. They take their promises seriously, because they know they're helping you run your company: when the wafers were promised to show up, the wafers show up, so that you can run your company appropriately. Their manufacturing system is completely miraculous, I would say. The second thing is their culture. That culture is simultaneously technology-focused on the one hand, advancing technology, and customer-service-oriented on the other. A lot of companies are very customer-service-oriented but not at the bleeding edge of technology. A lot of companies are at the bleeding edge of technology but are not the best at customer service. Somehow they've balanced the two, and they're world-class at both. And probably the third thing, the one I most value in them, is that they created this intangible called trust. I trust them to put my company on top of them. That's a very big deal. - And that trust, I mean, there's a really close relationship there that you've established, and that trust is based on many years of performance, but there are human relationships involved as well. - Three decades. I don't know how many tens, hundreds of billions of dollars of business we've done through them, and we don't have a contract. That's pretty great. - Amazing. Okay, there's this story that in 2013 the founder of TSMC, Morris Chang, offered you the chance to become TSMC's chief executive, and you said you already had a job. Is this story true? - The story is true. I didn't dismiss it, but I was deeply honored, and of course I knew then, as I know now, that TSMC is one of the most consequential companies in history, and Morris is one of the most highly regarded executives, and a business and personal friend, that I've had in my life.
For him to ask, I was humbled and really honored. But the work that I'm doing here is really important, and I've seen, in my mind's eye, what NVIDIA was going to be and the impact that we could have. It was really important work, and it's my responsibility, my sole responsibility, to make it happen. And so I declined, not because it wasn't an incredible offer. It's an unbelievable offer, but I simply couldn't take it. - I think NVIDIA and TSMC are two of the greatest companies in the history of human civilization, and running either one, I'm sure, is an incredibly complicated effort. You have to truly be all in, everybody at every scale, not just at the CEO level. Everybody is really, truly all in- - Yeah. Yeah, no doubt. - ... to accomplish this kind of complexity. - So now I can help both companies. - Exactly. So NVIDIA is now the most valuable company in the world. I have to ask: what is NVIDIA's biggest moat, as the folks in the tech sector say? The edge you have that protects you from the competition. - Our single most important property as a company is the install base of our computing platform. Our single most important thing today is the install base of CUDA. Now, 20 years ago, of course, there was no install base. But if somebody came up with a GUDA or a TUDA today, it wouldn't make any difference at all, and the reason is that it has never been just about the technology. The technology, of course, was incredibly visionary. But it's the fact that the company was dedicated to it, stuck with it, expanded its reach. It wasn't three people that made CUDA successful; it was 43,000 people that made CUDA successful, plus the several million developers that believed in us, that trusted we were going to continue to make CUDA 1, 2, 3, 13, and that decided to port and dedicate their mountain of software on top of it. So the install base is the number one most important advantage. Then you amplify that install base with the velocity of our execution, at the scale we're talking about: no company in history has ever built systems of this complexity, period, and to build one every year seems impossible. Take that velocity combined with the install base from the developer's perspective: if I support CUDA, tomorrow it'll be 10 times better; I just have to wait six months on average. Not only that, if I develop on CUDA, I reach a few hundred million computers. I'm in every cloud, every computer company, every single industry, every single country. So if I create an open source package and I put it on CUDA first, I get both attributes simultaneously. And beyond that, I trust 100% that NVIDIA is going to keep CUDA around, maintain it, improve it, and keep optimizing the libraries for as long as they shall live. You can take that to the bank, and that last part is trust. Put all of that together, and if I were a developer today, I would target CUDA first and target CUDA most. That, I think, in the final analysis, is our first core advantage. Our second is our ecosystem: the fact that we vertically integrated this incredibly complex system, but we integrate it horizontally into every single company's computers.
- We're in Google Cloud, we're in Amazon, we're in Azure. We're ramping up AWS like crazy right now. We're in new companies like CoreWeave and Nscale. We're in supercomputers at Lilly. We're in enterprise computers. We're at the edge in radio base stations. I mean, it's just crazy: one architecture is in all these different systems. We're in cars, we're in robots, we're in satellites, we're out in space. The fact that you have this one architecture, and the ecosystem is so broad, means it basically covers every single industry in the world. - How does the CUDA install base evolve into the future, with AI factories as a moat? Do you think it's possible that the NVIDIA of the future is all about the AI factory? - Well, the unit of computing used to be the GPU to us. Then it became a computer, then a cluster. Now it's an entire AI factory. In the old days, when I saw what NVIDIA builds, I would visualize the chip, and when I announced a new product, a new generation, "Ladies and gentlemen, we're announcing Ampere today," I'd pick up the chip. That was my mental model of what I was building. Today, picking up the chip is still kind of adorable, but it's not my mental model of what I'm doing. My mental model is this giant gigawatt thing that has power generation connected to the grid. It's got cooling systems and networking of incredible monstrosity. Ten thousand people are in there trying to install it, hundreds of networking engineers, thousands of engineers behind it trying to power it up. Powering up one of those factories, as you know, is not somebody flipping a switch and saying, "It's on now." It takes thousands of people to bring it up. - So mentally, when you're thinking about a single unit of compute, when you go to bed at night, you're now thinking about collections of racks, about pods, not individual chips. - The entire infrastructure. And I'm hoping my next click, when I'm thinking about building computers, is planetary scale. That'll be the next click. - What do you think about the space angle that Elon has talked about, doing compute in space? It makes some of the energy issues in scaling easier. - Cooling is not easy, though. Yeah. - Cooling, right. There's a large number of engineering complexities involved with that. And NVIDIA has announced that you're already thinking about it. - Yeah, we're already there. NVIDIA GPUs are the first GPUs in space. I didn't even realize it at first; it was so interesting. I would have declared it, maybe put a little astronaut suit on one of our GPUs. But we've been in space. It's the right place to do a lot of imaging, because those satellites have really high-resolution imaging systems, and they're sweeping the Earth continuously now. You want centimeter-scale imaging done continuously for the world, so that you basically have real-time telemetry of everything. You don't wanna beam that back down to Earth; it's petabytes and petabytes of data. You've gotta do AI right there at the edge: throw away everything you don't need, everything you've seen before, everything that didn't change, and keep only the stuff that you need. And so AI had to be done at the edge.
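The keep-only-what-changed filtering Huang describes could, in spirit, look like this toy sketch; real on-orbit systems would use learned models rather than a pixel threshold, and every name and number here is an assumption:

```python
# Toy version of keep-only-what-changed filtering at the satellite edge.
# Real systems would use learned detectors; thresholds here are assumptions.
import numpy as np

CHANGED_FRACTION = 0.05  # assumed: share of pixels that must differ

def changed(prev: np.ndarray, curr: np.ndarray, tol: int = 10) -> bool:
    """True if enough pixels moved by more than `tol` intensity levels."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > tol).mean() > CHANGED_FRACTION

def filter_stream(tiles):
    """Yield only image tiles worth downlinking; drop the rest on board."""
    last_kept = None
    for tile in tiles:
        if last_kept is None or changed(last_kept, tile):
            last_kept = tile
            yield tile  # everything not yielded is discarded at the edge
```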
Obviously we have 24/7 solar if we put it at the poles. But there's no conduction, no convection, so you're pretty much left with just radiation. Then again, space is big; I guess we're just gonna put big, giant radiators out there. - How crazy of an idea do you think it is? Is this five years out, 10 years out, 20 years out? We're talking about blockers for AI scaling. - You know, I'm just so much more practical. I look for where my next bucket of opportunities is first. Meanwhile, I'm cultivating space, so I send engineers to go work on the problem, and we're learning a lot about it. How do we deal with radiation? How do we deal with degrading performance? How do we deal with continuous testing and attestation of defects? How do we deal with redundancy, and how do we degrade gracefully, and things like that? - What about software? How do you think about software, redundancy, and performance out in space? - Make it so that the computer never breaks; it just gets slower. So we can start doing a lot of engineering exploration up front. But in the meantime, my favorite answer is: eliminate waste. We've got all that idle power, and I want to evacuate it as fast as possible. - Yeah. There's a lot of low-hanging fruit here on Earth that we can utilize for AI scaling. Quick pause. Quick 30-second thank you to our sponsors. Check them out in the description. It really is the best way to support this podcast. Go to lexfridman.com/sponsors. We've got Perplexity for curiosity-driven knowledge exploration, Shopify for selling stuff online, LMNT for electrolytes, Fin for customer service AI agents, and Quo for a phone system, like calls, texts, contacts, for your business. Choose wisely, my friends. And now, back to my conversation with Jensen Huang. Do you think NVIDIA may be worth $10 trillion at some point? Let's ask it this way: what does the future of the world look like where that's true? - I think that NVIDIA's growth is extremely likely, and in my mind, inevitable. Let me explain why. We're the largest computer company in history. That alone should beg the question: why? Two foundational technical reasons. The first is that computing went from being a retrieval-based, file-retrieval system. Almost everything is a file: we pre-write something, we pre-record something, we draw something, we put it on the web, we put it in a file. And we use a recommender system, some smart filter, to figure out what to retrieve for you. So we were a human-pre-recording, file-retrieving system. That's what a computer is, largely. Now, AI computers are contextually aware, which means they have to process and generate tokens in real time. So we went from a retrieval-based computing system to a generative-based computing system. We're gonna need a lot more processing in this new world than in the old one. We needed a lot of storage in the old world; we need a lot of computation in this new world. So that's the first part: we fundamentally changed computing and the way computing is done. The only thing that would cause it to go back...
is if this way of computation, generating information that's contextually relevant, situationally aware, and grounded in new insight before it generates, this computation-intensive way of doing computing, turned out not to be effective. For the last 10, 15 years of working on deep learning, if at any single moment I had come to the conclusion, "You know what? This is not gonna work out. This is a dead end," or, "It's not gonna scale, it's not gonna solve this modality, it's not gonna be used in this application," then of course I would feel very differently about it. But the last five years have given me more confidence than the previous ten. The second idea is that the computer, because it was a storage system, was largely a warehouse. We're now building factories. Warehouses don't make much money; factories correlate directly with a company's revenues. So the computer did two things: not only did it change the way it works, its purpose in the world changed. It's no longer a computer; it's a factory, used for generating revenue. And we're now seeing that not only is this factory generating products, commodities that people want to consume, but the commodities are so interesting, so valuable, to so many different audiences that the tokens are starting to segment, like iPhones. - Mm-hmm. - You have free tokens, you have premium tokens, and you have several tiers in the middle. - Yeah. - Intelligence, as it turns out, is a scalable product. There are extremely high-intelligence tokens, used for specialized things, that people will be willing to pay for. The idea that somebody is willing to pay $1,000 per million tokens is just around the corner. It's not if, it's only when. So now we're seeing that the commodity this factory makes is actually valuable, revenue-generating, and profit-generating. Now the questions are: how many of these factories does the world need? How many tokens does the world need? How much is society willing to pay for these tokens? And what would happen to the world's economy if productivity were to improve so substantially? Are we gonna discover new drugs, new products, new services? When you take these things in combination, I am absolutely certain that the world's GDP is going to accelerate in growth. I'm absolutely certain the percentage of that GDP used for computation will be 100 times more than in the past, because it's no longer a storage unit; it's a product-generation unit. When you look at it in that context, and then back into what NVIDIA does and how much of that new economy we would be able to address, I think we're gonna be a lot, lot bigger. And then the rest of it, to me, is: is it possible for NVIDIA to be a $3 trillion revenue company in the near future? The answer is of course yes, and the reason is that it's not limited by any physical limits. There's nothing that I see that says, gosh, $3 trillion is not possible. As it turns out, NVIDIA's supply-chain burden is shared by 200 companies, and we scale out on the backs of, and in partnership with, this ecosystem. The question is, do we have the energy to do so? And surely we will have the energy to do so.
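Huang's token-factory framing invites a back-of-envelope check. Here is a sketch, with every input an assumption, of how tokens per second per watt and token pricing combine into factory revenue:

```python
# Back-of-envelope token-factory economics. Every input is an assumption;
# the arithmetic, not the numbers, is the point.
factory_power_w = 1e9             # a 1 GW AI factory
tokens_per_sec_per_watt = 1.0     # assumed end-to-end efficiency
price_per_million_tokens = 10.0   # USD, assumed mid-tier token price

tokens_per_sec = factory_power_w * tokens_per_sec_per_watt
revenue_per_sec = tokens_per_sec / 1e6 * price_per_million_tokens
revenue_per_year = revenue_per_sec * 3600 * 24 * 365

print(f"{tokens_per_sec:.1e} tokens/s -> ${revenue_per_year:,.0f}/year")
# Roughly $315B/year under these assumptions: efficiency (tokens/s/W) and
# price per token are the two levers that dominate the outcome.
```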
And so, all of these things combined, that number is just a number. I still remember the first time we crossed a billion dollars. I was reminded of a CEO who told me, "You know, Jensen, it's theoretically impossible for a fabless semiconductor company to exceed a billion dollars." I won't bore you with why, but of course it was illogical, and there's a lot of evidence we're not bound by it. Then somebody told me, "You know, Jensen, you'll never be more than $25 billion, because of some other company." Those aren't first-principled ways of thinking. The simple way to think about it is: what is it that we make, and how large is the opportunity that we can create? Now, NVIDIA is not in the market-share business. Almost everything I just talked about doesn't exist yet. That's the part that's hard. If NVIDIA were a $10 billion company trying to take market share, it would be easy for shareholders to see: oh, if they could just take 10% share, they could be this much larger. But it's hard for people to imagine how large we could be, because there's nobody I can take share from. So I think that's one of the challenges for the world: the imagination of the future. But I've got plenty of time, and I'll keep reasoning about it, and I'll keep talking about it, and every single GTC it will become more and more real. More and more people will talk about it, and one of these days we'll get there. I'm 100% sure we'll get there. - Yeah, this view of token factories, essentially: tokens per second per watt, and every token having value. It's an actual thing that brings value, and it brings different kinds and different amounts of value to different people. The actual product could loosely be thought of as the token. So you have a bunch of token factories, and then it's very easy, from first principles, to imagine a future, given all the potential things that AI can solve, where you're going to need exponentially more token factories. - And what's really interesting, the reason I was so excited about it: the iPhone of tokens arrived. - What do you call it? Wait, are you saying OpenClaw is the iPhone? That's interesting. - Agents. - Yeah, agents. True. - Agents in general. The iPhone of tokens arrived. It is the fastest-growing application in history. It went straight up. - That says something. - Yep, there's no question OpenClaw is the iPhone of tokens. - Is there something truly special happening from about December, where people really woke up to the power of Claude Code, of Codex, of OpenClaw? I'm embarrassed to admit that on the way here, in the airport, for the first time in public, I was programming, quote unquote, by talking to my laptop. And I was embarrassed because I was pretending I was talking to a human colleague. I'm not sure how I feel about a future where everybody is walking around talking to their AI, but it's such an efficient way to get stuff done. - And it's more likely that your AI is bothering you all the time, because it's getting stuff done so fast. It's reporting back to you: "I got…

Transcript truncated.