Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games | Lex Fridman Podcast #475

Lex Fridman | 02:28:14 | Mar 27, 2026
The discussion centers on how classical learning systems can model highly nonlinear dynamics and fluids, suggesting that learnable, lower-dimensional structure underlies real-world materials and patterns.

Demis Hassabis discusses how classical learning systems may model nature's patterns, the future of AI, and how AI could transform science, games, and society.

Summary

Lex Fridman sits down with Demis Hassabis for a wide-ranging discussion about the core ideas shaping AI today. Hassabis argues that patterns in nature, shaped by evolution, geology, and cosmology, could be efficiently learned by classical neural networks, challenging the assumption that only brute-force or quantum methods can crack complex problems. He cites AlphaFold, AlphaGo, and Veo 3 as evidence that neural models can discover underlying structure and dynamics in high-dimensional spaces. The conversation delves into intuitive physics as learned from video data, the potential for open-world, personalized games powered by AI, and how AGI might emerge from increasingly capable but controllable systems. Hassabis also sketches his grand scientific ambitions, including virtual cells and even modeling life's origins, while reflecting on the societal implications, governance, and ethics of fast-moving, transformative technology. Throughout, Fridman presses on topics like the P vs NP analogy, the future of computation and energy, and the role of collaboration across institutions. The dialogue blends neuroscience, computer science, philosophy, and practical product insight, offering a candid view of where AI research is headed and what it could mean for humanity.

Key Takeaways

  • Nature’s patterns may be learnable by classical neural networks, enabling efficient modeling of complex systems like fluids, proteins, and astronomical dynamics.
  • AlphaFold and AlphaGo demonstrate that high-dimensional, combinatorial problems can be tamed by structured learning, guiding search and prediction in milliseconds or seconds.
  • Video-based learning in Veo 3 hints at an implicit, intuitive physics understanding that supports realistic rendering of liquids, lighting, and materials without explicit physics engines.
  • Open-world, adaptive AI in games could redefine storytelling by co-creating narratives with players, moving beyond scripted experiences to dynamic, individualized worlds.
  • There is a strong belief that AGI will require a mix of scaling, new architectures, and intelligent search strategies (e.g., AlphaEvolve) rather than a single breakthrough; progress will be iterative and multi-faceted.
  • Ethical and governance questions around AI are as crucial as technical advances; collaboration across labs and responsible deployment will be essential as capabilities grow.
  • A future with abundant energy (fusion and solar) could unlock transformative societal benefits, making resource constraints a less binding problem and enabling more ambitious AI projects to flourish.

Who Is This For?

Researchers and developers in AI, machine learning engineers, game developers exploring AI-driven gameplay, policymakers and ethics scholars interested in AI governance, and science enthusiasts curious about how AI could accelerate biology and physics research.

Notable Quotes

"Anything that can be evolved can be efficiently modeled. Think survival of the stablest, because structure in nature exists due to long survival processes."
Hassabis articulates a central conjecture about learnability of natural patterns by neural networks.
"We model what the dynamics of the system are and that makes the search for the solution tractable—polynomial time on a classical system."
Explanation of why AI models like AlphaGo/AlphaFold work by learning environment structure.
"Intuitive physics is learned from video; V3 can predict next frames well enough to produce coherent, physically plausible results without explicit hand-crafted physics."
Discussion of what V3 understands about the physical world.
"Open-world AI in games could co-create narratives with players, adapting to how they play and offering personalized, dramatic storytelling."
Future of AI-driven game design and player experience.
"The true test of AGI will be lighthouse moments—inventing a new conjecture, or building a game as deep and elegant as Go—moments that reveal genuine creativity."
Defining signals for AGI beyond narrow task performance.

Questions This Video Answers

  • How might neural networks learn physical intuitions from video data without explicit programs?
  • What is AlphaEvolve, and how does it combine evolution with language models for program search?
  • Could AI systems eventually model a whole cell or the origin of life, and what would that take?
  • What would a truly open-world AI-driven game look like in 5–10 years?
  • Is AGI closer to scaling current models or to making a few big architectural breakthroughs?
Demis Hassabis, Lex Fridman Podcast, Artificial General Intelligence (AGI), AlphaFold, AlphaGo, Veo 3 (video generation), intuitive physics, open-world games, AlphaEvolve, P vs NP analogy (computational complexity)
Full Transcript
It's hard for us humans to make any kind of clean predictions about highly nonlinear dynamical systems. But again, to your point, we might be very surprised what classical learning systems might be able to do about even fluids. Yes, exactly. I mean, fluid dynamics, Navier-Stokes equations, these are traditionally thought of as very, very difficult, intractable problems to do on classical systems. They take enormous amounts of compute; weather prediction systems, these kinds of things, all involve fluid dynamics calculations. But again, if you look at something like Veo, our video generation model, it can model liquids quite well, surprisingly well, and materials, specular lighting. I love the ones where people have generated videos of clear liquids going through hydraulic presses and then being squeezed out. I used to write physics engines and graphics engines in my early days in gaming, and I know it's just so painstakingly hard to build programs that can do that. And yet somehow these systems are reverse engineering it from just watching YouTube videos. So presumably what's happening is it's extracting some underlying structure around how these materials behave. So perhaps there is some kind of lower-dimensional manifold that can be learned, if we actually fully understood what's going on under the hood. That's maybe true of most of reality.

The following is a conversation with Demis Hassabis, his second time on the podcast. He is the leader of Google DeepMind and is now a Nobel Prize winner. Demis is one of the most brilliant and fascinating minds in the world today, working on understanding and building intelligence and exploring the big mysteries of our universe. This was truly an honor and a pleasure for me. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description and consider subscribing to this channel. And now, dear friends, here's Demis Hassabis.

In your Nobel Prize lecture, you propose what I think is a super interesting conjecture, that, quote, any pattern that can be generated or found in nature can be efficiently discovered and modeled by a classical learning algorithm. What kinds of patterns or systems might be included in that? Biology, chemistry, physics, maybe cosmology, neuroscience. What are we talking about? Sure. Well, look, I felt that it's sort of a tradition of Nobel Prize lectures that you're supposed to be a little bit provocative, and I wanted to follow that tradition. What I was talking about there is, if you take a step back and you look at all the work that we've done, especially with the Alpha-X projects, so I'm thinking AlphaGo, of course, AlphaFold, what they really are is models of very combinatorially high-dimensional spaces. If you tried to brute-force a solution, find the best move in Go, or find the exact shape of a protein, and you enumerated all the possibilities, there wouldn't be enough time in the time of the universe. So you have to do something much smarter. And what we did in both cases was build models of those environments, and that guided the search in a smart way, and that makes it tractable. So if you think about protein folding, which is obviously a natural system, why should that be possible? How does physics do that? Proteins fold in milliseconds in our bodies.
So somehow physics solves this problem that we've now also solved computationally. And I think the reason that's possible is that in nature, natural systems have structure because they were subject to evolutionary processes that shaped them. And if that's true, then you can maybe learn what that structure is.

So this perspective, I think, is a really interesting one. You've hinted at it; crudely stated, it's almost: anything that can be evolved can be efficiently modeled. Do you think there's some truth to that? Yeah, I sometimes call it survival of the stablest, or something like that. Because of course there's evolution for living things, but there's also geological time, so the shape of mountains, that's been shaped by weathering processes over thousands of years. And then you can even take it cosmological: the orbits of planets, the shapes of asteroids, these have all survived processes that have acted on them many, many times. So if that's true, then there should be some sort of pattern that you can kind of reverse-learn, a kind of manifold, really, that helps you search toward the right solution, the right shape, and actually allows you to predict things about it in an efficient way, because it's not a random pattern. Now, it may not be possible for man-made things or abstract things like factorizing large numbers, because unless there are patterns in the number space, which there might be, but if there aren't and it's uniform, then there's no pattern to learn, no model to learn that will help you search. So you have to do brute force, and in that case you maybe need a quantum computer, something like this. But most things in nature that we're interested in are not like that. They have structure that evolved for a reason and survived over time. And if that's true, I think that's potentially learnable by a neural network.

It's like nature is doing a search process, and it's so fascinating that that search process is creating systems that can be efficiently modeled. That's right. Yeah. So interesting. So they can be efficiently rediscovered or recovered, because nature is not random, right? Everything that we see around us, including the elements that are more stable, all of those things are subject to some kind of selection pressure. Do you think, because you're also a fan of theoretical computer science and complexity, do you think we can come up with a kind of complexity class, a complexity-zoo type of class, where maybe it's the set of learnable natural systems, LNS? Yeah, a new class of systems that could actually be learnable by classical systems in this kind of way. Natural systems that can be modeled efficiently. Yeah, I mean, I've always been fascinated by the P = NP question and what is modelable by classical systems, i.e., non-quantum systems, Turing machines in effect. And that's exactly what I'm working on, actually, in my few moments of spare time with a few colleagues: should there be maybe a new class of problem that is solvable by this type of neural network process, mapped onto these natural systems, the things that exist in physics and have structure. So I think that could be a very interesting new way of thinking about it.
And it sort of fits with the way I think about physics in general, which is that I think information is primary. Information is the most fundamental unit of the universe, more fundamental than energy and matter. I think they can all be converted into each other, but I think of the universe as a kind of informational system. So when you think of the universe as an informational system, then the P = NP question is a physics question. That's right. And it's a question that can help us actually solve the entirety of this whole thing going on. Yeah, I think it's one of the most fundamental questions, actually, if you think of physics as informational, and the answer to that, I think, is going to be very enlightening.

More specific to the P = NP question, and again, some of the stuff we're saying is kind of crazy right now, just like Christian Anfinsen's Nobel Prize speech: the controversial thing he said sounded crazy, and then you went and got a Nobel Prize, with John Jumper, for solving the problem. So let me just stick to P = NP. Do you think there's something in this thing we're talking about that could be shown, that if you can do something like polynomial-time, or constant-time, compute ahead of time and construct this gigantic model, then you can solve some of these extremely difficult problems, in a theoretical computer science kind of way? Yeah, I think there are actually a huge class of problems that could be couched in this way, the way we did AlphaGo and the way we did AlphaFold, where you model what the dynamics of the system are, the properties of that system, the environment that you're trying to understand, and then that makes the search for the solution, or the prediction of the next step, efficient, basically polynomial time, so tractable by a classical system, which a neural network is; it runs on normal computers, classical computers, Turing machines in effect. And I think it's one of the most interesting questions there is: how far can that paradigm go? I think we've proven, and the AI community in general has proven, that classical systems, Turing machines, can go a lot further than we previously thought. They can do things like model the structures of proteins and play Go to better than world champion level. And a lot of people would have thought, maybe 10, 20 years ago, that that was decades away, or that maybe you would need quantum systems to be able to do things like protein folding. And so I think we haven't really even scratched the surface yet of what so-called classical systems could do. And of course, AGI, being built on a neural network system on top of a classical computer, would be the ultimate expression of that. And I think the bounds of that kind of system, what it can do, is a very interesting question, and it directly speaks to the P = NP question.
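(As an editorial aside: the paradigm described above, a learned model of a domain's structure making an otherwise intractable search efficient, can be sketched in a few lines. The snippet below is purely illustrative, assuming a toy bit-string "domain" and a stand-in scoring function in place of a real policy/value network; it is not DeepMind code.)

```python
# Illustrative sketch only: a "learned model" guiding search through a
# combinatorial space. The target and scoring function are toys standing
# in for a trained policy/value network over Go moves or protein shapes.

N = 40                              # 2**40 states: hopeless to enumerate
TARGET = [i % 2 for i in range(N)]  # pretend ground truth "found by nature"

def learned_score(prefix):
    """Stand-in for a neural model scoring a partial solution.

    In the AlphaGo/AlphaFold paradigm this is where the learned structure
    of the domain lives; here it is a noiseless heuristic so the example
    runs self-contained.
    """
    return sum(1 for a, b in zip(prefix, TARGET) if a == b)

def guided_search(beam_width=4):
    """Beam search guided by the learned score.

    Cost is roughly 2 * beam_width * N model evaluations (polynomial in N),
    versus 2**N for brute-force enumeration of the whole space.
    """
    beam = [[]]
    for _ in range(N):
        candidates = [prefix + [bit] for prefix in beam for bit in (0, 1)]
        candidates.sort(key=learned_score, reverse=True)
        beam = candidates[:beam_width]  # the model prunes the search tree
    return beam[0]

if __name__ == "__main__":
    print("found target:", guided_search() == TARGET)  # True, in ~320 evals
```

The point of the sketch is the asymmetry: enumerating the space costs 2^N evaluations, while a model good enough to rank partial solutions reduces that to a small multiple of N, which is the "polynomial time on a classical system" idea in the quote above.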
What do you think, again hypothetically, might be outside of this? Maybe emergent phenomena? If you look at cellular automata, you have extremely simple systems and then some complexity emerges. Maybe that would be outside; or would you guess even that might be amenable to efficient modeling by a classical machine? Yeah, I think those systems would be right on the boundary. I think most emergent systems, cellular automata, things like that, could be modelable by a classical system. You just do a forward simulation of it, and it would probably be efficient enough. Of course, there's the question of things like chaotic systems, where the initial conditions really matter and then you get to some uncorrelated end state; those could be difficult to model. So I think these are kind of the open questions. But when you step back and look at what we've done with these systems and the problems that we've solved, and then you look at things like Veo 3 on video generation, sort of rendering physics and lighting, really core fundamental things in physics, it's pretty interesting. I think it's telling us something quite fundamental about how the universe is structured, in my opinion. So in a way, that's what I want to build AGI for: to help us as scientists answer these questions, like P = NP.

Yeah, I think we might be continuously surprised about what is modelable by classical computers. I mean, AlphaFold 3, on the interaction side, it's surprising that you can make any kind of progress in that direction. AlphaGenome, it's surprising that you can map the genetic code to function, kind of playing with the emergent phenomena. You'd think there are so many combinatorial options, and then, here you go, you can find the kernel that is efficiently modeled. Yes, because there's some structure, some landscape, in the energy landscape or whatever it is, some gradient you can follow. And of course, what neural networks are very good at is following gradients. So if there's one to follow, and you can specify the objective function correctly, you don't have to deal with all that complexity, which is how I think we maybe naively thought about it for decades: those problems, if you just enumerate all the possibilities, look totally intractable. And there are many, many problems like that. You think, well, it's something like 10^300 possible protein structures, 10^170 possible Go positions; all of these are way more than atoms in the universe. So how could one possibly find the right solution or predict the next step? But it turns out that it is possible. And of course, reality, nature, does do it, right? Proteins do fold. So that gives you confidence that there must be a way: if we understood how physics was doing that, in a sense, and we could mimic that process, model that process, it should be possible on our classical systems. That is basically what the conjecture is about.

And of course there are nonlinear dynamical systems, highly nonlinear dynamical systems, everything involving... Yes. Right. You know, I recently had a conversation with Terence Tao, who mathematically contends with a very difficult aspect of systems that have singularities in them that break the mathematics, and it's just hard for us humans to make any kind of clean predictions about highly nonlinear dynamical systems.
But again, to your point, we might be very surprised what classical learning systems might be able to do about even fluids. Yes, exactly. I mean, fluid dynamics, Navier-Stokes equations, these are traditionally thought of as very, very difficult, intractable problems to do on classical systems; they take enormous amounts of compute. But again, if you look at something like Veo, our video generation model, it can model liquids quite well, surprisingly well, and materials, specular lighting. I love the ones where people have generated videos of clear liquids going through hydraulic presses and then being squeezed out. I used to write physics engines and graphics engines in my early days in gaming, and I know it's just so painstakingly hard to build programs that can do that. And yet somehow these systems are reverse engineering it from just watching YouTube videos. So presumably what's happening is it's extracting some underlying structure around how these materials behave. So perhaps there is some kind of lower-dimensional manifold that can be learned, if we actually fully understood what's going on under the hood. That's maybe true of most of reality.

Yeah, I've been continuously surprised precisely by this aspect of Veo 3. A lot of people highlight different aspects, including the comedic and the memes and all that kind of stuff, and then the ultra-realistic ability to capture humans in a really nice way that's compelling and feels close to reality, and then combining that with native audio. All of those are marvelous things about Veo 3. But exactly the thing you're mentioning, the physics: it's not perfect, but it's pretty damn good. And then the really interesting scientific question is, what is it understanding about our world in order to be able to do that? Because the cynical take with diffusion models is that there's no way it understands anything, but I don't think you can generate that kind of video without understanding, and then our own philosophical notion of what it means to understand is brought to the surface. To what degree do you think Veo 3 understands our world? I think to the extent that it can predict the next frames in a coherent way, that is a form of understanding. Not in the anthropomorphic sense; it's not some kind of deep philosophical understanding of what's going on, I don't think these systems have that. But they have certainly modeled enough of the dynamics, put it that way, that they can pretty accurately generate, whatever it is, eight seconds of consistent video that by eye, at least at a glance, is quite hard to fault. And imagine that in two or three more years' time. That's the thing I'm thinking about, and how incredible that will look given where we've come from, the early versions of one or two years ago. The rate of progress is incredible. And I'm like you: a lot of people love all the stand-up comedians, and that actually captures a lot of human dynamics and body language very well, but actually the thing I'm most impressed with and fascinated by is the physics behavior, the lighting and materials and liquids. It's pretty amazing that it can do that. And I think that shows that it has some notion of at least intuitive physics, right?
How things are supposed to work, intuitively, maybe the way that a human child would understand physics, as opposed to a PhD student being able to unpack all the equations. It's more of an intuitive physics understanding.

Well, that intuitive physics understanding, that's the base layer, the thing people sometimes call common sense. It really understands something, and I think that really surprised a lot of people. It blows my mind; I just didn't think it would be possible to generate that level of realism without understanding. There's this notion that you can only understand the physical world by having an embodied AI system, a robot that interacts with that world, that that's the only way to construct an understanding of the world. But Veo 3 is directly challenging that, it feels like. Yes, and it's very interesting. If you were to ask me 5, 10 years ago, even though I was immersed in all of this, I would have said, well, you probably need to understand intuitive physics, like: if I push this glass off the table, it will maybe shatter and the liquid will spill out. We know all of these things. And there are a lot of theories in neuroscience, it's called action in perception, where you need to act in the world to really, truly perceive it in a deep way. And there were a lot of theories that you need embodied intelligence, or robotics, or maybe at least simulated action, so that you would understand things like intuitive physics. But it seems you can understand it through passive observation, which is pretty surprising to me, and again, I think, hints at something underlying about the nature of reality, in my opinion, beyond just the cool videos that it generates. And of course, the next stage is maybe even making those videos interactive, so one can actually step into them and move around in them, which would be really mind-blowing, especially given my games background. And then I think we're starting to get towards what I would call a world model: a model of how the world works, the mechanics of the world, the physics of the world, and the things in that world. And of course, that's what you would need for a true AGI system.

I have to talk to you about video games. So, you were being a bit trolly, I think; you're having more and more fun on Twitter, on X, which is great to see. A guy named Jimmy Apples tweeted, "Let me play a video game of my Veo 3 videos already. Google cooked so good. Playable world models wen (spelled w-e-n)?" And then you quote-tweeted that with, "Now, wouldn't that be something?" So how hard is it to build game worlds with AI? Maybe can you look out into the future of video games, 5, 10 years out? What do you think that looks like? Well, games were my first love, really, and doing AI for games was the first thing I did professionally in my teenage years; those were the first major AI systems that I built. And I always want to scratch that itch one day and come back to that, and I will do, I think. I sort of dream about what I would have done back in the '90s if I'd had access to the kind of AI systems we have today. And I think you could build absolutely mind-blowing games.
And I think the next stage... I always used to love making open-world games; all the games I've made are open-world games. They're games where there's a simulation, there are AI characters, and the player interacts with that simulation, and the simulation adapts to the way the player plays. I always thought they were the coolest games because, in games like Theme Park that I worked on, everybody's game experience would be unique to them, right? Because you're kind of co-creating the game. We set up the parameters, we set up the initial conditions, and then you as the player immerse yourself in it, and you are co-creating it with the simulation. But of course, it's very hard to program open-world games. You've got to be able to create content whichever direction the player goes in, and you want it to be compelling no matter what the player chooses. And so it was always quite difficult; we built things like cellular automata, actually, those kinds of classical systems, which created some emergent behavior, but they were always a little bit fragile, a little bit limited. Now we're maybe on the cusp, in the next few years, 5 to 10 years, of having AI systems that can truly create around your imagination, that can dynamically change the story, narrate it around whatever you choose, and make it dramatic no matter what you end up choosing. So it's like the ultimate choose-your-own-adventure sort of game. And I think maybe we're within reach: think of a kind of interactive version of Veo, then wind that forward 5 to 10 years and imagine how good it's going to be.

Yeah, you said a lot of super interesting stuff there. One: built into the open world, the way you've described it, is a deep personalization. So it's not just that it's open world, like you can open any door and there'll be something there; it's that the choice of which door you open, in an unconstrained way, defines the worlds you see. Some games try to do that, to give you choice. Yes. But it's really just an illusion of choice. Like The Stanley Parable, a game I recently played: there are a couple of doors, but it really just takes you down a narrative. The Stanley Parable is a great video game, I recommend people play it, that in a meta way mocks the illusion of choice, and raises philosophical notions of free will and so on. But one of my favorites of the Elder Scrolls games is Daggerfall. I believe they really played with random generation of the dungeons. Yeah. You can step in, and it gives you this feeling of an open world. And there, you mentioned interactivity; you don't need to interact that much. That's a first step, because when you open the door, whatever you see is randomly generated for you. Yeah. And that's already an incredible experience, because you might be the only person to ever see that. Yeah, exactly. But what you'd like is a little bit better than just random generation, right? And also better than a simple A/B hardcoded choice. That's not really open world, right? As you say, it's just giving you the illusion of choice. What you want to be able to do is potentially anything in that game environment. And I think the only way you can do that is to have generative systems, systems that will generate that on the fly.
Of course, you can't create infinite amounts of game assets; it's expensive enough already how AAA games are made today. And that was obvious to us back in the '90s when I was working on all these games. I think Black & White, the game whose early stages I worked on, had what is still probably the best learning AI in a game. It was an early reinforcement learning system where you were looking after this mythical creature, growing it and nurturing it, and depending on how you treated it, it would treat the villagers in that world in the same way. If you were mean to it, it would be mean; if you were good, it would be protective. It was really a reflection of the way you played it. So actually, I've been working on simulations and AI through the medium of games from the beginning of my career, and really the whole of what I do today is still a follow-on from those early, more hardcoded ways of doing AI, to now fully general learning systems that are trying to achieve the same thing.

Yeah, it's been interesting, hilarious, and fun to watch you and Elon, obviously itching to create games, because you're both gamers. And one of the sad aspects of your incredible success in so many domains of science, serious adult stuff, is that you might not have time to really create a game. You might end up creating the tooling with which others create the game; you have to watch others create the thing you've always dreamed of. Do you think it's possible that you can somehow, in your extremely busy schedule, actually find time to create something like Black & White, an actual video game, where you could make the childhood dream become reality? You know, there are two ways to think about that. One is, maybe with vibe coding as it gets better, there's a possibility that one could do that in one's spare time. So I'm quite excited about that; that would be my project if I got the time to do some vibe coding. I'm actually itching to do it. And then the other thing is, maybe it's a sabbatical after AGI has been safely stewarded into the world and delivered into the world: that, and then working on my physics theory, as we talked about at the beginning. Those would be my two post-AGI projects, let's call it that. I would love to see which you choose post-AGI: solving the problem that some of the smartest people in human history contended with, P = NP, or creating a cool video game. Yeah. Well, in my world they'd be related, because it would be an open-world simulated game, as realistic as possible. So, what is the universe? That's speaking to the same question, right, P = NP? I think all these things are related, at least in my mind.

I mean that in a really serious way: video games are sometimes looked down upon as just a fun side activity, but especially as AI does more and more of the difficult, boring tasks, something we in the modern world call work, video games may be the thing in which we find meaning, what to do with our time. You could create incredibly rich, meaningful experiences; that's what human life is. And then in video games, you can create more sophisticated, more diverse ways of living, right? I think so.
I mean, for those of us who love games, and I still do, you can almost let your imagination run wild, right? I used to love games, and working on games, so much because it's a fusion. Especially in the '90s and early 2000s, the sort of golden era of the games industry, maybe the '80s too, it was all being discovered; new genres were being discovered. We weren't just making games; we felt we were creating a new entertainment medium that had never existed before, especially with these open-world games and simulation games where you as the player were co-creating the story. There's no other entertainment medium where you do that, where you as the audience actually co-create the story. And of course, now with multiplayer games as well, it can be a very social activity, and you can explore all kinds of interesting worlds in that. But on the other hand, it's very important to also enjoy and experience the physical world. The question is, then, I think we're going to have to confront again the question of what is the fundamental nature of reality, and what is going to be the difference between these increasingly realistic simulations, multiplayer and emergent ones, and what we do in the real world. Yeah, there's clearly a huge amount of value to experiencing the real world, nature. There's also a huge amount of value in experiencing other humans directly, in person, the way we're sitting here today. But we need to really scientifically, rigorously answer the question why. Yeah. And which aspects of that can be mapped into the virtual world. Exactly. It's not enough to say, yeah, you should go touch grass and hang out in nature. It's like, why exactly is that valuable? Yes. And I guess that's maybe the thing that's been haunting me, obsessing me, from the beginning of my career. If you think about all the different things I've done, they're all related in that way: the simulation, the nature of reality, and what are the bounds of what can be modeled.

Sorry for the ridiculous question, but so far, what is the greatest video game of all time? What's up there? Well, my favorite of all time is Civilization, I have to say. Civilization 1 and Civilization 2, my favorite games of all time. I can only assume you've avoided the most recent one, because that would probably be your sabbatical; you would disappear. Yes, exactly. They take a lot of time, these Civilization games, so I've got to be careful with them. Fun question: you and Elon seem to be somehow solid gamers. Is there a connection between being great at gaming and being a great leader of an AI company? I don't know; it's an interesting one. I mean, we both love games, and it's interesting, he wrote games as well to start off with. It's probably especially the era I grew up in, when home computers had just become a thing, in the late '80s and '90s, especially in the UK. I had a Spectrum and then a Commodore Amiga 500, which is my favorite computer ever, and that's where I learned all my programming. And of course, a very fun thing to program is games. So I think it's a great way to learn programming, probably still is. And then, of course, I immediately took it in the direction of AI and simulations, so I was able to express my interest in games and my wider scientific interests all together.
And then the final thing I think is great about games is that they fuse artistic design, art, with the most cutting-edge programming. Again, in the '90s, all of the most interesting technical advances were happening in gaming, whether that was AI, graphics, physics engines, hardware; even GPUs, of course, were designed for gaming originally. So everything that was pushing computing forward in the '90s was due to gaming. Interestingly, that was where the forefront of research was going on, and it was this incredible fusion with art, graphics but also music, and just the whole new medium of storytelling. And I love that. For me, that sort of multidisciplinary effort is again something I've enjoyed my whole life.

I have to ask you about one of the many, and I would say one of the most incredible, recent things that somehow hasn't yet gotten enough attention: AlphaEvolve. We talked about evolution a little bit, but it's the Google DeepMind system that evolves algorithms. Are these kinds of evolution-like techniques promising as a component of a future superintelligence system? For people who don't know, it's, I don't know if it's fair to say, LLM-guided evolutionary search: evolutionary algorithms are doing the search, and LLMs are telling you where. Yes, exactly. So LLMs are proposing some possible solutions, and then you use evolutionary computing on top to find some novel part of the search space. So actually, I think it's an example of a very promising direction where you combine LLMs, or foundation models, with other computational techniques. Evolutionary methods is one, but you could also imagine Monte Carlo tree search, basically many types of search algorithms or reasoning algorithms, on top of, or using, the foundation models as a basis. So I actually think there are quite a lot of interesting things to be discovered with these hybrid systems, let's call them.

Not to romanticize evolution, I'm only human, but do you think there's some value in whatever that mechanism is? Because we already talked about natural systems. Do you think there's a lot of low-hanging fruit in understanding, being able to model, being able to simulate evolution, and then using whatever we learn about that nature-inspired mechanism to do search better and better? Yes. So if you break down the sort of systems we've built to their really fundamental core, you've got the model of the underlying dynamics of the system, and then, if you want to discover something new, something novel that hasn't been seen before, you need some kind of search process on top to take you to a novel region of the search space. And you can do that in a number of ways. Evolutionary computing is one.
With AlphaGo, we just used Monte Carlo tree search, and that's what found Move 37, the new, never-seen-before strategy in Go. And so that's how you can go beyond potentially what is already known. The model can model everything that you currently know about, all the data that you currently have, but then how do you go beyond that? That starts to speak to the ideas of creativity: how can these systems create something new, discover something new? Obviously this is super relevant for scientific discovery, for pushing science and medicine forward, which is what we want to do with these systems. And you can actually bolt some fairly simple search systems on top of these models and get into a new region of the space. Of course, you also have to make sure that you're not searching that space totally randomly; it would be too big. So you have to have some objective function that you're trying to optimize and hill-climb towards, and that guides the search.

But there are some mechanisms of evolution that are interesting, maybe, in the space of programs. And the space of programs is an extremely important space, because you can probably generalize to everything. For example, mutation. So it's not just Monte Carlo tree search, where it's purely a search; you could, every once in a while, combine things. Yeah, combine things, alter components of a thing. Yes. What evolution is really good at is not just natural selection; it's combining things and building increasingly complex, hierarchical systems. So that component is super interesting, especially with AlphaEvolve in the space of programs. Yeah, exactly. You can get an extra property out of evolutionary systems, which is that some new emergent capability may come about, like happened with life. Interestingly, with naive, traditional evolutionary computing methods, without LLMs and modern AI, which were very well studied in the '90s and early 2000s with some promising results, the problem was that they could never work out how to evolve new properties, new emergent properties. You always ended up with a subset of the properties that you put into the system. But maybe if we combine them with these foundation models, perhaps we can overcome that limitation. Obviously, natural evolution clearly did, because it did evolve new capabilities, right, from bacteria to where we are now. So clearly it must be possible for evolutionary systems to generate new patterns, going back to the first thing we talked about, and new capabilities and emergent properties, and maybe we're on the cusp of discovering how to do that.

Yeah, listen, AlphaEvolve is one of the coolest things I've ever seen. On my desk at home, you know, most of my time is spent behind computers, just programming, and next to the three screens is a skull of a Tiktaalik, one of the early organisms that crawled out of the water onto land. And I just kind of watch that little guy. Whatever the computational mechanism of evolution is, it's quite incredible, truly incredible. Now, whether that's exactly the thing we need for our search, who knows, but never dismiss the power of nature and what it did here. Yeah.
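(Editorial aside: the hybrid loop described here can be made concrete with a small sketch. Below, the candidates are plain coefficient lists rather than programs, and `llm_propose` is a random-tweak stand-in for what, in a real AlphaEvolve-style system, would be an LLM proposing code edits; the selection and crossover steps are the evolutionary layer on top. All of it is a hypothetical illustration, not the actual system.)

```python
# Hypothetical AlphaEvolve-style outer loop: an objective function,
# selection, recombination, and a mutation operator that a real system
# would implement with an LLM proposing program edits.
import random

TARGET = 42.0

def fitness(candidate):
    """Objective function guiding the search (lower is better)."""
    return abs(sum(x * x for x in candidate) - TARGET)

def llm_propose(candidate):
    """Stand-in for an LLM proposing a modification to a candidate.
    In a real system this would be a semantically meaningful code edit,
    not a random numeric tweak."""
    mutant = list(candidate)
    mutant[random.randrange(len(mutant))] += random.uniform(-1, 1)
    return mutant

def crossover(a, b):
    """Recombination: the step that lets evolution assemble increasingly
    complex solutions out of existing working pieces."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200, genome_len=4):
    population = [[random.uniform(0, 5) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 3]            # selection
        children = [llm_propose(random.choice(survivors))  # "LLM" mutation
                    for _ in range(pop_size // 3)]
        mixed = [crossover(random.choice(survivors), random.choice(survivors))
                 for _ in range(pop_size - len(survivors) - len(children))]
        population = survivors + children + mixed
    return min(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best:", [round(x, 2) for x in best], "error:", round(fitness(best), 4))
```

The design point the conversation makes is visible in the loop's shape: the proposer supplies semantically plausible variations, while selection against the objective function, plus recombination, is what pushes the population into genuinely novel regions of the search space.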
And it's amazing um which is a relatively simple algorithm right effectively and it can generate all of this immense complexity emerges obviously running over you know 4 billion years of time but but it's it's it's you know you can think about that as again a pro a search process that ran over the physics substrate of the universe for a long amount of computational time but then it generated all this incredible uh rich diversity. So uh so many questions I want to ask you. But one, you do have a dream. One of the natural systems you want to uh try to model is a is a cell. Yes, that's a beautiful dream. Uh I could ask you about that. I also just for that purpose on the AI scientist front just broadly. So there's a essay uh from Daniel Cocatalio, Scott Alexander, and others that outlines steps along the way to get to ASI and has a lot of interesting ideas in it. one of which is uh including a superhuman coder and a superhuman AI researcher and in that there's a term of research taste that's really interesting. So in everything you've seen, do you think it's possible for AI systems to have research taste to help you in the way that AI co-scientist does to help steer human um human brilliant scientists and then potentially by itself to figure out what are the directions where you want to generate truly novel ideas because that seems to be like a really important component how to do great science. Yeah, I think that's going to be one of the hardest things to to uh mimic or model is is this this idea of taste or or judgment. I think that's what separates the you know the the great scientists from the good scientists like all all professional scientists are good technically right otherwise they wouldn't have made it that far in in academia and things like that but then do you have the taste to sort of sniff out what the right direction is what the right experiment is what the right question is. So the it's the it's picking the right question is is the hardest part of science. Um and and making the right hypothesis and um that's what you know today's systems definitely they can't do. So you know I often say it's harder to come up with a conjecture a really good conjecture than it is to solve it. So we may have systems soon that can solve pretty hard conjectures. um you know I I um mass Olympiad problems where we we you know alpha proof last year our system got you know silver medal in that really hard problems maybe eventually we'll be able to solve a millennium prize kind of problem but could a system have come up with a conjecture worthy of study that someone like Terren Tower would have gone you know what that's a really deep question about the nature of maths or the nature of numbers or the nature of physics and that is far harder type of creativity and we don't really Oh, systems clearly can't do that and we're not quite sure what that mechanism would be. This kind of leap of imagination like like Einstein had when he came up with, you know, special relativity and then general relativity with the knowledge he had at the time. As for conjecture, the you want to come up with a thing that's interesting and amenable to proof. So like it's easy to come up with a thing that's extremely difficult. It's easy to come up with a thing that's extremely easy. at that at that very edge, that sweet spot, right, of of basically advancing the science and splitting the hypothesis space into two ideally, right? 
Whether if it's true or not true, you you've learned something really useful and um and and that's hard and and and and making something that's also uh you know falsifiable and within sort of the technologies that you have you currently have available. So it's a very creative process actually highly creative process that um I think just a kind of naive search on top of a model won't be enough for that. Okay. The idea of splitting the hypothesis space in two is super interesting. So uh I've heard you say that there's basically no failure in or failure is extremely valuable if it's done if you construct the questions right if you construct the experiments right if you design them right that failure success are both useful. So perhaps because it splits the hypothesis basically two, it's like a binary search. That's right. So when you do like, you know, real blue sky research, there's no such thing as failure really as long as you're picking experiments and hypotheses that that that that meaningfully spit the hypothesis space. So you know, and you learn something, you can learn something kind of equally valuable from an experiment that doesn't work. That should tell you, if you've designed the experiment well and your hypothesis are interesting, it should tell you a lot about where to go next. and um and then it's you're effectively doing a search process um and using that information in in you know very helpful ways. So to go to your dream of uh modeling a cell uh what are the big challenges that lay ahead for us to make that happen? We should maybe highlight that alpha I mean there's just so many leaps. So AlphaFold solved if it's fair to say protein folding and there's so many incredible things we could talk about there including the open sourcing uh the everything you've released. Alpha Fold 3 is doing protein, RNA, DNA interactions, which is super complicated and and fascinating. That's amendable to modeling. Alpha genome uh predicts uh how small genetic changes like if we think about single mutations, how they link to actual uh function. So um those are it seems like it's creeping along to sophistic to to much more complicated u things like a cell but a cell has a lot of really complicated components. Yeah. So what I've tried to do throughout my career is I have these really grand dreams and then I try to as you've noticed and then I try to break but I try to break them down any you know it's easy to have a kind of a crazy ambitious dream but the the the trick is how do you break it down into manageable achievable uh interim steps that are meaningful and useful in their own right and so virtual cell which is what I call the project of modeling a cell I've had this idea you know of wanting to do that for maybe more like 25 is and I used to talk with Paul Nurse who is a bit of a mentor of mine in biology. He runs the the you know founded the Craig Institute and and won the Nobel Prize in in 2001. uh is is we've been talking about it since you know before the you know in the '90s and um and I come used to come back to every 5 years is like what would you need to model the full internals of a cell so that you could do experiments on the virtual cell and what those experiment you know in silicone and those predictions would be useful for you to save you a lot of time in the wet lab right that would be the dream maybe you could 100x speed up experiments by doing most of it in silicone the search in silicico and then you do the validation step in the wet lab. 
That would be that's the that's the dream. And so u but maybe now finally uh so I was trying to build these components alpha fold being one that that would allow you eventually to model the full interaction a full simulation of a cell and I'd probably start with a yeast cell and partly that's what Paul nurse studied because a yeast cell is like a full organism that's a single cell right so it's the kind of simplest single cell organism and so it's not just a cell it's a full organism and um and yeast is very well understood And so that would be a good candidate for uh a a kind of full simulated model. Now alpha fold is the is the solution to the kind of static picture of what does a what does a protein look 3D structure protein look like a static picture of it. But we know that biology all the interesting things happen with the dynamics the interactions and that's what alpha 3 is is the first step towards is modeling those interactions. So first of all pairwise you know proteins with proteins proteins with RNA and DNA but then um the next step after that would be modeling maybe a whole pathway maybe like the to pathway that's involved in cancer or something like this and then eventually you might be able to model you know a whole cell also there's another complexity here that stuff in a cell happens at different time scales is that tricky like there you know protein uh folding is you know super fast yes um I don't know all the bi ological mechanisms, but some of them take a long time. And so is that that's an level. So the levels of interaction has a different temporal scale that you have to be able to model. So that would be hard. So you'd probably need several simulated systems that can interact at these different temporal dynamics or at least maybe it's like a hierarchical system. So um you can jump up and down the the different temporal stages. So can you avoid I mean one of the challenges here is not avoid simulating for example the the the quantum mechanical aspects of any of this right you want to not overm model you can skip ahead to just model the really highlevel things that get you a really good estimate of what's going to happen so you you got to make a decision when you're modeling any natural system what is the cutoff level of the granularity that you're going to model it to that then captures the dynamics that you're interested in. So probably for a cell I would hope that would be the protein level uh and that one wouldn't have to go down to the atomic level. Um so you know of course that's where alpha volt stock kicks in. So that would be kind of the basis and then you'd build these um uh higher level simulations that um take those as building blocks and then you get the emergent behavior. Apologize for the pthead questions ahead of time, but uh will do you think uh we'll be able to simulate and model the origin of life. So being able to simulate the first from from non-living organisms the the birth of a living organism. I think that's a one of the of course one of the deepest and most fascinating questions. Um I love that area of biology. you know, uh, people like there's a great book by Nick Lane, one of the top top experts in this area called the the 10 great inventions of of of evolution. I think it's fantastic and it also speaks to what the great filters might be, you know, prior or are they ahead of us. 
I think I think they're most likely in the past if you read that book of how unlikely to go, you know, have any life at all and then single cell to multisell seems an unbelievably big jump that took like a billion years, I think, on Earth to do, right? So it shows you how hard it was, right? Bacteria were super happy for a very long time, a very long time before they captured mitochondria somehow, right? I don't see why not why AI couldn't help with that some kind of simulation. Again, it's again, it's a bit of a search process through a combinatorial space. Here's like all the chem, you know, the chemical soup that that you start with, the primordial soup that, you know, maybe was on Earth near these hot vents. Here's some initial conditions. Can you uh generate something that looks like a cell? So perhaps that would be a next stage after the virtual cell project is well how how could you actually um something like that emerge from the chemical soup? Well, I would love it if there was a move 37 for the origin of life. Yeah, I think that's one of the sort of great mysteries. I think ultimately what we will figure out is their continuum. There's no such thing as a line between non-living and living. But if we can make that rigorous Yes. that that the very thing from the be big bang to today has been the same process. If we can break down that wall that we've constructed in our minds of the actual origin of from non-living to living and it's not a line that it's a continuum that connects physics and chemistry and biology. There's no line. I mean this is my whole reason why I've worked on AI and AGI my whole life because I think it can be the ultimate tool to help us answer these kind of questions. And I don't really understand why um you know the average person doesn't think like worry about this stuff more like how how can we not have a good definition of life and not and not living and non-living and the nature of time and let alone consciousness and gravity and all these things. It's it's just and quantum mechanics weirdness. It's just to me it's I've always had this sort of screaming at me in my face the whole and that it's getting louder you It's like how what is going on here? You know, in in I mean that in the deepest sense like in the you know the nature of reality which has to be the ultimate question uh that would answer all of these things. It's sort of crazy if you think about we can stare at each other and all these living things all the time. We can inspect it with microscopes and take it apart uh almost down to the atomic level and yet we still can't answer that clearly in a simple way that question of how do you define living? it's kind of amazing. Yeah, living you can kind of talk your way out of thinking about but like consciousness like we have this very obviously subjective conscious experience like we're at the center of our own world and it it feels like something and then h how how are you not screaming at the mystery of it all I mean but really humans have been contending with the mystery of the world around them for long there's a lot of mysteries like what's up with the sun and and the rain, like what's that about? And then like last year we had a lot of rain and this year we don't have rain. Like what did we do wrong? Humans have been asking that question for a long time. Exactly. 
So we're quite I guess we've developed a lot of mechanisms to cope with this these deep mysteries that we can't fully we can see but we can't fully understand and we have to have to just get on with daily life and and and we get we keep ourselves busy right in a way. Do we keep ourselves distracted? I mean weather is one of the most important questions of human history. We still that's that's the go-to small talk direction of of the weather especially in England and then it's which is you know famously is an extremely difficult system to model and uh even that system uh Google deep mind has made progress on. Yes, we've yeah, we've created the the best weather prediction systems in the world and they're better than traditional fluid dynamics sort of systems that usually calculated on massive supercomputers takes days to calculate it. And we've managed to model a lot of the weather dynamics with neural network systems with our weather next system. And again, it's interesting that those kinds of dynamics can be modeled even though they're very complicated, almost bordering on chaotic systems in some cases. A lot of the interesting aspects of that um can be modeled by these neural network systems, including very recently we had, you know, cyclone prediction of where, you know, paths of hurricanes might go. of course super useful super important for the world and and and it's super important to do that very timely and very quickly and as well as accurately and uh I think it's very promising direction again of you know simulating and uh uh so that you can run forward predictions and simulations of very complicated real world systems. I should mention that uh I've got a chance in uh Texas to meet a community of folks called the stormchasers. Yes. And what's really incredible about them, I need to talk to them more, is they're extremely tech-savvy because what they have to do is they have to use models to predict where the storm is. So they're it's just it's it's this beautiful mix of like crazy enough to like go into the eye of the storm and like in order to protect your life and predict where the extreme events are going to be, they have to have increasingly sophisticated models of of weather. Yeah. It's it's a a beautiful balance of like being in it as living organisms and the the cutting edge of science. So they actually might be using uh deep mind system. So that's Yeah, they hopefully they are and I I'd love to join them on one of those chases. They look amazing, right? To actually experience it one time. Exactly. And then also to experience the correct prediction where something will come and how it's going to evolve. It's incredible. You've estimated that we'll have AGI by 2030. Um so there's interesting questions around that. How will we actually know that we got there? Uh and uh what maybe the move quote move 37 of AGI. My estimate is sort of 50% chance by in the next 5 years. So you know by 2030 let's say and uh so I think there's a good chance that that could happen. Part of it is what what is your definition of AGI? Of course, people are arguing about that now and and uh mine's quite a high bar and always has been of like can we match the cognitive functions that the brain has, right? So, we know our brains are pretty much general cheuring machines approximate. And of course, we've created incredible modern civilization with our minds. So, that also speaks to how general the brain is. 
You've estimated that we'll have AGI by 2030. So there are interesting questions around that: how will we actually know that we've got there, and what might be the quote-unquote move 37 of AGI?

My estimate is a 50% chance in the next five years, so by 2030, let's say, and I think there's a good chance that could happen. Part of it is what your definition of AGI is. Of course, people are arguing about that now, and mine is quite a high bar and always has been: can we match the cognitive functions that the brain has? We know our brains are pretty much general Turing machines, approximately, and we've created incredible modern civilization with our minds, so that also speaks to how general the brain is. For us to know we have a true AGI, we would have to make sure it has all those capabilities. It can't be a kind of jagged intelligence, like today's systems, really good at some things but really flawed at others; they're not consistent. You'd want that consistency of intelligence across the board. And then there are some missing capabilities, I think, like the true invention and creativity we were talking about earlier, and you'd want to see those. As for how you test it: one way would be a kind of brute-force test over tens of thousands of cognitive tasks that we know humans can do. You could also make the system available to a few hundred of the world's top experts, the Terence Taos of each subject area, give them a month or two, and see if they can find an obvious flaw in the system. If they can't, then I think you can be pretty confident we have a fully general system.

Maybe to push back a little bit: as the intelligence improves across all domains, humans seem really incredible at taking it for granted. Those brilliant experts, like Terence Tao, might within a span of weeks take for granted all the incredible things it can do and then zero in on the one flaw: aha, right there. I consider myself, first of all, human; second, I identify as human; and some people listen to me talk and think, that guy is not good at talking, the stuttering and so on. Even humans have obvious limits across domains, even outside mathematics and physics. So I wonder if it will take something like a move 37 on the positive side, versus a barrage of 10,000 cognitive tasks where on one or two the reaction is: yes, holy..., this is it.

Exactly. So I think there's the blanket testing just to make sure you've got the consistency, but there are also the lighthouse moments, like move 37, that I would be looking for. One would be inventing a new conjecture or a new hypothesis about physics the way Einstein did. You could even run a rigorous back-test of that: impose a knowledge cutoff of 1900, give the system everything that was written up to 1900, and see if it could come up with special and general relativity the way Einstein did. That would be an interesting test. Another would be: can it invent a game like Go? Not just come up with move 37, a new strategy, but invent a game that's as deep, as aesthetically beautiful, as elegant as Go. Those are the sorts of things I would be looking out for, and probably a system being able to do several of those things, not just one domain, for it to count as truly general. I think those would be the signs, at least the ones I'd look for, that we've got a system at AGI level. And then, to fill that out, you would also check the consistency and make sure there are no holes in the system either.
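Purely as an illustration of what the "blanket testing" half of this could look like in practice, here is a minimal sketch: run a large battery of tasks grouped by cognitive domain, then flag jaggedness as the spread between the strongest and weakest domains. The task format, the `ask_model` stub, and the jaggedness measure are all invented for the example, not an actual evaluation protocol.

```python
from collections import defaultdict
from statistics import mean

def ask_model(prompt: str) -> str:
    """Stub for querying the system under test (hypothetical)."""
    raise NotImplementedError

def evaluate(tasks: list[dict]) -> dict[str, float]:
    """Score a battery of tasks, grouped by cognitive domain.

    Each task is {'domain': ..., 'prompt': ..., 'check': callable},
    where 'check' returns True if the answer is acceptable.
    Returns mean accuracy per domain.
    """
    scores = defaultdict(list)
    for task in tasks:
        answer = ask_model(task["prompt"])
        scores[task["domain"]].append(float(task["check"](answer)))
    return {domain: mean(vals) for domain, vals in scores.items()}

def jaggedness(per_domain: dict[str, float]) -> float:
    """Gap between best and worst domain: 0 means fully consistent."""
    return max(per_domain.values()) - min(per_domain.values())
```

A consistent system would keep that gap small across all domains, which is exactly the property Hassabis says today's systems lack.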
Yeah, something like a new conjecture or scientific discovery. That would be a cool feeling.

Yeah, that would be amazing. Not just helping us do that, but actually coming up with something brand new.

And you would be in the room for that. It would be, what, probably two or three months before announcing it? You would just be sitting there trying not to tweet something.

Exactly. It's like: what is this amazing new physics idea? Then we would probably check it with the world experts in that domain, validate it, and go through its workings, and I'd guess it would be explaining its workings too. It would be an amazing moment.

Do you worry that we as humans, even expert humans, might miss it? It might be pretty complicated.

It could be. The analogy I'd give is this: I don't think it will be totally mysterious to the best human scientists, but it may be a bit like chess. If I were to play a game with Garry Kasparov or Magnus Carlsen and they made a brilliant move, I might not be able to come up with that move myself, but they could explain afterwards why it made sense, and I would understand it to some degree, not at the level they do, but provided they're good at explaining, which is actually part of intelligence too, being able to explain what you're thinking about in a simple way. I think that would be very possible for the best human scientists.

But, and maybe you can educate me on the Go side, I wonder if there are moves that Magnus or Garry would at first dismiss as bad moves.

Yeah, sure, it could be. But then afterwards they'd figure out with their intuition why it works. And then empirically, this is one of the great things about games, you have a sort of scientific test: do you win the game or not? That tells you, okay, that move was good in the end, that strategy was good. And then you can go back, analyze it, explore around it, and explain to yourself a little more why. That's how chess analysis works. Perhaps that's why my brain works like that, because I've been doing it since I was four; it's hardcore training in that way.

Even now, when I generate code, there's this kind of nuanced, fascinating contention happening, where I might at first judge a piece of generated code to be incorrect in some interesting, nuanced way, but then I always have to ask: is there a deeper insight here, and am I the one who's incorrect? As the systems get more and more intelligent, you're going to have to contend with that: is this a bug or a feature of what it just came up with?

Yeah, and those judgments are going to be pretty complicated to make. But of course you can imagine AI systems producing that code, and then human programmers looking at it, not unaided but with the help of AI tools as well, maybe different AI tools, monitoring tools, from the ones that generated it. So it's going to be interesting.

So if we look at an AGI system, sorry to bring it back up, but Alpha Evolve is super cool. Alpha Evolve enables, on the programming side, something like recursive self-improvement, potentially. Who can imagine what that AGI system, maybe not the first version but a few versions beyond, will look like? Do you think it would be simple, something like a self-improving program, a simple one?
I mean, potentially that's possible. I would say I'm not sure it's even desirable, because that's a kind of hard-takeoff scenario. These current systems like Alpha Evolve have a human in the loop deciding on various things; they're separate hybrid systems that interact. One could imagine eventually doing that end to end, and I don't see why that wouldn't be possible, but right now the systems are not good enough to do it, in terms of coming up with the architecture of the code. And it's a little bit connected to this idea of coming up with a new conjecture or hypothesis. These systems are good if you give them very specific instructions about what you're trying to do, but a very vague, high-level instruction wouldn't work currently. That's related to the "invent a game as good as Go" idea: imagine that were the prompt. It's pretty underspecified, and the current systems wouldn't know what to do with it, how to narrow it down to something tractable. "Just make a better version of yourself" is similarly too unconstrained. But as you know, with Alpha Evolve we've done things like faster matrix multiplication. When you hone it down to a very specific target, it's very good at incrementally improving on it. At the moment, though, these are incremental improvements, small iterations, whereas if you wanted a big leap in understanding, you'd need a much larger advance.

Yeah. But to push back against the hard-takeoff scenario, it could also just be a sequence of incremental improvements, like the matrix multiplication work: it sits there for days figuring out how to incrementally improve a thing, does so recursively, and as it makes more and more improvements it slows down. So the path to AGI would be gradual improvement over time.

Yes, if it were just incremental improvements, that's how it would look. So the question is: could it come up with a new leap like the transformer architecture? Could it have done that back in 2017, when we did it and Brain did it? It's not clear that systems like Alpha Evolve can make that kind of big leap. These systems are good at incremental hill climbing, and that raises the bigger question: is that all that's needed from here, or do we actually need one or two more big breakthroughs, and can the same kind of systems provide those breakthroughs too?

So maybe it's a bunch of S-curves: incremental improvement, but also, every once in a while, leaps.

Yeah, though I don't think anyone has shown systems that can unequivocally make those big leaps. We have a lot of systems that do the hill climbing of the S-curve you're currently on.

And move 37 would be a leap?

Yeah, I think that would be a leap, something like that.
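To make the "incremental hill climbing" point concrete, here is a toy sketch of the general evolve-and-evaluate loop that systems in the Alpha Evolve family are built around: a proposer mutates candidate programs, an automatic evaluator scores them, and the best survive. Everything here, the `propose_mutation` stub, the population size, the fitness function, is a made-up miniature, not the actual Alpha Evolve system, which pairs LLM-generated code changes with much richer evaluation.

```python
def fitness(program: str) -> float:
    """Automatic evaluator: higher is better.

    For a real target like matrix multiplication this would compile
    and benchmark the candidate; here it is left abstract.
    """
    raise NotImplementedError

def propose_mutation(program: str) -> str:
    """Stub for the proposal step; Alpha Evolve uses an LLM here."""
    raise NotImplementedError

def evolve(seed: str, generations: int, population: int = 16) -> str:
    """Greedy evolutionary hill climbing over candidate programs."""
    best, best_score = seed, fitness(seed)
    for _ in range(generations):
        # Generate a batch of variants of the current best candidate.
        candidates = [propose_mutation(best) for _ in range(population)]
        for cand in candidates:
            score = fitness(cand)
            if score > best_score:  # keep strictly better variants
                best, best_score = cand, score
    return best
```

The structure makes the limitation discussed above visible: each accepted change is a small, measurable improvement against a fixed objective, which is exactly hill climbing on one S-curve. Nothing in the loop manufactures a new objective or a new architecture.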
Do you think the scaling laws are holding strong across pre-training, post-training, and test-time compute? And on the flip side, do you anticipate AI progress hitting a wall?

We certainly feel there's a lot more room just in the scaling, actually at all steps: pre-training, post-training, and inference time. There are sort of three scalings happening concurrently. And again, it's about how innovative you can be. We pride ourselves on having the broadest and deepest research bench; we have incredible researchers, people like Noam Shazeer, who came up with transformers, and David Silver, who led the AlphaGo project, and so on. That research base means that if some new breakthrough is required, like an AlphaGo or transformers, I would back us to be the place that does it. So I actually quite like it when the terrain gets harder, because then it veers from just engineering toward true research, or research plus engineering, and that's our sweet spot. And it's harder to invent things than to fast-follow. We don't know; I'd say it's roughly 50/50 whether new things are needed or whether scaling the existing stuff will be enough. So in true empirical fashion, we're pushing both as hard as possible: the new blue-sky ideas, which get maybe half our resources, and scaling the current capabilities to the max. And we're still seeing fantastic progress on each new version of Gemini.

That's interesting, the way you put it in terms of the deep bench: if progress toward AGI is more than just scaling compute, the engineering side of the problem, and is more on the scientific side, where breakthroughs are needed, then you feel confident Google DeepMind is well positioned to kick ass in that domain.

Well, if you look at the history of the last decade or fifteen years, maybe 80 to 90 percent of the breakthroughs that underpin the modern AI field came from what was originally Google Brain, Google Research, and DeepMind. So yes, I would back that to continue, hopefully.

So on the data side, are you concerned about running out of high-quality data, especially high-quality human data?

I'm not very worried about that, partly because I think there's enough data, and it's been proven enough to get the systems to be pretty good. And this goes back to simulations again: do you have enough data to build simulations, so that you can create more synthetic data from the right distribution? Obviously that's the key. You need enough real-world data to be able to create those kinds of data generators, and I think we're at that step at the moment.

Yeah, you've done a lot of incredible work on the science and biology side with not that much data. I mean, it's still a lot of data, but I guess enough to get that going.

Exactly.
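For context on what "the scaling laws holding" means quantitatively, one commonly cited empirical form from the published compute-optimal scaling literature (not from this conversation) models pre-training loss as a function of both parameter count and training data, which is why the data question above matters:

```latex
% Empirical compute-optimal ("Chinchilla"-style) loss form:
% N = parameter count, D = training tokens,
% E = irreducible loss, A, B, \alpha, \beta = fitted constants.
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Post-training and test-time compute do not yet have a single canonical law of this form; the claim in the conversation is just that all three axes still yield returns when scaled.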
So how crucial is the scaling of compute to building AGI? It's an engineering question, but also almost a geopolitical one, because integrated into it are supply chains and energy, something you care a lot about, including potentially fusion, so innovating on the energy side as well. Do you think we're going to keep scaling compute?

I think so, for several reasons. There's the amount of compute you have for training, which often needs to be co-located; even bandwidth constraints between data centers can affect that, so there are additional constraints there. That's important for training, obviously, the largest models you can. But because AI systems are now in products being used by billions of people around the world, you also need a ton of inference compute. And on top of that there are the thinking systems, the new paradigm of the last year, where they get smarter the more inference time you give them at test time. All of those things need a lot of compute, and I don't really see that slowing down. As AI systems become better, they'll become more useful and there'll be more demand for them. So the training side is only one part of it; it may even become the smaller part of the overall compute that's required.
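Here is a minimal sketch of one simple form of that test-time-compute paradigm, purely illustrative: sample several candidate answers and keep the one a scoring function prefers, so answer quality improves as you spend more inference. The `generate` and `score` stubs are hypothetical; production "thinking" systems use more elaborate strategies, such as long chains of thought and search, but the spend-more-get-more shape is the same.

```python
def generate(prompt: str) -> str:
    """Stub: one sampled answer from the model (hypothetical)."""
    raise NotImplementedError

def score(prompt: str, answer: str) -> float:
    """Stub: a verifier or reward model ranking answers."""
    raise NotImplementedError

def best_of_n(prompt: str, n: int) -> str:
    """Spend n model calls at inference time; return the best answer.

    Larger n means more "thinking": more inference compute traded
    for a better final answer.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))
```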
Yeah, there's one almost memey aspect of the success of Veo 3: people joke that the more successful it becomes, the more the servers are sweating.

Yes, exactly. We did a little video of the servers frying eggs. And we're going to have to figure out how to handle that. There are a lot of interesting hardware innovations we're doing; as you know, we have our own TPU line, and we're looking at inference-only chips and how to make those more efficient. We're also very interested in building AI systems, and we have done so, to help with energy usage itself: helping data center energy use, like making the cooling systems efficient, grid optimization, and eventually things like helping with plasma containment in fusion reactors. We've done lots of work on that with Commonwealth Fusion, and one could imagine reactor design too. And then material design, I think, is one of the most exciting areas: new types of solar panel material, room-temperature superconductors, which have always been on my list of dream breakthroughs, and optimal batteries. A solution to any one of those would be absolutely revolutionary for climate and energy usage, and we're probably close, again within the next five years, to having AI systems that can materially help with those problems.

If you were to bet, sorry for the ridiculous question, what is the main source of energy in twenty, thirty, forty years? Do you think it's going to be nuclear fusion?

I think fusion and solar are the two I would bet on. Solar is the fusion reactor in the sky, of course, and I think the real problems there are batteries and transmission, along with more and more efficient solar materials, perhaps eventually in space, these kinds of Dyson-sphere-type ideas. And fusion I think is definitely doable if we have the right design of reactor and we can control the plasma fast enough, and so on. I think both of those will actually get solved, so those will probably be the two primary sources of renewable, clean, almost free, or perhaps free, energy.

What a time to be alive. If I traveled into the future with you, a hundred years from now, how surprised would you be if we'd passed Type I on the Kardashev scale of civilizations?

I would not be that surprised, on a hundred-year timescale from here. I think it's pretty clear that if we crack the energy problem in one of the ways we've just discussed, fusion or very efficient solar, so that energy is essentially free, renewable, and clean, then that solves a whole bunch of other problems. For example, the water access problem goes away, because you can just use desalination. We have the technology; it's just too expensive, so only fairly wealthy countries like Singapore and Israel actually use it. If it were cheap, every country with a coast could. You'd also have unlimited rocket fuel: you can separate seawater into hydrogen and oxygen using energy, and that's rocket fuel. Combined with Elon's amazing self-landing rockets, you could have something like a bus service to space. That opens up incredible new resources and domains; asteroid mining, I think, will become a thing, and maximum human flourishing out to the stars.

That's what I dream about as well: Carl Sagan's idea of bringing consciousness to the universe, waking up the universe. I think human civilization will do that in the fullness of time, if we get AI right and crack some of these problems with it. I wonder what it would look like if you were just a tourist flying through space. You would probably notice Earth, because if you solve the energy problem, you'd see a lot of space rockets; it would be like traffic here in London, but in space. And you'd probably see some source of energy floating in space, like solar.

Potentially. So Earth would look more technological on the surface, and then you would use the power of that energy to preserve the natural world, the rainforests and all of that, because for the first time in human history we wouldn't be resource-constrained. I think that could be an amazing new era for humanity, where things are not zero-sum: it's no longer "I have this land, so you don't," or "if the tigers have their forest, the local villagers can't use it." I think this will help a lot. It won't solve all problems, because other human foibles will still exist, but it will at least remove one of the big vectors of conflict, which is scarcity of resources, including land and materials and energy. I sometimes call it, as others do, a kind of radical abundance era, where there's plenty of resources to go around. But of course the next big question is making sure that abundance is shared fairly and that everyone in society benefits.
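For the seawater-to-rocket-fuel point above, the underlying reaction is ordinary water electrolysis. As a standard textbook figure (not from the conversation), splitting water costs on the order of 286 kJ per mole of hydrogen produced at standard conditions:

```latex
% Electrolysis of water into hydrogen and oxygen (rocket propellants):
2\,\mathrm{H_2O(l)} \;\longrightarrow\; 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)},
\qquad \Delta H^{\circ} \approx +286\ \mathrm{kJ}\ \text{per mol of } \mathrm{H_2}
```

The reaction is routine chemistry but energy-hungry, which is exactly why abundant cheap energy is the unlock.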
So there is something about human nature, it's like Borat: "my neighbor," we do start conflicts. And that's why games, as I'm learning, even in ancient history, have served the purpose of pulling people away from actual hot war. So maybe we can figure out increasingly sophisticated video games that scratch that itch of conflict, whatever that is about human nature, and avoid the actual hot wars that would come with increasingly sophisticated technologies. Because we're now long past the stage where the weapons we're able to create could destroy all of human civilization, so war is no longer a great way to deal with your neighbor. It's better to play a game of chess or football.

Yeah, and I think that's what modern sport is. I love football, watching it, and I used to play a lot as well. It's very visceral and tribal, and I think it channels a lot of those energies, including what I think is a basic human need to belong to some group, into a fun, healthy, non-destructive, constructive thing. And going back to games: I think part of why they're so great for kids, things like chess, is that they're great little microcosm simulations of the world. They are simulations of the world, simplified versions of some real-world situation; poker, Go, chess, Diplomacy each capture different aspects of the real world. And they let you practice. Because how many times do you get to practice a massive decision moment in your life, what job to take, what university to go to? You get maybe a dozen or so key decisions you have to make, and you've got to make those as well as you can. Games are a safe, repeatable environment where you can get better at your decision-making process. And maybe they have this additional benefit of channeling energies into more creative and constructive pursuits.

Well, I think it's also really important to practice losing, and winning. Losing is, you know, that's why I love games, and why I love things like Brazilian jiu-jitsu, where you can get your ass kicked in a safe environment over and over. It reminds you about physics, about the way the world works: sometimes you lose, sometimes you win, and you can still be friends with everybody. That feeling of losing is a weird one for us humans to really make sense of, but it's just part of life. Losing is a fundamental part of life.

Yeah, and I think in martial arts, as I understand it, but also in things like chess, at least the way I took it, a lot of it is about self-improvement and self-knowledge. It's not really about beating the other person; it's about maximizing your own potential. If you do it in a healthy way, you learn to use victories and losses: don't get carried away with victory and think you're the best in the world, and let the losses keep you humble, always knowing there's something more to learn.
There's always a bigger expert who can mentor you. I'm pretty sure you learn that in martial arts, and it's also the way I was trained in chess. It can be very hardcore, and of course you want to win, but you also need to learn how to deal with setbacks in a healthy way, and to wire the feeling you have when you lose into something constructive: next time I'm going to improve this, or get better at that.

There is something that's a source of happiness, a source of meaning, in that improvement step. It's not about the winning or losing.

Yes, the mastery. There's nothing more satisfying, in a way, than: oh wow, this thing I couldn't do before, now I can. And again, games and physical sports and mental sports are ways of measuring; they're beautiful because you can measure that progress.

Yeah, there's something about this, it's why I love role-playing games: the number going up on the skill tree. Literally, that is a source of meaning for us humans.

We're quite addicted to those numbers going up, and maybe that's why we made games like that, because we're hill-climbing systems ourselves, right?

Yeah, it would be quite sad if we didn't have some mechanism for it. The colored belts and all of that; we do this everywhere. And I don't want to dismiss it, because it is a source of deep meaning for us as humans. So, one of the incredible stories on the business and leadership side is what Google has done over the past year. I think it's fair to say that Google was losing on the LLM product side a year ago, with Gemini 1.5, and now it's winning with Gemini 2.5, and you took the helm and led this effort. What did it take to go from, let's say, quote-unquote losing to quote-unquote winning in the span of a year?

Yeah. Well, firstly, it's the absolutely incredible team that we have, led by Koray and Jeff Dean and Oriol, and the amazing team we have on Gemini, absolutely world class. So you can't do it without the best talent. And of course we have a lot of great compute as well. But then it's the research…

Transcript truncated.
