Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI | Lex Fridman Podcast #472

Lex Fridman | 03:14:34 | Mar 27, 2026
Chapters: 21
Fridman reflects on his conversation with Terence Tao, Tao's humility, and the context of this Lex Fridman podcast interview.

Terence Tao maps the frontiers of math and AI, from Navier-Stokes mysteries to Lean proof assistants, with a rare blend of depth, breadth, and wit.

Summary

Terence Tao sits with Lex Fridman to unpack the hardest problems in mathematics, physics, and the future of AI. Tao reflects on boundary problems that push existing methods to their limit, like the Navier–Stokes regularity question and the concept of supercriticality in PDEs, which helps explain why some equations resist global regularity. He explains how his averaged Navier–Stokes construction creates a finite-time blow-up under carefully engineered nonlinear interactions, and how this yields obstructions to proving global regularity for the true equations. The conversation dives into Tao’s philosophical view of math as a balance between structure and randomness, the power of inverse theorems, and the way universality emerges in complex systems. They discuss Tao’s forays into general relativity, wave maps, and his “gauge transformation” approach that reinterprets nonlinear dynamics to reveal hidden linearity. The interview then pivots to AI and formal proof, detailing Lean as a proof assistant that certifies math step-by-step, the potential of crowd-sourced proofs, and the prospect of AI-assisted collaboration in reaching new theorems. Tao shares his stance on the future: AI will accelerate discovery but still needs human intuition, while the mathematics community benefits from new workflows, deeper collaboration, and the democratization of formalization through tools like Lean. The talk closes on Tao’s mentorship mindset, the ethics of awards, and a hopeful vision for how future generations will explore, prove, and connect ideas across disciplines—perhaps even revealing a few hidden truths about prime numbers along the way.

Key Takeaways

  • To probe the limits of proofs of Navier–Stokes global regularity, Tao engineered an averaged equation that channels energy into one scale at a time, creating a controlled route to blow-up and showing that proof methods which apply equally to the averaged model cannot extend to the real, unmodified equations.
  • Supercriticality vs. criticality is a qualitative divider: in supercritical regimes, nonlinear transport dominates viscosity at small scales, making regularity far harder to establish and often leading to turbulence or blow-up scenarios.
  • Tao’s ‘gauge transformation’ for wave maps shows how reframing a nonlinear PDE can reveal effective linear behavior, enabling global regularity results that were previously out of reach.
  • Lean as a proof assistant offers a formal certificate for every proof step, enabling highly granular collaboration (even dozens of co-authors) and a reliability that traditional papers alone cannot guarantee.
  • Timelines aside, Tao envisions a near-term future where AI contributes meaningfully to proofs (verification, searching lemmas, suggesting approaches) but requires human judgment to steer the right path and prune dead ends.
  • He emphasizes the dichotomy of structure vs. randomness in mathematics, and how structure theorems (inverse theorems) reveal why certain patterns exist while randomness explains typical behavior in large systems.
  • His ‘structured procrastination’ insight—break a hard problem into approachable subproblems—helps manage the emotional and cognitive load of deep research and keeps momentum going when facing stubborn obstacles.

Who Is This For?

Essential viewing for researchers in mathematics and theoretical physics, as well as students and engineers curious about how deep theory interfaces with AI-assisted proof work and modern collaborative math.

Notable Quotes

"The Navier–Stokes equations govern the fluid flow for incompressible fluids like water."
Tao introduces the central PDE of fluid dynamics that anchors much of the discussion.
"I engineered an averaged Navier–Stokes equation that can blow up in finite time, which provides an obstruction to global regularity for the true equations."
Core technical idea behind his finite-time blow-up result.
"Supercriticality is where transport dominates viscosity at small scales, and that makes regularity much harder to prove."
Key qualitative distinction in Tao’s PDE discussion.
"Lean can produce not just the answer but a proof certificate for every step, enabling truly atomized collaboration."
Explanation of formal proof tooling and its collaborative potential.
"Anticipating a future where AI helps with proofs but humans steer the right path, collaboration between minds and machines will accelerate discovery."
Vision for AI-assisted mathematical progress.

Questions This Video Answers

  • How does Tao explain the difference between critical and supercritical PDEs in plain terms?
  • What is the averaged Navier–Stokes approach Tao used to demonstrate finite-time blow-up?
  • Can Lean proofs truly reduce the uncertainty in complex mathematical arguments?
  • What is the role of universality in Tao's view of mathematics and complex systems?
  • Will AI ever generate a Fields Medal–worthy proof, and what would that look like?
Tags: Terence Tao, Navier–Stokes, PDE, supercriticality, global regularity, wave maps, gauge transformation, Lean proof assistant, formal proofs, AI in mathematics, collaborative math, universal phenomena, cellular automata
Full Transcript
The following is a conversation with Terence Tao, widely considered to be one of the greatest mathematicians in history. Often referred to as the Mozart of math, he won the Fields Medal and the Breakthrough Prize in Mathematics, and has contributed groundbreaking work to a truly astonishing range of fields in mathematics and physics. This was a huge honor for me for many reasons, including the humility and kindness that Terry showed me throughout all our interactions. It means the world. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description or at lexfridman.com/sponsors. And now, dear friends, here's Terence Tao.

What was the first really difficult, research-level math problem that you encountered? One that gave you pause, maybe.

Well, in your undergraduate education you learn about the really hard, impossible problems, like the Riemann hypothesis or the twin primes conjecture. You can make problems arbitrarily difficult; that's not really the issue. In fact, there are even problems that we know to be unsolvable. What's really interesting are the problems on the boundary between what we can do relatively easily and what is hopeless: problems where existing techniques can do 90% of the job and then you just need the remaining 10%. As a PhD student, the Kakeya problem certainly caught my eye, and it actually just got solved. It's a problem I worked on a lot in my early research. Historically it came from a little puzzle by the Japanese mathematician Sōichi Kakeya in around 1917. The puzzle is that you have a needle in the plane. Think of it like driving on a road, and you want it to execute a U-turn: you want to turn the needle around, but you want to do it in as little space as possible, using as little area as you can. But the needle is infinitely maneuverable.
So you can imagine just spinning it around its center; as a unit needle, you can spin it around its center, and I think that gives you a disc of area pi over 4. Or you can do a three-point U-turn, which is what we teach people to do in driving schools, and that actually takes area pi over 8. So it's a little more efficient than a rotation. For a while, people thought that was the most efficient way to turn things around, but Besicovitch showed that in fact you could turn the needle around using as little area as you wanted: 0.001, whatever. There was some really fancy multi-step back-and-forth U-turn maneuver that would turn the needle around, and in so doing it would pass through every intermediate direction.

Is this in the two-dimensional plane?

This is in the two-dimensional plane. So we understand everything in two dimensions. The next question is what happens in three dimensions. Suppose the Hubble Space Telescope is a tube in space, and you want to observe every single star in the universe, so you want to rotate the telescope to reach every single direction. And here's the unrealistic part: suppose that space is at a premium, which it totally is not. You want to occupy as little volume as possible in order to rotate your needle around to see every single star in the sky. How small a volume do you need to do that? You can modify Besicovitch's construction: if your telescope has zero thickness, then you can use as little volume as you want. That's a simple modification of the two-dimensional construction. But the question is, if your telescope is not zero thickness but just very, very thin, with some thickness delta, what is the minimum volume needed to be able to see every single direction, as a function of delta? As delta gets smaller, as your needle gets thinner, the volume should go down; but how fast does it go down?
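The two areas quoted here are easy to check for the rotation case (the pi/8 figure for the three-point turn takes more work and is stated as in the conversation). A quick sketch:

```latex
% Spinning a unit needle about its center sweeps a disc of radius 1/2:
A_{\text{rotation}} = \pi \left(\tfrac{1}{2}\right)^{2} = \frac{\pi}{4} \approx 0.785.
% The three-point U-turn quoted in the conversation does better:
A_{\text{3-point}} = \frac{\pi}{8} \approx 0.393.
% Besicovitch showed the infimum over all turning maneuvers is zero:
\inf_{\text{maneuvers}} A = 0.
```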
And the conjecture was that it goes down very, very slowly, logarithmically, roughly speaking, and that was proved after a lot of work. So this seems like a puzzle; why is it interesting? It turns out to be surprisingly connected to a lot of problems in partial differential equations, in number theory, in geometry, in combinatorics. For example, in wave propagation: you splash some water around, you create water waves, and they travel in various directions. But waves exhibit both particle- and wave-type behavior. You can have what's called a wave packet, a very localized wave that is localized in space and moving in a certain direction in time. If you plot it in both space and time, it occupies a region which looks like a tube. What can happen is that you can have a wave which is initially very dispersed, but it all focuses at a single point later in time. You can imagine dropping a pebble into a pond and ripples spreading out; but if you time-reverse that scenario, and the equations of wave motion are time-reversible, you can imagine ripples converging to a single point, and then a big splash occurs, maybe even a singularity. It's possible to do that. Geometrically, what's going on is that there are all these light rays. If this wave represents light, for example, you can imagine the wave as a superposition of photons, all traveling at the speed of light. They all travel on these light rays, and they're all focusing at this one point. So you can have a very dispersed wave focus into a very concentrated wave at one point in space and time, but then it defocuses again and separates. But potentially, if the conjecture had a negative solution, what that would mean is that there's a very efficient way to pack tubes pointing in different directions into a region of very narrow volume.
Then you would also be able to create waves, some arrangement of waves, that start out very, very dispersed, but they would concentrate not just at a single point, but there would be a lot of concentrations in space and time, and you could create what's called a blowup, where the amplitude of the waves becomes so great that the laws of physics they're governed by are no longer wave equations but something more complicated and nonlinear. In mathematical physics we care a lot about whether certain wave equations are stable or not, whether they can create these singularities. There's a famous unsolved problem called the Navier–Stokes regularity problem. The Navier–Stokes equations govern the fluid flow for incompressible fluids like water. The question asks: if you start with a smooth velocity field of water, can it ever concentrate so much that the velocity becomes infinite at some point? That's called a singularity. We don't see that in real life. If you splash around water in the bathtub, it won't explode on you, or have water leaving at the speed of light. But potentially it is possible. In fact, in recent years the consensus has drifted towards the belief that for certain very special initial configurations of, say, water, singularities can form. But people have not yet been able to actually establish this. The Clay Foundation has these seven Millennium Prize problems, with a million-dollar prize for solving any one of them, and this is one of them. Of these seven, only one has been solved: the Poincaré conjecture, by Perelman.
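For reference, the incompressible Navier–Stokes equations under discussion, for a velocity field u(t, x), pressure p, and viscosity ν > 0, are:

```latex
\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p,
\qquad \nabla \cdot u = 0.
% \nu \Delta u        : linear viscous dissipation (calms the flow)
% (u \cdot \nabla) u  : nonlinear transport (moves energy between scales)
% The regularity problem: given smooth initial data u(0, x) with finite
% energy, must u stay smooth for all t > 0?
```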
So the Kakeya conjecture is not directly related to the Navier–Stokes problem, but understanding it would help us understand some aspects of things like wave concentration, which would indirectly help us understand the Navier–Stokes problem better.

Can you speak to the Navier–Stokes problem? The existence and smoothness question, which, like you said, is a Millennium Prize problem. You've made a lot of progress on this one; in 2016 you published a paper, "Finite time blowup for an averaged three-dimensional Navier–Stokes equation." So we're trying to figure out: this thing usually doesn't blow up, but can we say for sure it never blows up?

Yeah, that is literally the million-dollar question. This is what distinguishes mathematicians from pretty much everybody else. If something holds 99.99% of the time, that's good enough for most things, but mathematicians are among the few people who really care whether 100%, really 100%, of all situations are covered. Most of the time, water does not blow up. But could you design a very special initial state that does this?

And maybe we should say that this is a set of equations in the field of fluid dynamics, trying to understand how fluid behaves; fluid actually turns out to be an extremely complicated thing to model.

Yeah, so it has practical importance. This Clay Prize problem concerns what's called the incompressible Navier–Stokes equations, which govern things like water. There are also the compressible Navier–Stokes equations, which govern things like air, and those are particularly important for weather prediction. Weather prediction does a lot of computational fluid dynamics; a lot of it is actually just trying to solve the Navier–Stokes equations as best they can, and also gathering a lot of data so they can initialize the equation.
There are a lot of moving parts, so it's very important practically.

Why is it difficult to prove general things about this set of equations, like it not blowing up?

The short answer is Maxwell's demon. Maxwell's demon is a concept in thermodynamics. If you have a box of two gases, oxygen and nitrogen, and maybe you start with all the oxygen on one side and the nitrogen on the other, with no barrier between them, then they will mix, and they should stay mixed. There's no reason why they should unmix. But in principle, because of all the collisions between them, there could be some sort of weird conspiracy: maybe there's a microscopic demon, Maxwell's demon, which every time an oxygen and a nitrogen molecule collide, makes them bounce off in such a way that the oxygen drifts onto one side and the nitrogen goes to the other, and an extremely improbable configuration emerges. We never see this, and statistically it's extremely unlikely, but mathematically it's possible, and we can't rule it out. This is a situation that shows up a lot in mathematics. A basic example is the digits of pi: 3.14159 and so forth. The digits look like they have no pattern, and we believe they have no pattern. In the long term, you should see as many ones and twos and threes as fours and fives and sixes; there should be no preference in the digits of pi for, let's say, seven over eight. But maybe there's some demon in the digits of pi such that every time you compute more digits, it biases one digit over another. This is a conspiracy that should not happen, there's no reason it should happen, but there's no way to prove it with our current technology.

Okay, so getting back to Navier–Stokes: a fluid has a certain amount of energy, and because the fluid is in motion, the energy gets transported around. Water is also viscous.
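Tao's "no demon in the digits" heuristic can be illustrated (not proved) with a quick tally. This sketch hardcodes the first 50 decimal digits of pi and counts each digit's frequency; with so few digits the counts fluctuate, but the conjecture that pi is "normal" says each frequency tends to 1/10 in the limit:

```python
from collections import Counter

# First 50 digits of pi (including the leading 3), hardcoded for
# illustration; no one can currently prove the digits are unbiased.
PI_DIGITS = "31415926535897932384626433832795028841971693993751"

counts = Counter(PI_DIGITS)
for d in "0123456789":
    # Observed frequency of each digit in this small sample.
    print(d, counts[d], counts[d] / len(PI_DIGITS))
```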
So if the energy is spread out over many different locations, the natural viscosity of the fluid will just damp out the energy, and it will go to zero. This is what happens when we actually experiment with water: you splash around, there's some turbulence and waves and so forth, but eventually it settles down, and the lower the amplitude, the smaller the velocity, the calmer it gets. But potentially there is some sort of demon that keeps pushing the energy of the fluid into smaller and smaller scales, where it moves faster and faster, and at faster speeds the effective viscosity is relatively weaker. So it could happen that it creates what's called a self-similar blowup scenario, where the energy of the fluid starts off at some large scale and then transfers its energy into a smaller region of the fluid, which then, at a much faster rate, moves it into an even smaller region, and so forth. Each time it does this, it takes maybe half as long as the previous step, so you could actually converge to all the energy concentrating at one point in a finite amount of time. That scenario is called finite-time blowup. In practice this doesn't happen. Water is what's called turbulent: it is true that if you have a big eddy of water, it will tend to break up into smaller eddies, but it won't transfer all the energy from one big eddy into one smaller eddy. It will transfer it into maybe three or four, and then those split up into maybe three or four smaller eddies of their own, and so the energy gets dispersed to the point where the viscosity can keep the whole thing under control.
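The "half as long each time" cascade converges in finite time because the step times form a geometric series:

```latex
% If the first energy transfer takes time T and each subsequent
% transfer is twice as fast, the total time to concentrate all
% the energy at a point is finite:
T + \frac{T}{2} + \frac{T}{4} + \cdots
  = \sum_{n=0}^{\infty} \frac{T}{2^{n}} = 2T.
```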
But if the fluid could somehow concentrate all the energy, keep it all together, and do it fast enough that the viscous effects don't have time to calm everything down, then this blowup can occur. There were papers that claimed you just need to take into account conservation of energy and carefully use the viscosity, and you can keep everything under control, not just for Navier–Stokes but for many types of equations like it. In the past, there have been many attempts to obtain what's called global regularity for Navier–Stokes, which is the opposite of finite-time blowup: the velocity stays smooth. They all failed; there was always some sign error or some subtle mistake, and they couldn't be salvaged. So what I was interested in doing was trying to explain why we were not able to disprove finite-time blowup. I couldn't do it for the actual equations of fluids, which were too complicated, but I could average the equations of motion of Navier–Stokes: basically, turn off certain types of ways in which water interacts and only keep the ones that I want. In particular, if there's a fluid that could transfer energy from a large eddy into this small eddy or this other small eddy, I would turn off the channel that transfers energy to the first one and direct it only into the other smaller eddy, while still preserving the law of conservation of energy.

So you're trying to make it blow up?

Yeah. I basically engineered a blowup by changing the laws of physics, which is one thing that mathematicians are allowed to do. We can change the equation.

How does that help you get closer to the proof of something?

Right, so it provides what's called an obstruction in mathematics.
So what I did was this: usually, when you turn off certain interactions in an equation, you make it less nonlinear, more regular, and less likely to blow up. But I found that by turning off a very well-designed set of interactions, I could force all the energy to blow up in finite time. What that means is that if you wanted to prove global regularity for the actual Navier–Stokes equation, you must use some feature of the true equation which my artificial equation does not satisfy. So it rules out certain approaches. The thing about math is that it's not just about taking a technique that is going to work and applying it; you also need to not take the techniques that don't work. For the problems that are really hard, there are often dozens of approaches that you might think could solve the problem, and it's only after a lot of experience that you realize there's no way these methods are going to work. Having these counterexamples for nearby problems rules things out and saves you a lot of time, because you're not wasting energy on things that you now know cannot possibly work.

How deeply connected is that to the specific problem of fluid dynamics, versus some more general intuition you build up about mathematics?

Right. The key phenomenon that my technique exploits is what's called supercriticality. In partial differential equations, these equations are often like a tug-of-war between different forces. In Navier–Stokes there's the dissipation force coming from viscosity, and it's very well understood: it's linear, and it calms things down. If viscosity were all there was, then nothing bad would ever happen.
But there's also transport: energy in one location of space can get transported, because the fluid is in motion, to other locations. That's a nonlinear effect, and it causes all the problems. So there are these two competing terms in the Navier–Stokes equation, the dissipation term and the transport term. If the dissipation term dominates, if it's large, then basically you get regularity. If the transport term dominates, then we don't know what's going on; it's a very nonlinear situation, unpredictable, turbulent. Sometimes these forces are in balance at small scales but not in balance at large scales, or vice versa. Navier–Stokes is what's called supercritical: at smaller and smaller scales, the transport terms are much stronger than the viscosity terms, which are the things that calm things down. And this is why the problem is hard. In two dimensions, the Soviet mathematician Ladyzhenskaya showed in the 1960s that there is no blowup; in two dimensions, the Navier–Stokes equations are what's called critical, where the effect of transport and the effect of viscosity are about the same strength, even at very small scales. We have a lot of technology to handle critical and also subcritical equations and prove regularity, but for supercritical equations it was not clear what was going on. I did a lot of work, and there's been a lot of follow-up, showing that for many other types of supercritical equations you can create all kinds of blowup examples. Once the nonlinear effects dominate the linear effects at small scales, you can have all kinds of bad things happen. So this is one of the main insights of this line of work: supercriticality versus criticality and subcriticality makes a big difference.
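The critical/supercritical distinction can be made precise via the scaling symmetry of Navier–Stokes: if u solves the equations, so does the rescaled field below, and the behavior of the conserved energy under this rescaling separates the two-dimensional and three-dimensional cases Tao describes:

```latex
% Scaling symmetry of Navier--Stokes in dimension d:
u_{\lambda}(t, x) = \lambda\, u(\lambda^{2} t, \lambda x).
% Energy of the rescaled solution:
\int_{\mathbb{R}^{d}} |u_{\lambda}(0, x)|^{2}\, dx
  = \lambda^{2-d} \int_{\mathbb{R}^{d}} |u(0, y)|^{2}\, dy.
% d = 2: exponent 0, the energy is scale-invariant
%        (critical; Ladyzhenskaya's regime).
% d = 3: exponent -1, the energy gives weaker and weaker control
%        as \lambda \to \infty, i.e. at finer scales (supercritical).
```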
That's a key qualitative feature that distinguishes some equations as being nice and predictable, like planetary motion. There are certain equations you can predict for millions of years, or thousands at least; it's not really a problem. But there's a reason why we can't predict the weather more than two weeks into the future: it's a supercritical equation, and lots of really strange things are going on at very fine scales.

So whenever there is some huge source of nonlinearity, that can create a huge problem for predicting what's going to happen?

Yeah, especially if the nonlinearity is more and more featured at small scales. There are many equations that are nonlinear, but in many of them you can approximate things by the bulk. For example, in planetary motion, if you want to understand the orbit of the Moon or Mars, you don't really need the microstructure, like the seismology of the Moon or exactly how its mass is distributed. You can almost approximate these planets by point masses, and just the aggregate behavior is important. But if you want to model a fluid, like the weather, you can't just say "in Los Angeles the temperature is this, the wind speed is this." For supercritical equations, the fine-scale information is really important.

If we can linger on the Navier–Stokes equations a little bit: you've suggested, and maybe you can describe it, that one of the ways to solve it, or to negatively resolve it, would be to construct a kind of liquid computer, and then show that the halting problem from computation theory has consequences for fluid dynamics. Can you describe this?

Yeah. So this came out of the work of constructing this averaged equation that blew up, as part of how I had to do it.
There's a naive way to do it: every time you get energy at one scale, you push it immediately to the next scale as fast as possible. That's the naive way to force blowup. It turns out that in five and higher dimensions this works, but in three dimensions there was this funny phenomenon that I discovered. If you change the laws of physics so that you always keep trying to push the energy into smaller and smaller scales, what happens is that the energy starts getting spread out across many scales at once. You have energy at one scale, you're pushing it into the next scale, and as soon as it enters that scale you also push it to the next one, but there's still some energy left over from the previous scale. You're trying to do everything at once, and this spreads out the energy too much, which makes it vulnerable: viscosity can come in and just damp out everything. So this direct push doesn't actually work; there was a separate paper by some other authors that showed this in three dimensions. What I needed was to program a delay, kind of like airlocks. I needed an equation that would start with a fluid doing something at one scale; it would push this energy into the next scale, but the energy would stay there until all the energy from the larger scale got transferred, and only after you pushed all the energy in would you open the next gate and push that in as well. By doing that, the energy inches forward scale by scale in such a way that it's always localized at one scale at a time, and then it can resist the effects of viscosity because it's not dispersed. To make that happen, I had to construct a rather complicated nonlinearity.
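The "airlock" idea can be caricatured in a few lines. This is an illustrative toy, not Tao's actual averaged equation: energy sits on a ladder of discrete scales, and the gate to the next scale only opens (here, a final flush) after the current scale has handed off its energy, so the energy stays localized at one scale at a time while the total is exactly conserved:

```python
# Toy discrete "airlock" cascade (illustration only; not Tao's actual
# averaged Navier-Stokes construction). Energy inches down a ladder of
# scales one gate at a time, with no viscosity, so total energy is
# conserved exactly.

def gated_cascade(n_scales=5, steps_per_transfer=10):
    energy = [0.0] * n_scales
    energy[0] = 1.0  # all energy starts at the largest scale
    history = [tuple(energy)]
    for n in range(n_scales - 1):
        # "Gate open": drain scale n into scale n+1 in halving chunks.
        for _ in range(steps_per_transfer):
            chunk = energy[n] / 2
            energy[n] -= chunk
            energy[n + 1] += chunk
            history.append(tuple(energy))
        # Flush the remainder before the next gate opens, so the
        # energy is fully localized at scale n+1.
        energy[n + 1] += energy[n]
        energy[n] = 0.0
        history.append(tuple(energy))
    return energy, history

final, hist = gated_cascade()
print(final)       # all energy has reached the smallest scale
print(sum(final))  # total energy conserved: 1.0
```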
It was basically constructed like an electronic circuit. I actually thank my wife for this, because she was trained as an electrical engineer, and she talked about how she had to design circuits and so forth. If you want a circuit that does a certain thing, maybe a light that flashes on, then turns off, then on and off again, you can build it from more primitive components: capacitors, resistors, and so forth. You build a diagram, and you can follow these diagrams with your eyeballs and say, "oh yes, the current will build up here, then it will stop, then it will do that." So I knew how to build the analogues of basic electronic components, resistors and capacitors and so forth, and I stacked them together in such a way that I created something that would open one gate, and then there would be a clock, and once the clock hit a certain threshold, it would close the gate. Kind of a Rube Goldberg machine, but described mathematically. And this ended up working. What I realized is that you might be able to pull the same thing off for the actual equations, if the equations of water support computation. You can imagine a kind of steampunk, but really waterpunk, type of thing. Modern computers are electronic: they're powered by electrons passing through very tiny wires and interacting with other electrons. But instead of electrons, you can imagine pulses of water moving at certain velocities, with maybe two different configurations corresponding to a bit being up or down. Possibly, if you had two of these moving bodies of water collide, they would come out in some new configuration, which would be something like an AND gate or an OR gate.
The output would depend in a very predictable way on the inputs, and you could chain these together and maybe create a Turing machine. Then you'd have computers made completely out of water, and if you have computers, then maybe you can do robotics: hydraulics and so forth. So you could create some machine which is a fluid analogue of what's called a von Neumann machine. Von Neumann proposed that if you want to colonize Mars, the sheer cost of transporting people and machines there is just ridiculous; but if you could transport one machine to Mars, and this machine had the ability to mine the planet, create more materials, smelt them, and build more copies of the same machine, then you could colonize the whole planet over time. So suppose you could build a fluid machine, a robot, whose purpose in life, as it's programmed, is to create a smaller version of itself in some sort of cold state, not started just yet. Once the smaller one is ready, the big robot configuration of water transfers all its energy into the smaller configuration, powers down, and then cleans itself up. What's left is this new state, which then turns on and does the same thing, but smaller and faster. The equation has a certain scaling symmetry, so once you do that, it can just keep iterating. This, in principle, would create a blowup for the actual Navier–Stokes equations, and it is what I managed to accomplish for this averaged Navier–Stokes equation. So it provided a sort of roadmap for solving the problem. Now, this is a pipe dream, because there are so many things missing for it to actually become a reality. I can't create these basic logic gates; I don't have these special configurations of water.
I mean, there are candidates, things called vortex rings, that might possibly work. But also, analog computing is really nasty compared to digital computing, because there are always errors, and you have to do a lot of error correction along the way. I don't know how to completely power down the big machine so that it doesn't interfere with the running of the smaller machine. But everything in principle can happen; it doesn't contradict any of the laws of physics. So it's sort of evidence that this thing is possible. There are other groups now pursuing ways to make Navier–Stokes blow up which are nowhere near as ridiculously complicated as this. They're actually pursuing something much closer to the direct self-similar model, which doesn't quite work as is, but there could be some simpler scheme than what I just described to make this work. There is a real leap of genius here, to go from Navier–Stokes to this Turing machine. It goes from the self-similar blob scenario, where you're trying to get a smaller and smaller blob, to a liquid Turing machine that gets smaller and smaller, and somehow seeing how that could be used to say something about a blow-up. That's a big leap. Well, there's precedent. The thing about mathematics is that it's really good at spotting connections between what you might think of as completely different problems. If the mathematical form is the same, you can draw a connection. There's a lot of previous work on what are called cellular automata, the most famous of which is Conway's Game of Life: there's an infinite discrete grid, at any given time each cell of the grid is either occupied or empty, and there's a very simple rule that tells you how these cells evolve. So sometimes cells live and sometimes they die.
When I was a student, it was a very popular screen saver to have these animations going, and they look very chaotic; in fact, they look a little bit like turbulent flow sometimes. But at some point people discovered more and more interesting structures within the Game of Life. For example, they discovered this thing called a glider, a very tiny configuration of about five cells which evolves and just moves in a certain direction. That's like the vortex rings. So this is an analogy: the Game of Life is kind of a discrete equation, and Navier–Stokes is a continuous equation, but mathematically they have some similar features. Over time people discovered more and more interesting things you could build within the Game of Life. It's a very simple system, with only a few rules, but you can design all kinds of interesting configurations inside it. There's something called a glider gun that does nothing but spit out gliders, one at a time. And then, after a lot of effort, people managed to create AND gates and OR gates for gliders. There's this massive, ridiculous structure where if you have a stream of gliders coming in here and a stream of gliders coming in there, then there may be a stream of gliders coming out: if both of the input streams have gliders, there'll be an output stream, but if only one of them does, nothing comes out. So they could build something like that. And once you can build these basic gates, then just from software engineering you can build almost anything. You can build a Turing machine. They're enormous, steampunk-type things; they look ridiculous. But then people also generated self-replicating objects in the Game of Life.
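The glider just mentioned is small enough to check directly. Below is a minimal sketch of a Game of Life engine (the standard birth-on-3, survive-on-2-or-3 rule) on an unbounded grid represented as a set of live cells; after four generations the classic glider reappears shifted one cell diagonally.

```python
# Minimal Game of Life engine (B3/S23 rule) on an unbounded grid.
# Live cells are stored as a set of (x, y) pairs. A sketch for
# illustration, not an optimized implementation.
from collections import Counter

def step(live):
    """Advance one generation: a dead cell with exactly 3 live
    neighbors is born; a live cell with 2 or 3 survives."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic five-cell glider; after 4 generations it repeats,
# translated one cell down and to the right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

The same engine, pointed at a glider-gun pattern instead, would emit a new glider every 30 generations, which is the raw material for the AND and OR gates described above.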
A massive machine, a von Neumann machine, which over a huge period of time, full of glider guns doing these very steampunk calculations, would create another version of itself, which could then replicate again. It's so incredible. A lot of this was community crowdsourced, by amateur mathematicians actually. So I knew about that work, and that's part of what inspired me to propose the same thing for Navier–Stokes, which, as I said, is much worse because it's analog rather than digital; you can't just directly take the constructions from the Game of Life and plunk them in. But again, it shows it's possible. There's a kind of emergence that happens with these cellular automata. Local rules, maybe it's similar with fluids, I don't know, but local rules operating at scale can create these incredibly complex dynamic structures. Do you think any of that is amenable to mathematical analysis? Do we have the tools to say something profound about that? The thing is, you can get this emergence of very complicated structures, but only with very carefully prepared initial conditions. These glider guns and gates and machines: if you just plunk down some cells randomly, you will not see any of them. And that's the analogous situation with Navier–Stokes: with typical initial conditions, you will not have any of this weird computation going on. But through engineering, by specially designing things in a very special way, you can make clever constructions. I wonder if it's possible to prove the negative, basically to prove that only through engineering can you ever create something interesting. This is a recurring challenge in mathematics; I call it the dichotomy between structure and randomness. Most objects that you can generate in mathematics are random.
They look random, like the digits of pi, which we believe is a good example. But there's a very small number of things that have patterns. Now, you can prove that something has a pattern by just constructing it: if something has a simple pattern, and you have a proof that it does something like repeat itself every so often, you can do that. And you can prove, for example, that most sequences of digits have no pattern. If you just pick digits randomly, there's something called the law of large numbers, which tells you you're going to get as many ones as twos in the long run. But we have a lot fewer tools for questions like: given a specific sequence, like the digits of pi, how can I show that it doesn't have some weird pattern to it? Some other work that I've spent a lot of time on is proving what are called structure theorems or inverse theorems, which give tests for when something is very structured. So, some functions are what's called additive. Suppose you have a function that maps natural numbers to natural numbers, so maybe two maps to four, three maps to six, and so forth. A function is called additive if, when you add two inputs together, the outputs get added as well. For example, multiplying by a constant: if you multiply (a + b) by 10, that's the same as multiplying a by 10 and b by 10 and then adding them together. So some functions are additive; some are kind of additive but not completely additive. For example, take a number n, multiply it by the square root of 2, and take the integer part of that. So 10 times the square root of 2 is 14 point something, so 10 maps to 14, and 20 maps to 28. In that case additivity holds: 10 + 10 is 20, and 14 + 14 is 28.
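This "almost additive" function is easy to probe numerically. A quick sketch (not from the conversation) checking f(n) = ⌊n·√2⌋ on the example values, and tabulating how far f(a+b) ever strays from f(a) + f(b):

```python
import math

# Sketch of the almost-additive function discussed above:
# f(n) = floor(n * sqrt(2)). Because floor(x) + floor(y) is within 1
# of floor(x + y), the additivity error is always 0 or 1.
def f(n):
    return math.floor(n * math.sqrt(2))

print(f(10), f(20))  # 14 28: here f(10) + f(10) == f(20) exactly

# Collect every additivity error over a small range of inputs.
errors = {f(a + b) - (f(a) + f(b))
          for a in range(1, 200) for b in range(1, 200)}
print(errors)  # {0, 1}
```

So the function is never more than one away from being truly additive, which is exactly the kind of "partial structure" that inverse theorems explain via a nearby completely structured function (here, n·√2 itself).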
But because of the rounding, sometimes there are round-off errors, and when you add a + b, this function doesn't quite give you the sum of the two individual outputs, but the sum plus or minus one. So it's almost additive, but not quite. There are a lot of useful results in mathematics, and I've worked a lot on developing things like this, to the effect that if a function exhibits partial structure like this, there's basically a reason why, and the reason is that there's some other nearby function which is actually completely structured, and which explains the partial pattern that you have. So these inverse theorems create a dichotomy: the objects you study either have no structure at all, or they are somehow related to something that is structured, and in either case you can make progress. A good example of this is an old theorem in mathematics called Szemerédi's theorem, proven in the 1970s. It concerns finding a certain type of pattern in a set of numbers: arithmetic progressions, things like 3, 5, 7, or 10, 15, 20. Szemerédi proved that any set of numbers that is sufficiently big, what's called positive density, has arithmetic progressions in it of any length you wish. So for example, the odd numbers are a set of density 1/2, and they contain arithmetic progressions of any length. In that case it's obvious, because the odd numbers are really, really structured: I can just take 11, 13, 15, 17; I can easily find arithmetic progressions in that set. But Szemerédi's theorem also applies to random sets. If I take the set of odd numbers and flip a coin for each number, and I only keep the numbers for which I got heads, I've just randomly taken out half the numbers and kept the other half.
So that's a set that has no patterns at all. But just from random fluctuations, you will still get a lot of arithmetic progressions in that set. Can you prove that there are arithmetic progressions of arbitrary length within a random set? Yes. Have you heard of the infinite monkey theorem? Mathematicians usually give boring names to theorems, but occasionally they give colorful ones. The popular version of the infinite monkey theorem is that if you have an infinite number of monkeys in a room, each with a typewriter, typing out text randomly, then almost surely one of them will generate the entire script of Hamlet, or any other finite string of text. It will just take some time, quite a lot of time actually, but with an infinite number of monkeys it happens. So basically, if you take an infinite string of digits, or whatever, eventually any finite pattern you wish will emerge. It may take a long time, but it will eventually happen; in particular, arithmetic progressions of any length will eventually appear. But you need an extremely long random sequence for this to happen. I suppose that's intuitive; it's just infinity. Yeah, infinity absorbs a lot of sins. How are we humans supposed to deal with infinity? Well, you can think of infinity as just an abstraction of a finite number for which you do not have a bound. Nothing in real life is truly infinite, but you can ask yourself questions like: what if I had as much money as I wanted, or what if I could go as fast as I wanted? The way mathematicians formalize that is a formalism that idealizes: instead of something being extremely large or extremely small, it is exactly infinite, or exactly zero. And often the mathematics becomes a lot cleaner when you do that.
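The coin-flipping experiment described a moment ago is small enough to simulate. A sketch (with an illustrative, fixed random seed so the run is reproducible): keep each odd number up to 1,000 with probability 1/2, then brute-force search the surviving set for arithmetic progressions.

```python
import random

# Keep each odd number below N with probability 1/2, then check that
# long arithmetic progressions still appear purely from randomness.
random.seed(0)  # fixed seed: illustrative choice, makes the run repeatable
N = 1000
kept = {n for n in range(1, N, 2) if random.random() < 0.5}

def has_progression(s, length):
    """Brute-force search: is there an arithmetic progression
    a, a+d, a+2d, ... of the given length inside the set s?"""
    for a in s:
        for d in range(2, N):
            if all(a + i * d in s for i in range(length)):
                return True
    return False

# For N this large, a random half of the odds contains thousands of
# expected 5-term progressions, so this is essentially always True.
print(has_progression(kept, 5))
```

This is only the easy, random half of the dichotomy; Szemerédi's theorem is the hard statement that *every* positive-density set, however adversarially built, must contain such progressions.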
In physics, we joke about assuming spherical cows. Real-world problems have all kinds of real-world effects, but you can idealize, send certain things to infinity, send certain things to zero, and the mathematics becomes a lot simpler to work with. I wonder how often using infinity forces us to deviate from the physics of reality. Yeah, there are a lot of pitfalls. We spend a lot of time in undergraduate math classes teaching analysis, and analysis is often about how to take limits carefully. For example, a + b is always b + a: when you have a finite number of terms, you can add them and swap them with no problem. But when you have an infinite number of terms, there are these sort of shell games you can play, where a series converges to one value, but you rearrange it and it suddenly converges to another value. So you can make mistakes; you have to know what you're doing when you allow infinity. You have to introduce these epsilons and deltas, and there's a certain style of reasoning that helps you avoid mistakes. In more recent years, people have started taking results that are true in infinite limits and doing what's called finitizing them. You know that something's true eventually, but you don't know when; now, give me a rate. So if I don't have an infinite number of monkeys, but a large finite number of monkeys, how long do I have to wait for Hamlet to come out? That's a more quantitative question, something you can attack by purely finite methods, and you can use your finite intuition. And in this case it turns out to be exponential in the length of the text you're trying to generate. So this is why you never see the monkeys create Hamlet; you can maybe see them create a four-letter word, but nothing that big.
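The "exponential in the length of the text" claim can be made concrete with a back-of-envelope calculation. A sketch (the strings are arbitrary examples, not from the conversation): with a 26-letter alphabet, a random typist needs on the order of 26^L keystrokes before a specific L-letter string shows up.

```python
# Rough finitized monkey theorem: the expected wait for a specific
# L-letter string under uniform random typing grows like 26**L.
for text in ["word", "to be or not to be"]:
    L = len(text.replace(" ", ""))  # ignore spaces for simplicity
    print(f"{text!r}: length {L}, ~{26 ** L:.2e} keystrokes")
```

26^4 is about half a million keystrokes, entirely feasible, while 26^13 is already over 10^18: the four-letter word appears, Hamlet never does.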
And I personally find that once you finitize an infinite statement, it does become much more intuitive and no longer so weird. So even if you're working with infinity, it's good to finitize so that you can have some intuition. The downside is that the finitary statements and proofs are just much, much messier. So the infinite versions are usually found first, often decades earlier, and then later on people finitize them. Since we've mentioned a lot of math and a lot of physics: what is the difference between mathematics and physics as disciplines, as ways of understanding and seeing the world? Maybe we can throw engineering in there too; you mentioned your wife is an engineer, which gave you a new perspective on circuits. These are different ways of looking at the world, and given that you've done mathematical physics, you've worn all the hats. Right, so I think science in general is an interaction between three things: there's the real world; there's what we observe of the real world, our observations; and then there are our mental models of how we think the world works. We can't directly access reality. All we have are the observations, which are incomplete and have errors. And there are many, many cases where we want to know something, for example what the weather will be like tomorrow, and we don't yet have the observation; we'd like a prediction. And then we have these simplified models, sometimes making unrealistic, spherical-cow-type assumptions. Those are the mathematical models. Mathematics is concerned with the models. Science collects the observations and proposes the models that might explain those observations. What mathematics does is stay within the model and ask: what are the consequences of that model?
What predictions would the model make about future observations, or about past observations? Does it fit the observed data? So there's definitely a symbiosis. I guess mathematics is unusual among disciplines in that we start from hypotheses, the axioms of a model, and ask what conclusions come out of that model. In almost any other discipline, you start with the conclusions: I want to build a bridge, I want to make money, I want to do this. And then you find the path to get there. There's a lot less speculation of the form: suppose I did this, what would happen? Planning and modeling; speculative fiction, maybe, is one other place, but that's about it, actually. Most things we do in life are conclusion-driven, including physics and science. They want to know where the asteroid is going to go, what the weather will be tomorrow. But math also has this other direction, going forward from the axioms. What do you think about this tension in physics between theory and experiment? What do you think is the more powerful way of discovering truly novel ideas about reality? Well, you need both, top down and bottom up. It's a real interaction between all these things. So over time, the observations and the theory and the modeling should all get closer to reality. But initially, and this is always the case, they're far apart to begin with. And you need one to figure out where to push the other. If your model predicts anomalies that have not been picked up by experiment, that tells experimenters where to look, to find more data to refine the models. So it goes back and forth.
Within mathematics itself, there are also theoretical and experimental components. It's just that, until very recently, theory has dominated almost completely: 99% of mathematics is theoretical mathematics, and there's a very tiny amount of experimental mathematics. People do do it: if they want to study prime numbers or whatever, they can generate large data sets, and once we had computers we began to do it a little bit. Although even before computers, Gauss, for example, conjectured the most basic theorem in number theory, called the prime number theorem, which predicts how many primes there are up to a million, or up to a trillion. It's not an obvious question. And basically what he did was compute, mostly by himself, but he also hired human computers, people whose professional job it was to do arithmetic, the first 100,000 primes or so, made tables, and made a prediction. That was an early example of experimental mathematics. But until very recently, theoretical mathematics was just much more successful, because doing complicated mathematical computations simply wasn't feasible until very recently. And even nowadays, even though we have powerful computers, only some mathematical things can be explored numerically. There's something called the combinatorial explosion. If you want to study, say, all possible subsets of the numbers 1 to 1,000: there are only 1,000 numbers, how bad could it be? It turns out the number of different subsets of 1 to 1,000 is 2^1000, which is way bigger than anything any computer currently can, or anyone ever will, enumerate. So there are certain math problems that very quickly become intractable to attack by direct brute-force computation. Chess is another famous example.
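Gauss's experiment above is easy to redo today. A sketch (my own illustration, not from the conversation): a Sieve of Eratosthenes counts the primes up to a million, which can be compared with the prime number theorem's first-order approximation n / ln(n).

```python
import math

# Recreating Gauss's experimental mathematics: count primes up to n
# with a Sieve of Eratosthenes and compare with n / ln(n), the
# simplest form of the prime number theorem's prediction.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

n = 1_000_000
pi_n = len(primes_up_to(n))
print(pi_n, round(n / math.log(n)))  # 78498 vs 72382: same order, ~8% high
```

The crude n / ln(n) estimate is visibly off; Gauss's better guess, the logarithmic integral, tracks the true count far more closely, which is part of why the prediction was so striking.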
The number of chess positions is something we can't get a computer to fully explore. But now we have AI: we have tools to explore this space, not with 100% guarantees of success, but experimentally. We can empirically "solve" chess now, for example. We have very, very good AIs that don't explore every single position in the game tree, but have found very good approximations, and people are actually using these chess engines to do experimental chess. They're revisiting old chess theory about, oh, with this type of opening, this is a good type of move and this is not, and they can use these chess engines to refine, and in some cases overturn, conventional wisdom about chess. And I do hope that mathematics will have a larger experimental component in the future, perhaps powered by AI. We'll of course talk about that, but in the case of chess, and there's a similar thing in mathematics, the engine isn't providing a formal explanation of the different positions; it's just saying which position is better or not. You can intuit that as a human being, and from that, we humans can construct a theory of the matter. You've mentioned Plato's allegory of the cave. In case people don't know, it's where people are observing shadows of reality, not reality itself, and they believe what they're observing to be reality. Is that, in some sense, what mathematicians, and maybe all humans, are doing: looking at shadows of reality? Is it possible for us to truly access reality? Well, there are these three ontological things: there's actual reality, there are our observations, and there are our models. Technically they are distinct, and I think they will always be distinct, but they can get closer over time. And the process of getting closer often means that you have to discard your initial intuitions.
Astronomy provides great examples. Your initial model of the world is that it's flat, because it looks flat, and that it's big, while the rest of the universe, the skies, is not: the sun, for example, looks really tiny. So you start off with a model which is actually really far from reality, but it sort of fits the observations that you have, so things look good. But over time, as you make more and more observations, bringing them closer to reality, the model gets dragged along with it. And so over time we had to realize that the Earth is round, that it spins, that it goes around the sun, the solar system goes around the galaxy, and so on and so forth. And the universe is expanding, and the expansion itself is accelerating, and in fact, very recently, this year, there's even evidence that the acceleration of the universe is itself not constant. And the explanation behind why that is, is catching up. There's still, you know, dark matter and dark energy, this kind of thing. We have a model that sort of explains, that fits, the data really well; it just has a few parameters that you have to specify. People say those are fudge factors: with enough fudge factors you can explain anything. But the mathematical point of a model is that you want to have fewer parameters in your model than data points in your observational set. If you have a model with 10 parameters that explains 10 observations, that is a completely useless model; it's what's called overfitted. But if you have a model with two parameters and it explains a trillion observations, that's telling you something. The standard cosmological model, I think, has something like 14 parameters, and it explains petabytes of data that the astronomers have.
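The overfitting point, that a model with as many parameters as data points explains nothing, has a classic concrete form: a polynomial through n points. A sketch (the data values are arbitrary, made up for illustration): Lagrange interpolation fits five completely arbitrary "observations" exactly, which is precisely why the exact fit carries no information.

```python
# A 5-parameter model (degree-4 polynomial) fit to 5 data points via
# Lagrange interpolation. It reproduces ANY five points perfectly,
# so a perfect fit here is evidence of nothing: textbook overfitting.
def lagrange(points):
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)  # basis polynomial
            total += term
        return total
    return p

data = [(0, 3.0), (1, -1.0), (2, 4.0), (3, 0.0), (4, 7.0)]  # arbitrary
p = lagrange(data)
print(all(abs(p(x) - y) < 1e-9 for x, y in data))  # True: perfect, useless fit
```

By contrast, a two-parameter line that came within a few percent of a million points would be a genuinely informative model, which is the asymmetry Tao is pointing at.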
One way to think about a physical or mathematical theory is as a compression of the universe, a data compression. You have these petabytes of observations, and you'd like to compress them to a model which you can describe in five pages, specifying a certain number of parameters, and which fits, to reasonable accuracy, almost all of your observations. The more compression you achieve, the better your theory. In fact, one of the great surprises of our universe, and of everything in it, is that it's compressible at all. It's the unreasonable effectiveness of mathematics. Yeah, Einstein had a quote like that: the most incomprehensible thing about the universe is that it is comprehensible. Right, and not just comprehensible; you can capture it in an equation like E = mc². There is actually a possible mathematical explanation for that. There's this phenomenon in mathematics called universality. Many complex systems at the macroscale emerge out of lots of tiny interactions at the microscale, and normally, because of the combinatorial explosion, you would think that the macroscale equations must be exponentially more complicated than the microscale ones. And they are, if you want to solve them completely exactly. If you want to model all the atoms in a box of air: Avogadro's number is humongous, there's a huge number of particles, and if you actually had to track each one, it would be ridiculous. But certain laws emerge at the macroscopic scale that almost don't depend on what's going on at the microscale, or only depend on a very small number of parameters. If you want to model a gas of quintillions of particles in a box, you just need to know its temperature, pressure, and volume, a few parameters, like five or six, and that models almost everything you need to know about those 10^23 or so particles.
We don't understand universality anywhere near as well as we would like mathematically, but there are much simpler toy models where we do have a good understanding of why universality occurs. The most basic one is the central limit theorem, which explains why the bell curve shows up everywhere in nature, why so many things are distributed by what's called a Gaussian distribution, the famous bell curve. There's now even a meme with this curve, and universality applies broadly, even to the meme. Yes, you can go meta if you like. But there are many, many processes where, for example, you take lots and lots of independent random variables and average them together in various ways, a simple average or a more complicated average, and we can prove in various cases that these bell curves, these Gaussians, emerge. And it is a satisfying explanation. Sometimes they don't emerge, though. If you have many different inputs and they're all correlated in some systemic way, then you can get something very far from a bell curve showing up. And it's also important to know when this hypothesis fails; universality is not a 100% reliable thing to rely on. The global financial crisis was a famous example of this. People thought that mortgage defaults had this sort of Gaussian-type behavior: take a population of, say, 100,000 Americans with mortgages and ask what proportion of them will default on their mortgages.
If everything were decorrelated, you'd get a nice bell curve, and you can manage risk with options and derivatives and so forth; there's a very beautiful theory. But if there are systemic shocks in the economy that can push everybody to default at the same time, that's very non-Gaussian behavior, and this wasn't fully accounted for in 2008. Now I think there's more awareness that systemic risk is actually a much bigger issue, and that just because a model is pretty and nice, it may not match reality. Right. So the mathematics of working out what models do is really important, but so is the science of validating when the models fit reality and when they don't. You need both. And mathematics can help, because, for example, these central limit theorems tell you that under certain axioms, like non-correlation, if all the inputs were not correlated to each other, then you get this Gaussian behavior, and things are fine. It tells you where to look for weaknesses in the model. If you have a mathematical understanding of the central limit theorem, and someone proposes using these Gaussian copulas or whatever to model default risk, then if you're mathematically trained you would say: okay, but what if there is systemic correlation between all your inputs? And then you can ask the economists how much of a risk that is, and you can go look for it. So there's always this synergy between science and mathematics. A little bit more on the topic of universality: you're known and celebrated for working across an incredible breadth of mathematics, reminiscent of Hilbert a century ago. In fact, the great Fields Medal-winning mathematician Tim Gowers has said that you are the closest thing we get to Hilbert. He's a colleague of yours. Oh yeah, a good friend.
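Both halves of the default-risk discussion above, bell-curve concentration under independence and its breakdown under a systemic shock, can be seen in a small simulation. This is a sketch with made-up, illustrative numbers (default probabilities, shock size, the fixed seed), not a model from the interview.

```python
import random, statistics

# Independent defaults vs. a shared systemic shock.
# All parameters below are illustrative assumptions.
random.seed(2)

def portfolio_default_rate(correlated):
    """Fraction of 1,000 'mortgages' defaulting this year. Baseline
    default probability 5%; in the correlated case a rare systemic
    shock (5% of years) raises everyone's probability to 50% at once."""
    shock = correlated and random.random() < 0.05
    p = 0.5 if shock else 0.05
    return sum(random.random() < p for _ in range(1000)) / 1000

independent = [portfolio_default_rate(False) for _ in range(2000)]
systemic    = [portfolio_default_rate(True)  for _ in range(2000)]
print(round(statistics.stdev(independent), 3))  # tiny: central-limit behavior
print(round(statistics.stdev(systemic), 3))     # an order of magnitude larger
```

With independent inputs the yearly default rate barely moves off 5%, exactly the bell-curve regime, while the correlated version spends most years looking identical and then occasionally defaults en masse: the fat-tailed, non-Gaussian behavior the Gaussian copula models understated.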
But anyway, you are known for this ability to go both deep and broad in mathematics, so you're the perfect person to ask: do you think there are threads that connect all the disparate areas of mathematics? Is there a kind of deep underlying structure to all of mathematics? There are certainly a lot of connecting threads, and a lot of the progress of mathematics can be represented by stories of two fields of mathematics that were previously not connected, and connections being found. An ancient example is geometry and number theory. In the time of the ancient Greeks, these were considered different subjects. I mean, mathematicians worked on both; you could work on geometry, most famously, but also on numbers, but they were not really considered related. A little bit, maybe: you could say that this length was five times that length because you could take five copies of it, and so forth. But it wasn't until Descartes, who developed analytic geometry, that it was really realized that you can parameterize the plane, a geometric object, by two real numbers, so that every point can be described by coordinates, and geometric problems can be turned into problems about numbers. Today this feels almost trivial, like there's no content to it: of course a plane is x and y, because that's what we teach, and it's internalized. But it was an important development that these two fields were unified. And this process has just gone on throughout mathematics, over and over again. Algebra and geometry were separate, and now we have a subject, algebraic geometry, that connects them; and so on, over and over again. And that's certainly the type of mathematics that I enjoy the most. So I think there are different styles of being a mathematician: hedgehogs and foxes. A fox knows many things a little bit, but a hedgehog knows one thing very, very well.
In mathematics there are definitely both hedgehogs and foxes, and then there are people who can play both roles. And I think the ideal collaboration between mathematicians involves some diversity: a fox working with many hedgehogs, or vice versa. But I identify mostly as a fox. Certainly I like arbitrage, somehow: learning how one field works, learning the tricks of that field, and then going to another field which people don't think is related, where I can adapt the tricks. So you see the connections between the fields. Yeah. There are other mathematicians who are far deeper than I am; they're really hedgehogs: they know everything about one field, and they're much faster and more effective in that field. But I can give them these extra tools. You said that you can be both the hedgehog and the fox, depending on the context, depending on the collaboration. So can you, if it's at all possible, speak to the difference between those two ways of thinking about a problem? Say you're encountering a new problem: searching for the connections versus a very singular focus. I'm much more comfortable with the fox paradigm. I like looking for analogies, narratives. I spend a lot of time on this: if there's a result I see in one field, and I like the result, it's a cool result, but I don't like the proof, it uses types of mathematics that I'm not super familiar with, I often try to reprove it myself using the tools that I favor. Often my proof is worse, but by the exercise of doing so, I can say, oh, now I can see what the other proof was trying to do, and from that I can get some understanding of the tools that are used in that field. So it's very exploratory, doing crazy things in crazy fields and reinventing the wheel a lot. Yeah.
Whereas the hedgehog style is much more scholarly. You're very knowledge-based: you stay up to speed on all the developments in the field, you know all the history, and you have a very good understanding of exactly the strengths and weaknesses of each particular technique. I think you rely a lot more on calculation than on trying to find narratives. I can do that too, but there are other people who are extremely good at it.

Let's step back and maybe look at a bit of a romanticized version of mathematics. I think you've said that early on in your life, math was more like a puzzle-solving activity. When did you first encounter a problem or proof where you realized math can have a kind of elegance and beauty to it?

That's a good question. When I came to graduate school at Princeton, John Conway was there at the time; he passed away a few years ago. I remember one of the very first research talks I went to was a talk by Conway on what he called extreme proofs. Conway had this amazing way of thinking about all kinds of things, in ways you wouldn't normally consider. He thought of proofs themselves as occupying some sort of space. If you want to prove something, let's say that there are infinitely many primes, you have many different proofs, and you can rank them along different axes: some proofs are elegant, some are long, some are elementary, and so forth. So there's this cloud; the space of all proofs itself has some sort of shape. And he was interested in the extreme points of this shape: out of all these proofs, which is the shortest, at the expense of everything else, or the most elementary, or whatever.
He gave some examples of well-known theorems, and then he would give what he thought was the extreme proof along these different axes. I found that really eye-opening. It's not just that getting a proof of a result is interesting; once you have that proof, you can try to optimize it in various ways. Proving itself has some craftsmanship to it. It certainly informed my writing style. When you do your math assignments as an undergraduate, your homework and so forth, you're encouraged to just write down any proof that works and hand it in; as long as it gets a tick mark, you move on. But if you want your results to actually be influential and be read by people, they can't just be correct. They should also be a pleasure to read: motivated, and adaptable so they generalize to other things. It's the same in many other disciplines, like coding. There are a lot of analogies between math and coding; I like analogies, if you haven't noticed. You can write spaghetti code that works for a certain task, quick and dirty, and it works. But there are lots of good principles for writing code well, so that other people can use it and build upon it, and so that it has fewer bugs, and so on. There are similar things in mathematics.

First of all, there are so many beautiful ideas there, and Conway is one of the great minds ever in mathematics and computer science. Just even considering the space of proofs, and asking, okay, what does this space look like, and what are the extremes? You mentioned coding as an analogy, which is interesting, because there's also this activity called code golf.

Oh, yeah. Yeah. Yeah.
Which I also find beautiful and fun, where people use different programming languages to try to write the shortest possible program that accomplishes a particular task. I believe there are even competitions on this.

Yeah.

And it's also a nice way to stress-test not just the programs, or in this case the proofs, but also the different languages, maybe the different notations, that you use to accomplish a task.

Yeah, you learn a lot. It may seem like a frivolous exercise, but it can generate all these insights which, if you didn't have this artificial objective to pursue, you might not see.

What to you is the most beautiful or elegant equation in mathematics? One of the things people often look to in beauty is simplicity. If you look at E = mc², a few concepts come together; that's why Euler's identity is often considered the most beautiful equation in mathematics. Do you find beauty in that one, Euler's identity?

Yeah.
Well, as I said, what I find most appealing is connections between different things. So, e^{iπ} = −1. People say, oh, it uses all the fundamental constants; that's cute. But to me it says more. The exponential function was introduced by Euler to measure exponential growth, as in compound interest, or decay; anything which is continuously growing or continuously decreasing, growth and decay, dilation or contraction, is modeled by the exponential function. Whereas π comes from circles and rotation: if you want to rotate a needle by 180°, for example, you need to rotate by π radians. And i, in the complex numbers, represents the swing to the imaginary axis, a 90° rotation, a change in direction. So the exponential function represents growth and decay in the direction where you already are. When you stick an i in the exponential, instead of motion in the same direction as your current position, the motion is at right angles to your position, which is rotation. And then e^{iπ} = −1 tells you that if you rotate for time π, you end up facing the opposite direction. So it unifies geometry, through dilation and exponential growth, with dynamics, through this act of complexification, rotation by i. It connects together all these tools of mathematics: dynamics, geometry, and the complex numbers. They all become next-door neighbors in mathematics because of this identity.

The thing you called cute, the collision of notations from these disparate fields: do you think it's just a frivolous side effect, or is there legitimate value when all our old notational friends come together in one equation?

Well, it's confirmation that you have the right concepts. When you first study anything, you have to measure things and give them names.
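The geometric reading of Euler's identity that Tao gives here can be checked numerically. A minimal Python illustration (my own sketch, not from the conversation):

```python
import cmath
import math

# Illustrative check of e^{i*pi} = -1: a real exponent models
# growth or decay, an imaginary exponent models rotation, and
# rotating for "time" pi lands facing the opposite direction, -1.
z = cmath.exp(1j * math.pi)
print(z)  # within floating-point error of -1

# A quarter turn: e^{i*pi/2} is i, a 90-degree change of direction.
quarter_turn = cmath.exp(1j * math.pi / 2)
print(quarter_turn)  # within floating-point error of i
```

The tiny imaginary residue in the first result is floating-point noise, not a failure of the identity.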
Initially, because your model is too far off from reality, you sometimes give the wrong things the best names, and you only find out later what's really important. Physicists do this sometimes too. Actually, with physics, take E = mc²: one of the big developments was the E. When Aristotle first came up with his laws of motion, and then Galileo and Newton and so forth, they worked with the things they could measure. They could measure mass and acceleration and force and so on, and so in Newtonian mechanics, F = ma was the famous second law of motion. Those were the primary objects, so they were given the central billing in the theory. It was only later, after people started analyzing these equations, that there always seemed to be certain quantities that were conserved: momentum and energy. And it's not obvious that there is such a thing as energy. It's not something you can directly measure the way you can measure mass and velocity. But over time people realized that it was actually a really fundamental concept. Hamilton, in the 19th century, eventually reformulated Newton's laws of physics into what's called Hamiltonian mechanics, where the energy, now called the Hamiltonian, is the dominant object. Once you know how to measure the Hamiltonian of any system, you can describe the dynamics completely, what happens to all the states. It really was a central actor, which was not obvious initially. And this change of perspective really helped when quantum mechanics came along, because the early physicists who studied quantum mechanics had a lot of trouble trying to adapt their Newtonian thinking, in which everything was a particle, to quantum mechanics, where everything was a wave. It just looked really, really weird.
You ask, what is the quantum version of F = ma? And it's really hard to give an answer. But it turns out that the Hamiltonian, which was secretly behind the scenes in classical mechanics, is also the key object in quantum mechanics. There is also an object there called the Hamiltonian. It's a different type of object, what's called an operator rather than a function, but again, once you specify it, you specify the entire dynamics. There's something called Schrödinger's equation that tells you exactly how quantum systems evolve once you have the Hamiltonian. So, side by side, the two theories look completely different, one involving particles, one involving waves. But with this centrality you can start transferring a lot of intuition and facts from classical mechanics to quantum mechanics.

For example, in classical mechanics there's this thing called Noether's theorem: every time there's a symmetry in a physical system, there is a conservation law. The laws of physics are translation-invariant. If I move ten steps to the left, I experience the same laws of physics as if I stay here, and that corresponds to conservation of momentum. If I turn around by some angle, again I experience the same laws of physics; this corresponds to conservation of angular momentum. If I wait for ten minutes, I still have the same laws of physics; this is time-translation invariance, and it corresponds to the law of conservation of energy. So there's this fundamental connection between symmetry and conservation. And that's also true in quantum mechanics, even though the equations are completely different, because both theories come from a Hamiltonian, and the Hamiltonian controls everything. Every time the Hamiltonian has a symmetry, the equations will have a conservation law.
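The centrality of the Hamiltonian that Tao describes can be made concrete in a toy system. A sketch (my own illustration, not from the conversation) of Hamilton's equations for a harmonic oscillator, using a symplectic integrator so the conserved energy stays visible:

```python
# Toy Hamiltonian system: a harmonic oscillator with
# H(q, p) = p^2/2 + q^2/2. Hamilton's equations read
# dq/dt = dH/dp = p and dp/dt = -dH/dq = -q.
def hamiltonian(q, p):
    return 0.5 * p * p + 0.5 * q * q

def step(q, p, dt):
    # Symplectic (semi-implicit) Euler: update p from the old q,
    # then q from the new p; this keeps the energy from drifting.
    p = p - q * dt
    q = q + p * dt
    return q, p

q, p = 1.0, 0.0
e0 = hamiltonian(q, p)
for _ in range(10_000):
    q, p = step(q, p, 1e-3)

# Energy is the conserved quantity that Noether's theorem attaches
# to time-translation symmetry; it should barely move.
print(e0, hamiltonian(q, p))
```

Once the Hamiltonian is specified, nothing else is needed to evolve the system, which is the "central actor" point Tao is making.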
So once you have the right language, it actually makes things a lot cleaner. One of the reasons we can't unify quantum mechanics and general relativity yet is that we haven't figured out what the fundamental objects are. For example, we have to give up the notion of space and time being these almost-Euclidean spaces. We kind of know that at very tiny scales there are going to be quantum fluctuations, a space-time foam, and trying to use Cartesian coordinates x, y, z is just a non-starter. But we don't know what to replace them with. We don't yet have the mathematical concepts, the analogue of the Hamiltonian, that would organize everything.

Does your gut say that there is a theory of everything? Is it even possible to find this language that unifies general relativity and quantum mechanics?

I believe so. The history of physics has been one of unification, much like mathematics, over the years. Electricity and magnetism were separate theories until Maxwell unified them. Newton unified the motions of the heavens with the motions of objects on earth, and so forth. So it should happen. It's just that, to go back to this model of observation and theory, part of our problem is that physics is a victim of its own success. Our two big theories of physics, general relativity and quantum mechanics, are so good now that together they cover 99.9% of all the observations we can make. You have to go to insanely extreme particle accelerations, or the early universe, or other things that are really hard to measure, to get any deviation from either of these two theories, to the point where you could actually figure out how to combine them.
But I have faith. We've been doing this for centuries, and we've made progress before. There's no reason why we should stop.

Do you think it will be a mathematician that develops the theory of everything?

What often happens is that when the physicists need a piece of mathematics, there's often some precursor that the mathematicians worked out earlier. When Einstein started realizing that space was curved, he went to a mathematician and asked whether there was some theory of curved space that the mathematicians had already come up with that could be useful, and the answer was yes, Riemann had come up with something. Riemann had developed Riemannian geometry, which is precisely a theory of spaces that are curved in various general ways, and it turned out to be almost exactly what was needed for Einstein's theory. This goes back to Wigner's unreasonable effectiveness of mathematics. I think the theories that work well to explain the universe tend to involve the same mathematical objects that work well to solve mathematical problems. Ultimately, they are both ways of organizing data in useful ways.

It just feels like you might need to go to some weird land that's very hard to intuit. Like string theory.

Yeah, that was a leading candidate for many decades. I think it's slowly falling out of fashion, because it's not matching experiment.

So one of the big challenges, of course, like you said, is that experiment is very tough, because of how effective both theories are. But the other is that you're not just deviating from space-time; you're going into some crazy number of dimensions.
You're doing all kinds of weird stuff. We've gone so far from the flat earth we started at that it's very hard to use our limited, ape-descended cognition to intuit what that reality is really like.

This is why analogies are so important. The round earth is not intuitive, because we're stuck on it. But round objects in general we have pretty good intuition about, and we have intuition about how light works, and so forth. It's actually a good exercise to work out how eclipses and the phases of the sun and the moon can be really easily explained by round-earth and round-moon models. You can just take a basketball, a golf ball, and a light source, and do these things yourself. So the intuition is there, but you have to transfer it.

That is a big intellectual leap, to go from flat to round earth, because our life is mostly lived in flatland. And we all take it for granted. We take so many things for granted because science has established a lot of evidence for this kind of thing. But, you know, we're on a round rock flying through space.

Yeah.

And it's a big leap, and you have to take a chain of those leaps, more and more, as we progress.

Right. So modern science is maybe again a victim of its own success, in that, in order to be more accurate, it has to move further and further away from your initial intuition. And so, for someone who hasn't gone through the whole process of a science education, it looks more and more suspicious because of that. We need more grounding. I think there are scientists who do excellent outreach.
And there are lots of science activities you can do at home; there are lots of YouTube videos. I did a YouTube video recently with Grant Sanderson, which we talked about earlier, on how the ancient Greeks were able to measure things like the size of the earth and the distance to the moon, using techniques that you could replicate yourself. It doesn't all have to be fancy space telescopes and very intimidating mathematics.

Yeah, I highly recommend it. I believe you gave a lecture, and you also did an incredible video with Grant. It's a beautiful experience to try to put yourself in the mind of a person from that time, shrouded in mystery, right? You're on this planet. You don't know its shape or its size. You see some stars, you see some things, and you try to localize yourself in this world and make some kind of general statement about the distance to places.

Yeah. Changing your perspective is really important. They say travel broadens the mind; this is intellectual travel. Put yourself in the mind of the ancient Greeks, or of some other person in some other time period. Make hypotheses, spherical cows, whatever; speculate. This is what mathematicians do, and what some artists do, actually.

It's just incredible that, given the extreme constraints, you could still say very powerful things. That's why looking back in history is inspiring: how much could be figured out when there wasn't much to work with.

Right. If you propose axioms, then the mathematics lets you follow those axioms to their conclusions, and sometimes you can get quite a long way from the initial hypotheses.

If we can stay in the land of the weird: you mentioned general relativity. You've contributed to the mathematical understanding of Einstein's field equations.
Can you explain this work? From a mathematical standpoint, what aspects of general relativity are intriguing to you, challenging to you?

I have worked on some equations, in particular something called the wave maps equation, or the sigma field model, which is not quite the equation of space-time gravity itself, but of certain fields that might exist on top of space-time. Einstein's equations of relativity describe space and time itself, but then there are other fields that live on top of that: the electromagnetic field, various other fields, a whole hierarchy of different equations, of which Einstein's is considered one of the most nonlinear and difficult. Relatively low in the hierarchy is this wave maps equation. It's a wave which, at any given point, is constrained to lie on a sphere. You can think of a bunch of arrows in space and time, pointing in different directions, but propagating like waves: if you wiggle one arrow, the wiggle will propagate and make all the other arrows move, kind of like sheaves of wheat in a wheat field. I was interested in the global regularity problem for this equation too: is it possible for all the energy to collect at a point? The equation I considered is what's called a critical equation, where the behavior at all scales is roughly the same. And I was able, barely, to show that you couldn't actually force a scenario in which all the energy concentrated at one point; the energy had to disperse a little bit, and the moment it dispersed a little bit, it would stay regular. This was back in 2000, and it was part of why I got interested in Navier–Stokes afterwards, actually. I developed some techniques to solve that problem. Part of it is that this problem is really nonlinear, because of the curvature of the sphere.
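For reference, the sphere-valued wave maps equation Tao describes can be written, in one standard formulation (added here for context, not quoted from the conversation), as:

```latex
% phi(t, x) takes values on the sphere (|phi| = 1); the
% nonlinearity on the right comes from the sphere's curvature.
\Box \phi = -\,\phi \left( \partial^\alpha \phi \cdot \partial_\alpha \phi \right),
\qquad |\phi| = 1,
\qquad \Box = -\partial_t^2 + \Delta .
```

The constraint that the field stays on the sphere is exactly what makes the equation nonlinear, as described above.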
There was a certain nonlinear effect which was non-perturbative: when you looked at it naively, it appeared larger than the linear effects of the wave equation, so it was hard to keep things under control, even when the energy was small. But I developed what's called a gauge transformation. The equation is kind of like an evolution of sheaves of wheat, all bending back and forth, so there's a lot of motion. But imagine stabilizing the flow by attaching little cameras at different points in space, cameras that try to move in a way that captures most of the motion. Under this stabilized view, the flow becomes a lot more linear. I discovered a way to transform the equation to reduce the amount of nonlinear effects, and then I was able to solve it. I found this transformation while visiting my aunt in Australia. I was trying to understand the dynamics of all these fields, and I couldn't do it with pen and paper, and I didn't have enough facility with computers to do any simulations. So…

Transcript truncated. Watch the full video for the complete content.
