Michael Levin: Hidden Reality of Alien Intelligence & Biological Life | Lex Fridman Podcast #486

Lex Fridman | 03:18:09 | Mar 27, 2026
Michael Levin discusses his work at Tufts on building biological systems to explore intelligence, memory, and life, framing it as a practical venture to translate deep ideas into therapies and improvements for sentient beings.

Michael Levin reframes mind and life as scalable, interactively understood phenomena, arguing for a continuum of cognition across biological and non-biological systems and pushing us to map a Platonic space of minds through actionable experiments.

Summary

In this second appearance, Michael Levin joins Lex Fridman to push our intuition about life, mind, and intelligence beyond human-centric boundaries. Levin argues that understanding embodied minds requires examining third-, second-, and first-person perspectives as operational claims we can test, not metaphysical absolutes. He inverts the usual explanatory pyramid, putting behavior science rather than physics at the bottom, and introduces the spectrum of persuadability: systems differ in which interaction protocols, from hardware rewiring to high-level prompting, can steer them. Through regenerative medicine and bioengineering, Levin shows that we can persuade living systems to develop new capabilities, sometimes by high-level prompts rather than by micromanaging molecular events. He introduces the cognitive light cone as a way to measure the scale of goals a system can actively pursue, from single cells to multicellular organisms to intelligent collectives. The conversation then dives into Xenobots and Anthrobots, explaining how cells removed from their native context can form novel, self-motivated beings that heal neural wounds and even age differently, all without changes to their DNA. Levin connects these findings to a broader framework, the Technological Approach to Mind Everywhere (TAME), which maps how different embodiments move along a continuum of agency and how interfaces determine which patterns come through. Finally, the discussion pivots to big questions: the Platonic space of minds, the possibility of uploading or re-embodying cognition, and the ethical and epistemic implications of treating minds as scalable, testable patterns rather than fixed entities. This is not just philosophy; Levin argues these ideas guide concrete experiments in regenerative medicine, AI interfaces, and future alien cognition. The episode closes on a hopeful note: by mapping these spaces and testing interfaces, we may finally understand the diversity of minds on Earth and beyond, learning to relate to them with empathy and rigor.

Key Takeaways

  • Persuadability is a spectrum: as systems gain agency, our control tools shift from micromanagement to high-level prompting and shared goals.
  • Cognitive light cone: the scale of a system’s most active goal determines its potential for intelligence and social integration, from bacteria to humans.
  • Xenobots and Anthrobots show that cells removed from their usual context can acquire novel, goal-directed behaviors without changing their DNA.
  • The TAME framework posits that cognitive claims are protocol claims—what tools are used to elicit a system’s response—and can be tested empirically.
  • Physics is a powerful lens but not a complete theory for life or mind; interfaces and multi-scale dynamics reveal cognition beyond low-level mechanisms.
  • A Platonic space of mind suggests a structured latent space where patterns of cognition can be mapped, compared, and potentially harnessed through interfaces.
  • Intrinsic motivations (e.g., clustering in sorting algorithms) reveal that even simple systems exhibit non-design-driven competencies that become accessible via interfaces.
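
The sorting-algorithm item above refers to experiments in which classical sorting code is re-run from the element's-eye view, with some elements deliberately broken, and un-designed side effects are then measured. The toy sketch below is in that spirit only; the rules, parameters, and metrics are my own stand-ins, not the published setup:

```python
import random

def swap(values, frozen, i, j):
    values[i], values[j] = values[j], values[i]
    frozen[i], frozen[j] = frozen[j], frozen[i]

def tick(values, frozen):
    """One asynchronous pass: each position, in random order, applies the
    local rule of the element currently there (an element's-eye bubble sort).
    Frozen elements never initiate swaps, though active neighbors may still
    move them, mimicking deliberately 'damaged' elements."""
    for i in random.sample(range(len(values)), len(values)):
        if frozen[i]:
            continue
        if i + 1 < len(values) and values[i] > values[i + 1]:
            swap(values, frozen, i, i + 1)      # push local disorder right
        elif i > 0 and values[i - 1] > values[i]:
            swap(values, frozen, i - 1, i)      # or pull itself left

def sortedness(values):
    """Fraction of adjacent pairs already in order: the collective's progress
    toward its goal state despite broken parts."""
    return sum(a <= b for a, b in zip(values, values[1:])) / (len(values) - 1)

def clustering(frozen):
    """Fraction of adjacent pairs sharing an 'algotype' (frozen vs. active):
    the kind of un-designed side effect one can go looking for."""
    return sum(a == b for a, b in zip(frozen, frozen[1:])) / (len(frozen) - 1)

random.seed(1)
values = random.sample(range(60), 60)
frozen = [random.random() < 0.2 for _ in values]   # 20% damaged elements
for _ in range(300):
    tick(values, frozen)
print(f"sortedness: {sortedness(values):.2f}, clustering: {clustering(frozen):.2f}")
```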

Who Is This For?

Essential viewing for bioengineers, regenerative medicine researchers, AI/ML researchers exploring embodied cognition, and anyone curious about how minds emerge across diverse substrates (biological, computational, or alien). Levin's framework offers practical experiments and a bold philosophical map of minds beyond the human.

Notable Quotes

"How do embodied minds arise in the physical world, and what determines the capabilities and properties of those minds?"
Opening framing of Levin's central investigative question.
"There is a spectrum of persuadability, because it means you aren’t just arguing with a clock or a thermostat—you’re interfacing with a system that can learn."
Explains the core idea behind the persuasion framework.
"A cognitive light cone is the size of the biggest goal state you can actively pursue, for better or worse."
Defines the key scaling concept for mind and intelligence.
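
As a rough concretization of that definition (my sketch, not Levin's formalism), a cognitive light cone can be written as a small data structure. The bacterium numbers follow his in-conversation example; the dog and human figures are placeholder guesses:

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Spatiotemporal extent of the biggest goal a system can actively
    pursue: explicitly not its sensory reach or sphere of influence."""
    goal_radius_m: float      # how far away its largest goal can lie
    memory_s: float           # how far back the goal draws on the past
    anticipation_s: float     # how far ahead it can work toward a state
    goal_spaces: tuple        # the state spaces its goals live in

# Bacterium values are Levin's own (10-20 microns, ~20 min back, ~5 min
# forward); the dog and human entries are invented orders of magnitude.
bacterium = CognitiveLightCone(20e-6, 20 * 60, 5 * 60, ("metabolic",))
dog = CognitiveLightCone(300.0, 7 * 86400, 86400, ("3D space", "social"))
human = CognitiveLightCone(1.3e7, 3e9, 3e9, ("3D space", "social", "economic"))
```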
"Xenobots and Anthropods demonstrate that cells without their usual neighbors can form novel, self-motivated beings with new transcriptomes."
Illustrates transformative bioengineering results that challenge traditional biology.
"We don’t know what spaces minds can inhabit until we build interfaces that allow them to ingress into our world."
Articulates the Platonic space concept and the role of interfaces.
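
One interface Levin describes concretely later in the transcript is a tic-tac-toe game played against an alien who only knows arithmetic: a 3x3 magic square, in which every row, column, and diagonal sums to 15, makes "get three in a row" and "collect three numbers totaling 15" the same game. That equivalence is mechanically checkable; this verification sketch is mine, not from the episode:

```python
from itertools import combinations

# Lo Shu magic square: every row, column, and diagonal sums to 15.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

# The 8 winning lines of tic-tac-toe, as board coordinates.
LINES = [[(r, c) for c in range(3)] for r in range(3)] + \
        [[(r, c) for r in range(3)] for c in range(3)] + \
        [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]

# Geometry -> arithmetic: every tic-tac-toe line sums to 15...
assert all(sum(MAGIC[r][c] for r, c in line) == 15 for line in LINES)

# ...and arithmetic -> geometry: every 3-subset of 1..9 summing to 15
# is one of those 8 lines, so the two players share the same game.
triples = {frozenset(t) for t in combinations(range(1, 10), 3) if sum(t) == 15}
line_sets = {frozenset(MAGIC[r][c] for r, c in line) for line in LINES}
assert triples == line_sets and len(triples) == 8
```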

Questions This Video Answers

  • How does Levin's TAME framework change regenerative medicine strategies?
  • What exactly is a cognitive light cone, and how can we measure it in non-human systems? (See the barrier sketch after this list.)
  • Can xenobots teach us about alien cognition and interspecies communication?
  • What would a Platonic space of minds imply for AI alignment and ethics?
  • Is there a practical path to communicating with unconventional intelligences (plants, microbes, AI) using high-level prompts rather than lower-level genetic tweaks?
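
On the measurement question above: the episode's operational answer is to put barriers between a system and its goal and score the ingenuity of its workarounds. A toy illustration of that read-out, with the grid world and agent names entirely my own invention:

```python
from collections import deque

GRID = [
    "G....",
    "####.",   # barrier between the agent and its goal, gap on the right
    "S....",
]

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def neighbors(pos):
    r, c = pos
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def dist(a, b):  # Manhattan distance to the goal: the local "gradient"
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_agent(start, goal, max_steps=50):
    """Pure gradient-following: only takes moves that immediately shrink
    the gap, so it stalls at the barrier."""
    pos = start
    for step in range(max_steps):
        if pos == goal:
            return step
        better = [n for n in neighbors(pos) if dist(n, goal) < dist(pos, goal)]
        if not better:
            return None          # stuck: no ingenuity about obstacles
        pos = better[0]
    return None

def search_agent(start, goal):
    """Shallow breadth-first search: tolerates transiently moving *away*
    from the goal, which is enough to route around the barrier."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        pos, steps = queue.popleft()
        if pos == goal:
            return steps
        for n in neighbors(pos):
            if n not in seen:
                seen.add(n)
                queue.append((n, steps + 1))
    return None

start, goal = find("S"), find("G")
print("greedy:", greedy_agent(start, goal))   # None: blocked by the barrier
print("search:", search_agent(start, goal))   # 10: goes around the gap
```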
Tags: Lex Fridman Podcast, Michael Levin, Platonic Space, TAME framework, Persuadability, Cognitive light cone, Xenobots, Anthrobots, Regenerative medicine, Bioelectric signaling
Full Transcript
- The following is a conversation with Michael Levin, his second time on the podcast. He is one of the most fascinating and brilliant biologists and scientists I've ever had the pleasure of speaking with. He and his labs at Tufts University study and build biological systems that help us understand the nature of intelligence, agency, memory, consciousness, and life in all of its forms here on Earth, and beyond. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here's Michael Levin. You write that the central question at the heart of your work, from biological systems to computational ones, is, "How do embodied minds arise in the physical world, and what determines the capabilities and properties of those minds?" Can you unpack that question for us, and maybe begin to answer it? - Well, the fundamental tension is across the first-person, second-person, and third-person descriptions of mind. In third-person, we want to understand how we recognize them: how do we know, looking out into the world, what degree of agency there is, and how best to relate to the different systems that we find? Are our intuitions any good when we look at something and it looks really stupid and mechanical, versus when it really looks like there's something cognitive going on there? How do we get good at recognizing them? Then there's the second-person, which is control, both for engineering and for regenerative medicine, when you want to tell the system to do something. What kind of tools are you going to use? And this is a major part of my framework: all of these kinds of things are operational claims. Are you going to use the tools of hardware rewiring, of control theory and cybernetics, of behavior science, of psychoanalysis and love and friendship? What are the interaction protocols that you bring? And then in first-person, it's this notion of having an inner perspective and being a system that has valence and cares about the outcome of things, makes decisions, has memories, and tells a story about itself and the outside world. How can all of that exist and still be consistent with the laws of physics and chemistry and the various other things that we see around us? That I find to be maybe the most interesting and the most important mystery for all of us, both on the scientific and on the personal level. So that's what I'm interested in. - So your work is focused on starting at the physics, going all the way to friendship and love and psychoanalysis. - Yeah, although actually I would turn that upside down. I think that pyramid is backwards, and I think it's behavior science at the bottom. I think it's behavior science all the way. In certain ways, even math is the behavior of a certain kind of being that lives in a latent space, and physics is what we call systems that at least look to be amenable to a very simple, low-agency kind of model, and so on. But that's what I'm interested in: understanding that and developing applications.
Because it's very important to me that what we do is transition deep ideas and philosophy into actual practical applications that not only make it clear whether we're making any progress or not, but also allow us to relieve suffering and make life better for all sentient beings, and enable us and others to reach their full potential. So these are very practical things, I think. - Behavioral science, I suppose, is more subjective, and mathematics and physics are more objective? Would that be the clear difference? - The idea basically is about where something is on that spectrum, and I've called it the spectrum of persuadability. You could call it the spectrum of intelligence or agency or something like that. I like the notion of the spectrum of persuadability because it's an engineering approach. It means that these are not things you can decide or have feelings about from a philosophical armchair. You have to make a hypothesis about which tools, which interaction protocols, you're going to bring to a given system, and then we all get to find out how that worked out for you. You could be wrong in many ways, in both directions. You can guess too high or too low, or wrong in various ways, and then we can all find out how that's working out. And so I do think that the behavior of certain objects is well described by specific formal rules, and we call those things the subject of mathematics. And then there are some other things whose behavior really requires the kinds of tools that we use in behavioral cognitive neuroscience, and those are other kinds of minds that we study in biology or in psychology or other sciences. - Why are you using the term persuadability? Who are you persuading, and of what? - Well- - In this context. - Yeah, the beginning of my work is very much in regenerative medicine, in bioengineering, things like that. So for those kinds of systems, the question is always: how do you get the system to do what you want it to do? There are cells, there are molecular networks, there are materials, there are organs and tissues and synthetic beings and biobots and whatever. So the idea is, if you're injured and I want your cells to regrow a limb, I have many options. One option is to micromanage all of the molecular events that have to happen, and there's an incredible number of those. Or maybe I just have to micromanage the cells and the stem-cell signaling factors. Or maybe I can actually give the cells a very high-level prompt that says, "You really should build a limb," and convince them to do it. So which of those is possible? Clearly people have a lot of intuitions about that. If you ask standard people in regenerative medicine and molecular biology, they're going to say, "Well, that convincing thing is crazy. What we really should be doing is talking to the cells, or better yet, the molecular networks." And in fact, all the excitement of the biological sciences today is in single-molecule approaches and big data and genomics and all of that. The assumption is that going down in scale is where the action's going to be, and I think that's wrong. But the thing that we can say for sure is that you can't guess that. You have to do experiments and you have to see, because you don't know where any given system is on that spectrum of persuadability.
And it turns out that every time we look, we take tools from behavioral science: different kinds of learning and training, models used in active inference and surprise minimization, perceptual multi-stability and visual illusions, stress perception, memory, active memory reconstruction, all these interesting things. When we apply them outside the brain to other kinds of living systems, we find novel discoveries and novel capabilities, actually being able to get the material to do new things that nobody had ever found before. And precisely, I think, because people didn't look at it from those perspectives; they assumed it was a low-level kind of thing. So when I say persuadability, I mean different types of approaches. We all know that if you want to persuade your wind-up clock to do something, you're not going to argue with it or make it feel guilty. You're going to have to get in there with a wrench and tune it up. If you want to do that same thing to a cell or a thermostat or an animal or a human, you're going to be using other sets of tools that we've given other names to. Now, of course, the important thing about that spectrum is that as you get to the right of it, where the agency of the system goes up, it is no longer just about persuading it to do things. It's a bidirectional relationship, what Richard Watson would call a mutual vulnerable knowing. The idea is that on the right side of that spectrum, when systems reach the higher levels of agency, you are willing to let that system persuade you of things as well. In molecular biology, you do things, and hopefully the system does what you want it to do, but you haven't changed. You're still exactly the way you came in. But on the right side of that spectrum, if you're having interactions with even cells, but certainly dogs, other animals, maybe other creatures soon, you're not the same at the end of that interaction as you were going in. It's a mutual, bidirectional relationship. So it's not just you persuading something else, it's not you pushing things. It's a mutual, bidirectional set of persuasions, whether those are purely intellectual or of other kinds. - So in order to be effective at persuading an intelligent being, you yourself have to be persuadable. So the closer in intelligence you are to the thing you're trying to persuade, the more persuadable you have to become, hence the mutual vulnerable knowing. What a term. - Yeah. You should talk to Richard as well. He's an amazing guy, and he's got some very interesting ideas about the intersection of cognition and evolution. But I think what you bring up is very important, because there has to be a kind of impedance match between what you're looking for and the tools that you're using. I think the reason physics always sees mechanism and not minds is that physics uses low-agency tools. You've got voltmeters and rulers and things like this. And if you use those tools as your interface, all you're ever going to see is mechanisms and those kinds of things. If you want to see minds, you have to use a mind. You have to have some degree of resonance between your interface and the thing you're hoping to find. - You said this about physics before.
Can you just linger on that and expand on what you mean: why is physics not enough to understand life, to understand mind, to understand intelligence? You make a lot of controversial statements with your work, and that's one of them, because there are a lot of physicists who believe they can understand life, the emergence of life, the origin of life, the origin of intelligence using the tools of physics. In fact, to those folks, all the other tools are a distraction. If you want to understand anything fundamentally, you have to start at physics. And you're saying, "No, physics is not enough." - Here's the issue. Everything here hangs on what it means to understand. For me, to understand doesn't just mean to have some sort of pleasing model that seems to capture some important aspect of what's going on. It also means that you have to be generative and creative in terms of capabilities. So for me, if I tell you this is what I think about cognition in cells and tissues, it means, for example, that I think we're going to be able to take those ideas and use them to produce new regenerative medicine that actually helps people in various ways. It's just an example. So if you think as a physicist you're going to have a complete understanding of what's going on from that perspective of fields and particles, and who knows what else is at the bottom there, does that mean that when somebody is missing a finger or has a psychological problem, or has these other high-level issues, that you have something for them, that you're going to be able to do something? Because my claim is that you're not going to. Even if you have some theory of physics that is completely compatible with everything that's going on, it's not enough. It's not specific enough to enable you to solve the problems you need to solve. In the end, when you need to solve those problems, the person you're going to go to is not a physicist. It's going to be either a biologist or a psychiatrist, or who knows, but it's not going to be a physicist. And the simple example is this. Let's say someone comes in here and tells you a beautiful mathematical proof, something really deep and beautiful, and there's a physicist nearby who says, "Well, I know exactly what happened. There were some air particles that moved from that guy's mouth to your ear. It moved the cilia in your ear, and the electrical signals went up to your brain. We have a complete accounting of what happened, done and done." But if you want to understand the more important aspect of that interaction, it's not going to be found in the Physics Department. It's going to be found in the Math Department. So my only claim is that physics is an amazing lens with which to view the world, but you're capturing certain things, and if you want to stretch to encompass these other things, we just don't call that physics anymore. We call that something else. - Okay. But you're kind of speaking about super complex organisms. Can we go to the simplest possible thing where you first take a step over the line, the Cartesian cut, as you've called it, from non-mind to mind, from non-living to living? The simplest possible thing, isn't that in the realm of physics to understand?
How do we understand that first step, where you say: that thing has no mind and is probably non-living, and here's a living thing that has a mind? That line. I think that's a really interesting line. Maybe you can speak to the line as well, and can physics help us understand it? - Yeah, let's talk about it. Well, first of all, of course it can help, meaning that I'm not saying physics is not helpful. Of course it's helpful. It's a very important lens on one slice of what's going on in any of these systems. But I think the most important thing I can say about that question is that I don't believe in any such line. I don't believe any of that exists. I think there is a continuum. I think we as humans like to demarcate areas on that continuum and give them names because it makes life easier, and then we have a lot of battles over so-called category errors when people transgress those categories. Most of those categories may have done some good service at the beginning, when the scientific method was getting started. I think at this point they mostly hold back science. Many, many categories that we can talk about are at this point very harmful to progress, because what those categories do is prevent you from porting tools. If you think that living things are fundamentally different from non-living things, or if you think that cognitive things are these advanced brainy things that are very different from other kinds of systems, what you're not going to do is take the tools that are appropriate to cognitive systems, the tools that have been developed in behavioral science and so on, and try them in other contexts, because you've already decided that there's a categorical difference, that it would be a categorical error to apply them. And people say this to me all the time: "You're making a category error." As if these categories were given to us from on high and we have to obey them forevermore. The categories should change with the science. So yeah, I don't believe in any such line, and I think a physics story is very often a useful part of the story, but for most interesting things, it's not the entire story. - Okay. So if there's no line, is it still useful to talk about things like the origin of life? That's one of the big open mysteries before us as a human civilization, as scientifically minded, curious Homo sapiens. How did this whole thing start? Are you saying there is no start? Is there a point where you could say: that invention right there was the start of it all on Earth? - In my experience, there's a much better approach than trying to define any kind of a line, because inevitably I've never found one that holds. We play this game all the time when I make my continuum claim. People try to come up with one: "Okay, well, what about this?" And I haven't found one yet that really shoots that down, where you can't zoom in and say, "Yeah, okay, but right before then this happened, and if we really look close, here's a bunch of steps in between." Pretty much everything ends up being a continuum. But here's what I think is much more interesting than trying to draw that line. What's really more useful is trying to understand the transformation process.
What is it that happened to scale up? And I'll give you a really dumb example, and we always get into this because people often really don't like this continuum view. The word adult. Everybody is going to say, "Look, I know what a baby is. I know what an adult is. You're crazy to say that there's no difference." I'm not saying there's no difference. What I'm saying is that the word adult is really helpful in court, because you just need to move things along, and so we've decided that if you're 18, you're an adult. However, what it completely conceals is the fact that, first of all, nothing special happens on your 18th birthday. Second, if you actually look at the data, the car rental companies have a much better estimate, because they actually look at the accident statistics, and they'll say about 25 is really what you're looking for. So theirs is a little better. It's less arbitrary. But in either case, what it's hiding is the fact that we do not have a good story of what happened from the time that you were an egg to the time that you're the supposed adult, and what the scaling of personal responsibility, decision-making, and judgment is. These are deep, fundamental questions. Nobody wants to get into that every time somebody has a traffic ticket. So, okay, we've just decided that there's this adult idea. And of course it does come up in court, because then somebody has a brain tumor, or somebody's eaten too many Twinkies, or something has happened, and you say, "Look, that wasn't me. Whoever did that, I was on drugs." "Well, why'd you take the drugs?" "Well, that was yesterday. Me today, this is..." Right? So we get into these very deep questions that are completely glossed over by this idea of an adult. So I think once you start scratching the surface, most of these categories are like that. They're convenient and they're good. You know, I get into this with neurons all the time. I'll ask people, "What's a neuron? Like, what's really a neuron?" And yes, if you're in neurobiology 101, of course you just say, "These are what neurons look like. Let's just study the neuroanatomy and we're done." But if you really want to understand what's going on, well, neurons develop from other types of cells, and that was a slow and gradual process, and most of the cells in your body do the things that neurons do. So what really is a neuron? Once you start scratching, this happens. And I have some things coming out of our lab and others that are very interesting about the origin of life. But I don't think it's about finding that one line. Yeah, there are innovations, right? There are innovations that allow you to scale in an amazing way, for sure. And there are lots of people that study those, things like thermodynamic and metabolic innovations and all kinds of architectures and so on. But I don't think it's about finding a line. I think it's about finding a scaling process. - ...the scaling process, but then there is more rapid scaling and there is slower scaling. So innovation, invention, I think, is useful to understand so you can predict how likely it is on other planets, for example. Or to be able to describe the likelihood of these kinds of phenomena happening in certain kinds of environments. Again, specifically in answering how many alien civilizations there are.
That's why it's useful. But it is also useful on a scientific level to have categories, not just because it makes us feel good and fuzzy inside, but because it makes conversation possible and productive, I think. If everything is a spectrum, it becomes difficult to make concrete statements. Like, we even use the terms biology and physics. Those are categories. Technically, it's all the same thing, really. Fundamentally, it's all the same. There's no difference between biology and physics. But it's a useful category. If you go to the physics department and the biology department, those people are different in some kind of categorical way. So, I don't know which is the chicken and which is the egg; maybe the categories create themselves because of the way we think about them and use them in language. But it does seem useful. - Let me make the opposite argument. They're absolutely useful. They're useful specifically when you want to gloss over certain things. The categories are exactly useful when there's a whole bunch of stuff to set aside. And this is what's important about science: the art of being able to say something without first having to say everything, which would make it impossible. So categories are great when you want to say, "Look, I know there's a bunch of stuff hidden here. I'm going to ignore all that, and let's get on with this particular thing." And all of that is great as long as you don't lose track of the stuff that you glossed over. And that's what I'm afraid is happening in a lot of different ways. And in terms of... Look, I'm very interested in life beyond Earth and all these kinds of things, so we should also talk about what I call SUTI, S-U-T-I, the search for unconventional terrestrial intelligences. I think we have much bigger issues than actually recognizing aliens off Earth. But I'll make this claim: I think the categorical stuff is actually hurting that search. Because if we try to define categories with the kinds of criteria that we've gotten used to, we are going to be very poorly set up to recognize life in novel embodiments. I think we have a kind of mind blindness. I think this is really key. To me, the cognitive spectrum is much more interesting than the spectrum of life. I think really what we're talking about is the spectrum of cognition. I know it's weird for a biologist to say, but I don't think life is all that interesting a category. I think the categories of different types of minds are extremely interesting. And to the extent that we think our categories are complete and are cutting nature at its joints, we are going to be very poorly placed to recognize novel systems. So for example, a lot of people will say, "Well, this is intelligent and this isn't," and it's a binary thing. That's useful occasionally, for some things. Instead of that, let's admit that we have a spectrum. But instead of just saying, "Oh, look, everything's intelligent," because if you do that, you're right, you can't do anything after that, what I'd like to say instead is: no, you have to be very specific as to what kind and how much. In other words, what problem spaces is it operating in? What kind of mind does it have? What kind of cognitive capacities does it have? You have to actually be much more specific. And we can even name them, right? That's fine.
We can name different types: this one is doing predictive processing; this one can't do that, but it can form memories. What kind? Well, habituation and sensitization, but not associative conditioning. It's fine to have categories for specific capabilities, and it actually makes for much more rigorous discussions, because it makes you say what it is that you are claiming this thing does, and it works in both directions. So, some people will say, "Well, that's a cell. That can't be intelligent." And I'll say, "Well, let's be very specific. Here's some problem solving that it's doing. Tell me why that doesn't match." Or in the opposite direction, somebody comes to me and says, "You're right, you're right. The whole solar system, man. It's just this amazing..." And I'm like, "Whoa, okay. Well, what is it doing? Tell me what tools of cognitive and behavioral science you are using to reach that conclusion." And so I think it's actually much more productive to take this operational stance and say, "Tell me what protocols you think you can deploy with this thing that would lead you to use these terms." - To have a bit of a meta-conversation about the conversation, I should say that part of the persuadability argument that we two intelligent creatures are doing is me playing devil's advocate every once in a while. And you did the same, which is kind of interesting: taking the opposite view to see what comes out. Because you don't know the result of the argument until you have the argument, and it seems productive to just take the other side of it. - For sure. It's a very important thinking aid: first of all, what they call steel-manning, to try to make the strongest possible case for the other side and to ask yourself, "Okay, what are all the places that I am sort of glossing over because I don't know exactly what to say? Where are all the holes in the argument, and what would a really good critique look like?" - Sorry to go back, just to linger on the term, because it's so interesting: persuadability. Did I understand correctly that you mean it's kind of synonymous with intelligence? So it's an engineering-centric view of an intelligent system. Because if it's persuadable, you're more focused on: how can I steer the goals of the system, the behaviors of the system? Meaning an intelligent system, maybe, is a goal-oriented, goal-driven system with agency. And when you call it persuadable, you're thinking more like, "Okay, here's an intelligent system that I'm interacting with that I would like to get to accomplish certain things." But fundamentally, they're synonymous or correlated, persuadability and intelligence? - They're definitely correlated. So let me preface this with one thing. When I say it's an engineering perspective, I don't mean that the standard tools we use in engineering, and this idea of enforced control and steering, are how we should view all of the world. I'm not saying that at all. And I want to be very clear on that, because people do email me and say, "This engineering thing. You're going to drain the life and the majesty out of these high-end human conversations." My whole point is not that at all. It's that, of course, at the right side of the spectrum it doesn't look like engineering anymore.
It looks like friendship and love and psychoanalysis and all these other tools that we have. But here's what I want to do. I want to be very specific with my colleagues in regenerative medicine and everything. Just imagine if I went to a bioengineering department or a genetics department and started talking about high-level cognition and psychoanalysis. They don't want to hear that. So I focus on the engineering approach, because I want to say: look, this is not a philosophical problem. This is not a linguistics problem. We are not trying to define terms in different ways to make anybody feel fuzzy. What I'm telling you is, if you want to reach certain capabilities, if you want to reprogram cancer, if you want to regrow new organs, if you want to defeat aging, if you want to do these specific things, you are leaving too much on the table by making an unwarranted assumption that the low-level tools that we have, the rules of chemistry and the kind of molecular rewiring, are going to be sufficient to get to where you want to go. It's an assumption only, and it's an unwarranted assumption. And actually, we've done experiments now, so not philosophy but real experiments, showing that if you take these other tools, you can in fact persuade the system in ways that have never been done before. And we can unpack all that. But it is absolutely correlated with intelligence, so let me flesh that out a little bit. What is it that's scaling in all of these things? Because I keep talking about the scaling, so what is it that's scaling? What I think is scaling is something I call the cognitive light cone, and the cognitive light cone is the size of the biggest goal state that you can pursue. This doesn't mean how far your senses reach. This doesn't mean how far you can affect things. The James Webb Telescope has enormous sensory reach, but that doesn't mean that's the size of its cognitive light cone. The size of the cognitive light cone is the scale of the biggest goal you can actively pursue. And I do think it's a useful concept to enable us to think about very different types of agents, of different composition, different provenance, engineered, evolved, hybrid, whatever, all in the same framework. By the way, the reason I use light cone is that it has this idea from physics that you're putting space and time in the same diagram, which I like here. So if you tell me that all your goals revolve around maximizing the amount of sugar in this 10-20 micron radius of spacetime, and that you have 20 minutes of memory going back and maybe five minutes of predictive capacity going forward, that tiny little cognitive light cone, I'm going to say you're probably a bacterium. And if you say to me, "Well, I'm able to care about a scale of several hundred yards, but I could never care about what happens three weeks from now, two towns over, just impossible," I would say you might be a dog. And if you say to me, "Okay, I care about what happens in the financial markets on Earth long after I'm dead, and this and that," I'd say you're probably a human. And if you say to me, "I can actively care, in the linear range, I'm not just saying it, about all the living beings on this planet," I'm going to say, "Well, you're not a standard human. You must be something else." Because standard humans today, I don't think, can do that.
You must be some kind of a bodhisattva or some other thing that has these massive cognitive light cones. So I think what's scaling, from zero, and I do think it goes all the way down, I think we can talk about even particles doing something like this, is the cognitive light cone. And so now, here, I'll try for a definition of life, for whatever it's worth. I spend no time trying to make that stick, but if we wanted one, I think we call things alive to the extent that the cognitive light cone of that thing is bigger than that of its parts. In other words, a rock isn't very exciting, because the things it knows how to do are the things that its parts already know how to do, which is follow gradients and things like that. But living things are amazing at aligning their competent parts so that the collective has a larger cognitive light cone than the parts. I'll give you a very simple example that comes up in biology and in our cancer program all the time. Individual cells have little tiny cognitive light cones. What are their goals? Well, they're trying to manage pH, metabolic state, some other things. There are some goals in transcriptional space, some goals in metabolic space, some goals in physiological state space, but they're generally very tiny goals. One thing evolution did was to provide a kind of cognitive glue, which we can also talk about, that ties them together into a multicellular system. And those systems have grandiose goals. They're making limbs. If you chop off a salamander limb, the cells will regrow that limb with the right number of fingers, and then they'll stop when it's done. The goal has been achieved. No individual cell knows what a finger is or how many fingers you're supposed to have, but the collective absolutely does. And there's that process of growing that cognitive light cone from a single cell to something much bigger, and of course the failure mode of that process, which is cancer. When cells physiologically disconnect from the other cells, their cognitive light cone shrinks. The boundary between self and world, which is what the cognitive light cone defines, shrinks. Now they're back to being an amoeba. As far as they're concerned, the rest of the body is just external environment, and they do what amoebas do. They go where life is good. They reproduce as much as they can. So that cognitive light cone, that is the thing that I'm talking about that scales. So when we are looking for life, I don't think we're looking for specific materials. I don't think we're looking for specific metabolic states. I think we're looking for scales of cognitive light cone. We're looking for alignment of parts towards bigger goals in spaces that the parts could not comprehend. - And so the cognitive light cone, just to make clear, is about goals that you can actively pursue now. You said linear, like we're within reach immediately. - No, sorry, I didn't mean that. First of all, the goal is often necessarily removed in time. In other words, when you're pursuing a goal, it means that you have a separation between current state and target state, at minimum. Your thermostat, right? Let's just think about that. There's a separation in time, because the thing you're trying to make happen, that the temperature goes to a certain level, is not true right now. And all your actions are going to be around reducing that error. That basic homeostatic loop is all about closing that gap.
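The thermostat example reduces to a few lines of code. Here is a minimal sketch (mine, with made-up temperature dynamics) of the homeostatic loop just described; note that "persuading" this system means changing one number, the set point, with zero knowledge of the underlying heating plant:

```python
def homeostat_tick(temp, set_point, tolerance=0.5):
    """One step of the basic homeostatic loop: compare current state to the
    target state and act only to shrink the gap between them."""
    error = set_point - temp
    if error > tolerance:
        return temp + 0.4   # too cold: heater nudges temperature up
    if error < -tolerance:
        return temp - 0.4   # too warm: cooling/drift brings it down
    return temp             # within tolerance: goal achieved, do nothing

temp, set_point = 12.0, 21.0
while abs(set_point - temp) > 0.5:
    temp = homeostat_tick(temp, set_point)
print(f"settled at {temp:.1f} degrees")   # the gap is closed, action stops
```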
When I said linear range, this is what I meant. If I say to you, "This terrible thing happened to ten people," you have some degree of activation about it. And then if I say, "No, no, actually it was 10,000 people," you're not a thousand times more activated. You're somewhat more activated, but not a thousand times. And if I say, "Oh my God, it was actually 10 million people," you're not a million times more activated. You don't have that capacity in the linear range. If you think about that curve, we reach a saturation point. I have some amazing colleagues in the Buddhist community with whom we've written some papers about this. The radius of compassion is: can you grow your cognitive system to the point that it really isn't just your family group, it really isn't just the hundred people you know in your circle? Can you grow your cognitive light cone to the point where, no, we care about the whole, whether it's all of humanity, or the whole ecosystem, or the whole whatever? Can you actually care about that the exact same way that we now care about a much smaller set of people? That's what I mean by linear range. - But this is separated by time, like a thermostat. But a bacterium... I mean, if you zoom out far enough, a bacterium could be formulated to have a goal state of creating human civilization. Because bacteria have a role to play in the whole history of Earth. So if you anthropomorphize the goals of a bacterium enough, it has a concrete role to play in the history of the evolution of human civilization. So when you define a cognitive light cone, do you need to look directly at short-term behavior? - Well, no. How do you know what the cognitive light cone of something is? Because, as you've said, it could be almost anything. The key is that you have to do experiments. And the way you do experiments is you put barriers. You have to do interventional experiments. You have to put barriers between it and its goal, and you have to ask what happens. And intelligence is the degree of ingenuity that it has in overcoming barriers between it and its goal. Now, this is, I think, a totally doable but impractical and very expensive experiment, but you could imagine setting up a scenario where the bacteria were blocked from becoming more complex. And you can ask whether they would try to find ways around it, or whether their goals are actually metabolic, and as long as those goals are met, they're not going to get around your barrier. The business of putting barriers between things and their goals is actually extremely powerful, because we've deployed it in all kinds of... I'm sure we'll get to this later, but we've deployed it in all kinds of weird systems that you wouldn't think are goal-driven systems. And what it allows us to do is to get beyond just the anthropomorphizing claims of saying, "Oh, yeah, I think this thing is trying to do this or that." The question is: well, let's do the experiment. And one other thing I want to say about anthropomorphizing, because people say this to me all the time: I don't think that exists. I think it's kind of like heresy, or like other terms that aren't really a thing. And I'll tell you why. Because if you unpack it, here's what anthropomorphism means.
Humans have a certain magic, and you're making a category error by attributing that magic somewhere else. My point is, we have the same magic that everything has. We have a couple of interesting things, besides the cognitive light cone and some other stuff, but it isn't that you have to keep the humans separate because there's some bright line. All I'm arguing for is the scientific method, really. That's really all this is. All I'm saying is you can't just make pronouncements such as, "Humans are this," and leave it there. You have to do experiments. After you've done your experiments, you can say either, "I've done it, and look at that: that thing actually can predict the future for the next 12 minutes. Amazing." Or you say, "You know what? I've tried all the things in the behaviorist handbook, and they just don't help me with this. It's a very low level of intelligence." Fine. Done. So all I'm arguing for is an empirical approach, and then things like anthropomorphism go away. It's just a matter of: have you done the experiment, and what did you find? - And that's actually one of the things you're saying: if you remove the categorization of things, you can use the tools of one discipline on everything. - You could try. - Try, and then see. That's the underpinning of the criticism of anthropomorphization, because what is that? Psychoanalysis of another human could technically be applied to robots, to AI systems, to more primitive biological systems, and so on. Try. - Yeah. We've used everything from basic habituation conditioning all the way through anxiolytics and hallucinogens, all kinds of cognitive modification, on a range of things that you wouldn't believe. And by the way, I'm not the first person to come up with this. There was a guy named Bose, well over 100 years ago, who was studying how anesthesia affected animals and animal cells, drawing specific curves around electrical excitability. He then went and did it with plants and saw some very similar phenomena. And being the genius that he was, he didn't know when to stop. Everybody thinks he should have stopped long before plants, and people made fun of him for that. And he's like, "Yeah, but the science doesn't tell us where to stop. The tool is working, let's keep going." And he showed interesting phenomena in metals and other kinds of materials. The interesting thing is that there is no generic rule that tells you when you need to stop. We make those up. Those are completely made up. You have to just do the science and find out. - Yeah, we'll probably get to it. You've been doing recent work looking at computational systems, even trivial ones like sorting algorithms, and analyzing them in a behavioral kind of way, to see if there are minds inside those sorting algorithms. And, of course, let me ask a bit of a pothead question here: you could start to do things like trying to give psychedelics to a sorting algorithm. What does that even look like? It seems like a ridiculous question that'll get you fired from most academic departments, but if you take it seriously, you could try and see if it applies.
If a thing can be shown to have some kind of cognitive complexity, some kind of mind, why not apply to it the same kind of analysis and the same kind of tools, like psychedelics, that you would to a complex human mind? At least it might be a productive question to ask. You've seen spiders on psychedelics, more primitive biological organisms on psychedelics. Why not try to see what an algorithm does on psychedelics? Anyway. - Well, the thing to remember is we don't have a magic sense, or really good intuition, for what the mapping is between the embodiment of something and the degree of intelligence it has. We think we do, because we have an N-of-one example on Earth and we know what to expect from cells to snakes to primates, but we really don't. We'll get into more of the stuff on the Platonic space, but our intuitions around that stuff are so bad that to really think that we know enough not to try things at this point is, I think, really shortsighted. - Before we talk about the Platonic space, let's lay out some foundations. I think one useful one comes from the paper "Technological Approach to Mind Everywhere: an experimentally grounded framework for understanding diverse bodies and minds." Could you tell me about this framework? And maybe can you tell me about Figure 1 from this paper, which has a few components? One is the tiers of biological cognition that go from group, to whole organism, to whole tissue or organ, down to neural network, down to cytoskeleton, down to genetic network. And then there are layers of biological systems, from ecosystem down to swarm, down to organism, tissue, and finally cell. So, can you explain this figure, and can you explain the so-called TAME framework? - So, this is version 1.0, and there's a kind of update, a 2.0, that I'm writing at the moment, trying to formalize in a careful way all the things that we've been talking about here, in particular this notion of having to do experiments to figure out where any given system is on a continuum. Let's just start with Figure 2 for a second, then we'll come back to Figure 1. First, just to unpack the acronym: I like the idea that it spells out TAME, because the central focus of this is interactions, and how you interact with a system to have a productive interaction with it. The idea is that cognitive claims are really protocol claims. When you tell me that something has some degree of intelligence, what you're really saying is, "This is the set of tools I'm going to deploy, and we can all find out how that worked out for you." And technological, because I wanted to be clear with my colleagues that this was not a project in just philosophy. This has very specific, empirical implications that are going to play out in engineering and regenerative medicine and so on. A technological approach to mind everywhere: this idea that we don't know yet where different kinds of minds are to be found, and we have to empirically figure that out. So, what you see here in Figure 2 is basically this idea that there is a spectrum, and I'm just showing four waypoints along that spectrum. As you move to the right of that spectrum, a couple of things happen. Persuadability goes up, meaning that the systems become more reprogrammable, more plastic, more able to do different things than whatever they're standardly doing. So you have more ability to get them to do new and interesting things. The effort needed to exert influence goes down; that is, autonomy goes up.
To the extent that you are good at convincing or motivating the system to do things, you don't have to sweat the details as much. This also has to do with what I call engineering agential materials. When you engineer wood, metal, plastic, things like that, you are responsible for absolutely everything, because the material is not going to do anything other than, hopefully, hold its shape. If you're engineering active matter, or computational materials, or better yet, agential materials like living matter, you can do some very high-level prompting and let the system then do very complicated things that you don't need to micromanage. We all know that that increases when you're starting to work with intelligent systems like animals and humans and so on. The other thing that goes down as you get to the right is the amount of mechanism, or physics, that you need to know to exert the influence. If you know how to set your thermostat's set point, you really don't need to know much of anything else. You just need to know that it is a homeostatic system and that this is how you change the set point. You don't need to know how the cooling and heating plant works in order to get it to do complex things. - By the way, a quick pause just for people who are listening: let me describe what's in the figure. There are four different systems going up the scale of persuadability. The first system is a mechanical clock, then a thermostat, then a dog that gets rewards and punishments, Pavlov's dog, and then finally a bunch of very smart-looking humans communicating with each other, arguing, persuading each other using reasons. There are arrows below showing persuadability going up as you move from the mechanical clock to a bunch of Greeks arguing, the effort needed to exert influence going down, and likewise the mechanism knowledge needed to exert that influence going down. - Yeah. I'll give you an example about that, panel C here with the dog. Isn't it amazing that humans have been training dogs and horses for thousands of years knowing zero neuroscience? Also amazing is that when I'm talking to you right now, I don't need to worry about manipulating all of the synaptic proteins in your brain to make you understand what I'm saying and hopefully remember it. You're going to do that all on your own. I'm giving you prompts that are very thin in terms of information content, and I'm counting on you, as a multi-scale agential material, to take care of the chemistry underneath. - So you don't need a wrench to convince me? - Correct. I don't need physics to convince you, and I don't need to know how you work. I don't need to understand all of the steps. What I do need to have is trust that you are a multi-scale cognitive system that already does that for yourself, and you do. This is an amazing thing that I think people don't think about enough. When you wake up in the morning and you have social goals, research goals, financial goals, whatever it is that you have, in order for you to act on those goals, sodium and calcium and other ions have to cross your muscle cell membranes. Those incredibly abstract goal states ultimately have to make the chemistry dance in a very particular way. Our entire body is a transducer of very abstract things.
And by the way, not just our brains: our organs have anatomical goals and other things that we can talk about, because all of this plays out in regeneration and development and so on. But the scaling of all of these things, the way that you regulate yourself, is not by sitting there and thinking, "Wow, I really have to push some sodium ions across this membrane." All of that happens automatically, and that's the incredible benefit of these multi-scale materials. So what I was trying to do in this paper is a couple of things. All of these figures, by the way, were drawn by Jeremy Guay, an amazing graphic artist who works with me. First, in panel A, which is the spiral, I was trying to point out that at every level of biological organization... we all know we're sort of nested dolls of organs and tissues and cells and molecules and whatever. But what I was trying to point out is that this is not just structural. Every one of those layers is competent and is doing problem-solving in different spaces, spaces that are very hard for us to imagine. We humans, because of our own evolutionary history, are so obsessed with movement in three-dimensional space. Even in AI you see this all the time. They say, "Well, this thing doesn't have a robotic body; it's not embodied." Yeah, it's not embodied by moving around in 3D space, but biology has embodiments in all kinds of spaces that are hard for us to imagine. Your cells and tissues are moving in high-dimensional physiological state spaces, in gene expression state spaces, in anatomical state spaces. They're doing the perception, decision-making, action loop that we do in 3D space, the loop we think about when we imagine robots wandering around your kitchen, but in these other spaces. So the first thing I was trying to point out is that every layer of your body has its own ability to solve problems in those spaces. And then on the right, what I was addressing is this distinction where people say, "Well, there are living beings and then there are engineered machines," and then they often follow up with all the things machines are never going to be able to do. What I was trying to point out here is that it is very difficult to maintain those kinds of distinctions, because life is incredibly interoperable. Life doesn't really care if the thing it's working with was evolved through random trial and error or was engineered with a higher degree of agency, because at every level, within the cell, within the tissue, within the organism, within the collective, you can replace and substitute engineered systems for naturally evolved systems. And that question of, "Is it real? Is it biology or is it technology?" is, I don't think, a useful question anymore. So I was trying to warm people up with this idea that what we're going to do now is talk about minds in general, regardless of their history or their composition. It doesn't matter what you're made of. It doesn't matter how you got here. Let's talk about what you're able to do and what your inner world looks like. That was the goal of that. - Is it useful, as a thought experiment, as an experiment of radical empathy, to try to put ourselves in the space of the different minds at each stage of the spiral? Like, in what state space is human civilization as a collective embodied? What does it operate in? Humans, individual organisms, operate in 3D space. That's what we understand.
But when there's a bunch of us together, what are we doing together?

- It's really hard to say, and you have to do experiments, which at larger scales are really difficult.

- But there is such a thing?

- There may well be. We have to do experiments. I don't know. Here's an example. Somebody will say to me, "Well, you know, with your kind of panpsychist view, you might as well think the weather is agential too." And it's like, well, I can't say that it is, but we don't know. Have you ever tried to see if a hurricane has habituation or sensitization? Maybe. We haven't done the experiment. It's hard, but you could, right? And maybe weather systems can have certain kinds of memories. I have no idea. We have to do experiments. So I don't know what the entirety of human society is doing, but I'll give you a simple example of the kinds of tools involved. We're actively trying to build tools now, using AI and other approaches, to enable radically different agents to communicate across very different spaces. I'll give you a very kind of dumb example of how that might work. Imagine that you're playing tic-tac-toe against an alien. You're in a room, you don't see him, and you draw the tic-tac-toe grid on the floor. You know what you're doing: you're trying to make straight lines with Xs and Os, and you're having a nice game. It's obvious that he understands the process; sometimes you win, sometimes you lose. In that one little segment of activity, you guys are sharing a world. What's happening in the other room next door? Well, let's say the alien doesn't know anything about geometry. He doesn't understand straight lines. What he's got is a box full of, basically, billiard balls, each one of which has a number on it, and all he's doing is looking through the box to find billiard balls whose numbers add up to 15. He doesn't understand geometry at all. All he understands is arithmetic. You don't think about arithmetic, you think geometry. The reason you guys are playing the same game is that there's this magic square, right, that somebody constructed: a three-by-three square of numbers where every row, column, and diagonal adds up to 15. He has no idea that there's a geometric interpretation to this. He is solving the problem that he sees, which is totally arithmetic. You don't know anything about that. But if there is an appropriate interface, like this magic square, you guys can share that experience. It doesn't mean you start to think like him. It means that you guys are able to interact in a particular way.

- Okay, so there's a mapping between the two different ways of seeing the world that allows you to communicate with each other.

- Of seeing a thin slice of the world.

- A thin slice of the world. How do you find that mapping? So you're saying we're trying to figure out ways of finding that mapping for different kinds of systems. What's the process for doing that?

- So, the process is twofold. One is to get a better understanding of the system: what space is it navigating, what goals does it have, what level of ingenuity does it have to reach those goals? For example, xenobots, right? We make xenobots, or Anthrobots. These are biological systems that have never existed on Earth before. We have no idea what their cognitive properties are. We're learning. We found some things.
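To make the magic-square interface concrete, here's a small, purely illustrative Python check (not something from the episode itself): the eight winning lines of the grid and the eight triples of numbers summing to 15 are exactly the same sets, so the two players really are playing one game.

```python
from itertools import combinations

# The classic 3x3 magic square: every row, column, and diagonal sums to 15.
magic = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

# Your view: the 8 winning lines of tic-tac-toe, as board coordinates.
lines = [[(r, c) for c in range(3)] for r in range(3)]                 # rows
lines += [[(r, c) for r in range(3)] for c in range(3)]                # columns
lines += [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]  # diagonals
geometric_wins = {frozenset(magic[r][c] for r, c in line) for line in lines}

# The alien's view: triples of distinct numbers 1-9 that add up to 15.
arithmetic_wins = {frozenset(t) for t in combinations(range(1, 10), 3)
                   if sum(t) == 15}

print(geometric_wins == arithmetic_wins)  # True: 8 identical triples each
```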
But you can't predict that from first principles, because they're not at all what their past history would lead you to expect.

- Can you actually explain briefly what a xenobot is and what an Anthrobot is?

- So one of the things that we've been doing is trying to create novel beings that have never been here before. The reason is that typically, when you have a biological system, an animal or a plant, and you ask, "Hey, why does it have certain forms of behavior, certain forms of anatomy, certain forms of physiology?", the answer is always the same: there's a long, long history of evolutionary selection and adaptation in certain environments, and this is what survived, and so that's why it has them. What I wanted to do was break out of that mold and basically force us as a community to dig deeper into where these things come from. That means taking away the crutch where you just say, "Well, it's evolutionary selection. That's why it looks like that." So in order to do that, we have to make artificial, synthetic beings. To be clear, we are starting with living cells, so it's not that they have no evolutionary history. The cells do; they had an evolutionary history in frogs or humans or whatever. But the creatures they make, and the capabilities those creatures have, were never directly selected for. In fact, they never existed, so you can't tell the same kind of story. What I mean is, we can take epithelial cells off of an early frog embryo without changing the DNA. No synthetic biology circuits, no material scaffolds, no nanomaterials, no weird drugs, none of that. What we're mostly doing is liberating them from the instructive influences of the rest of the cells in the body they were in. Normally these cells are bullied by their neighboring cells into having a very boring life: they become a two-dimensional outer covering for the embryo, they keep out the bacteria, and that's that. So you might ask, "Well, what are these cells capable of when you take them away from that influence?" When you do that, they form another little life form we call a xenobot. It's this self-motile little thing with cilia covering its surface. The cilia are coordinated so they row against the water, and the thing starts to move, and it has all kinds of amazing properties. It has different gene expression, its own novel transcriptome. It's able to do things like kinematic self-replication, meaning it makes copies of itself from loose cells that you put in its environment. It has the ability to respond to sound, which normal embryos don't do. It has these novel capacities. And we did that, and we said, "Look, here are some amazing features of this novel system. Let's try to understand where they came from." Some people said, "Well, maybe it's a frog-specific thing, you know? Maybe this is just something unique to frog cells." So we said, "Okay, what's the furthest you can get from frog embryonic cells? How about human adult cells?" We took cells from adult human patients who were donating tracheal epithelial cells from biopsies and things like that, and those cells, again with no genetic change, nothing like that, self-organized into something we call Anthrobots. Again, a self-motile little creature, with about 9,000 genes differentially expressed, so about half the genome is now different. And they have interesting abilities.
For example, they can heal human neural wounds. In vitro, if you plate some neurons and put a big scratch through them so they're damaged, Anthrobots will settle onto the wound and, spontaneously, without us having to teach them to do it, try to knit the neurons back together across the gap.

- What is this video that we're looking at here?

- So this is an Anthrobot. Often when I give talks about this, I show people this video and I ask, "What do you think this is?" And people will say, "Well, it looks like some primitive organism you got from the bottom of a pond somewhere." And I'll say, "Well, what do you think the genome would look like?" And they say, "Well, the genome would look like some primitive creature's." Right? But if you sequence that thing, you'll get 100% Homo sapiens. And it doesn't look like any stage of normal human development. It doesn't act like any stage of human development. It has the ability to move around. It has, as I said, over 9,000 differentially expressed genes. Also, interestingly, it is younger than the cells it comes from. So it actually has the ability to roll back its age, and we could talk about that and what the implications of that are. But to go back to your original question, what we're doing with these kinds of systems...

- Trying to talk to it.

- We're trying to talk to it. That's exactly right. And not just to this. We're trying to talk to molecular networks. A couple of years ago we found that gene regulatory networks, never mind the cells, but the molecular pathways inside of cells, can have several different kinds of learning, including Pavlovian conditioning. And what we're doing now is trying to talk to them. The biomedical applications are obvious. Instead of "Hey, Siri," you want "Hey, liver, why do I feel like crap today?" And you want an answer: "Well, you know, your potassium levels are this and that, and I don't feel good for these reasons." You should be able to talk to these things, and there should be an interface that allows us to communicate, right? And I think AI is gonna be a huge component of that interface for talking to these systems. It's a tool to combat our mind blindness, to help us see the diverse, very unconventional minds that are all around us.

- Can you generalize that? Let's say we meet an alien or an unconventional mind here on Earth. Think of it as a black box. You show up. What's the procedure for trying to get some hooks into a communication protocol with the thing?

- Yeah. That is exactly the mission of my lab: to develop tools to recognize these things, to learn to communicate with them, to ethically relate to them, and in general to expand our ability to do this in the world around us. I specifically chose these kinds of things because they're not as alien as proper aliens would be. So we have some hope. I mean, we're made of them. We have many things in common. There's some hope of understanding them.

- You're talking about xenobots and Anthrobots?

- Xenobots and Anthrobots and cells and everything else. But they're alien in a couple of important ways. One is that the space they live in is very hard for us to imagine. What space do they live in? Well, your body's cells, long before we had a brain that was good for navigating three-dimensional space, were navigating the space of anatomical possibilities.
You start as an egg, and you have to become, you know, a snake or a giraffe or a human, whatever we're going to be. And when people model that with cellular-automata-type ideas, this open-loop kind of thing where everything just follows local rules and eventually there's complexity and, here you go, now you've got a giraffe or a human, I am specifically telling you that that model is totally insufficient to grasp what's actually going on. What's actually going on, and there have been many experiments on this, is that the system is navigating a space of anatomical possibilities. If you try to block where it's going, it will try to get around you. Faced with things it's never seen before, it will try to come up with a solution. If you really defeat its ability to do that, which you can, because they're not infinitely intelligent, you will either get birth defects or you will get creative problem-solving, such as what you're seeing here with xenobots and Anthrobots. If you can't be a human, you'll find another way to be. You can be an Anthrobot, for example, or something else.

- Just to clarify, what's the difference between cellular-automata-type action, where you're just responding to your local environment and creating some kind of complex behavior, and operating in the space of anatomical possibilities? So there's a kind of goal, I guess, you're articulating...

- Yes.

- There is some kind of thing. There's a will to... something.

- The will thing, let's put that aside...

- Okay, sorry. There I go anthropomorphizing. I just always love to quote Nietzsche, so there we go.

- Yeah, and I'm not saying that's wrong. I'm just saying I don't have data for that one, but I'll tell you the stuff that I'm quite certain of. There are a couple of different formalisms that we have in control theory. One of those formalisms is open-loop complexity. In other words, I've got a bunch of subunits, like a cellular automaton. They follow certain rules, you turn the crank, time goes forward, and whatever happens, happens. Clearly you can get complexity from this. Clearly you can get some very interesting-looking things, right? The Game of Life, all those kinds of cool things. You can get complexity, no problem. But the idea that that model is going to be sufficient to explain and control things like morphogenesis is a hypothesis. It's okay to make that hypothesis, but we know it's false, despite the fact that that is what we learned in basic cell biology and developmental biology classes. The first time you see something like this, inevitably, especially if you're an engineer, you go, "Hey, how does it know to do that? How does it know to make four fingers instead of seven?" What they tell you is, "It doesn't know anything," and they make sure that's very clear. They all insist, "Nothing here knows anything. There are rules of chemistry, they roll forward, and this is what happens." Okay. Now, that model is testable. We can ask, "Does that model explain what happens?" Here's where that model falls down.
If you have that model and the situation changes, either there's damage or something in the environment has happened, those kinds of open-loop models do not adjust to give you the same goal by different means. This is William James' definition of intelligence: the same goal by different means. And in particular, you can't work them backwards. Let's say you're in regenerative medicine and you say, "Okay, this is the situation now. I want it to be different. What should the rules be?" Those kinds of open-loop models are not reversible. You don't know what to do to get the outcome that you want. All you know how to do is roll them forward, right? Now, in biology, we see the following. I'm going to give you two pieces of evidence that suggest that there is a goal. One piece of evidence is that if you try to block these things from the outcome that they normally reach, they will do some amazing things: sometimes very clever things, sometimes not at all the way they normally do it, right? This is William James' definition. By different means, by following different trajectories, they will go around various local maxima and minima to get to where they need to go. It is navigation of a space. It is not blind, turn the crank, and wherever we end up is where we end up. That is not what we see experimentally. And more importantly, I think, and this is something that I'm particularly happy with, over the last 20 years our lab has shown the following: we can actually rewrite the goal states, because we found them. Through our work on bioelectric imaging and bioelectric reprogramming, we have shown how those goal memories are encoded, at least in some cases. We certainly haven't got them all, but we have some. If you can find where the goal state is encoded, read it out, and reset it, and the system will now implement a new goal based on what you just reset, that is the ultimate evidence that your goal-directed model is working, because if there were no goal, that shouldn't be possible. Right? Once you can find it, read it, interpret it, and rewrite it, it means that by any engineering standard you're dealing with a homeostatic mechanism.

- How do you find where the goal's encoded?

- Through lots and lots of hard work.

- The barrier thing is part of that? Creating barriers and observing?

- The barrier thing tells you that you should be looking for a goal.

- So step one, when you approach an agentic system, is to create barriers of different kinds until you see how persistent it is at pursuing the thing it seemed to have been pursuing originally. And then you know, okay, cool, this thing has agency, first of all. And then second of all, you start to build the intuition about exactly which goal it's pursuing.

- Yes. The first couple of steps are all imagination. You have to ask yourself, "What space is this thing even working in?" And you really have to stretch your mind, because we can't imagine all the spaces that systems work in, right? So, step one is: what space is it in? Step two: what do I think the goal is? And at step two you're not done. Just because you have made a hypothesis doesn't mean you can say, "Well, there, I see it doing this, therefore that's the goal." You don't know that. You have to actually do experiments.
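As a toy illustration of this barrier methodology, here is a hedged Python sketch of my own, not the lab's code; the actual experiments, including the sorting-algorithm work mentioned next, are much richer. Hypothesize the goal (ascending order), block the usual route by freezing a cell so it refuses to move, and check whether the system still approaches the goal by different means.

```python
def barrier_sort(values, frozen):
    """Each movable cell swaps with the nearest movable cell to its
    right whenever they are out of order, stepping around frozen
    cells entirely, until no more swaps are possible."""
    arr = list(values)
    movable = [i for i in range(len(arr)) if i not in frozen]
    changed = True
    while changed:
        changed = False
        for a, b in zip(movable, movable[1:]):
            if arr[a] > arr[b]:
                arr[a], arr[b] = arr[b], arr[a]
                changed = True
    return arr

print(barrier_sort([5, 3, 8, 1, 9, 2], frozen={2}))
# [1, 2, 8, 3, 5, 9]: the cell at index 2 never moved, yet the movable
# cells reached sorted order among themselves, the "same goal by
# different means" signature the barrier experiment probes for.
```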
Now, once you've made those hypotheses, you do the experiments. You say, "Okay, if I want to block it from reaching its goal, how do I do that?" And this, by the way, is exactly the approach we took with the sorting algorithms and with everything else. You hypothesize the goal, you put a barrier in, and then you get to find out what level of ingenuity it has. Maybe what you see is, "Well, that derailed everything, so probably this thing isn't very smart." Or you say, "Oh, wow, it can go around and do these things." Or you might say, "Wow, it's taking a completely different approach, using its affordances in novel ways; that's a high level of intelligence." You will find out what the answer is.

- Another pothead question, speaking of unconventional organisms, and going to Richard Dawkins, for example, with memes: is it possible to think of ideas this way? Like, how weird can we get? Can we look at ideas as organisms, create barriers for those ideas, try to empathize and visualize what kind of space they might be operating in? Can ideas be seen as organisms that have a mind?

- Yeah. Okay, if you want to get really weird, we can get really weird here. Think about the caterpillar-butterfly transition. You've got a caterpillar, a soft-bodied creature, which has a particular controller, a brain, suitable for running a soft-bodied robot, and then it has to become this butterfly, a hard-bodied creature that flies around. During the process of metamorphosis, its brain is basically ripped up and rebuilt from scratch, right? Now, what's been found is that if you train the caterpillar, so you give it a new memory, meaning that if the caterpillar sees this colored disc, then it crawls over and eats some leaves, it turns out the butterfly retains that memory. Now, the obvious question is, how do you retain memories when the medium is being refactored like that? Let's put that aside, because I'm going to get somewhere even weirder. It's not just that you have to retain the memory. You have to remap that memory onto a completely new context, because guess what? The butterfly doesn't move the way the caterpillar moves, and it doesn't care about leaves. It wants nectar from flowers. And so, if that memory is going to survive, it can't just persist. It has to...

- Be remapped.

- ...be remapped into a novel context. Now, here's where things get weird. We can take a couple of different perspectives here. We can take the perspective of the caterpillar facing some sort of crazy singularity and say, "My God, I'm going to cease to exist, but, you know, I'll sort of be reborn in this new higher-dimensional world where I'll fly." We can take the perspective of the butterfly and say, "Well, here I am, but I seem to be saddled with some tendencies and some memories. I don't know where the hell they came from, I don't remember exactly how I got them, and they seem to be a core part of my psychological makeup." But there's a third perspective that I think is really interesting and useful: the perspective of the memory itself. What is a memory? It is a pattern.
It is an informational pattern that was continuously reinforced within one cognitive system. And now, here I am as this memory: what do I need to do to persist into the future? Well, now I'm facing the paradox of change. If I try to remain the same, I'm gone; there's no way the butterfly is going to retain me in the original form I'm in now. What I need to do is change, adapt, and morph. Now, you might say, "Well, that's kind of crazy. How are you taking the perspective of a pattern within an excitable medium? Agents are physical things. You're talking about information, right?" So let me tell you another quick science fiction story. Imagine that some creatures come out from the center of the Earth. They live down in the core, so they're super dense, and they have gamma-ray vision and so on. They come out to the surface. What do they see? Well, all of this stuff that we're seeing here is like a thin plasma to them. They are so dense that none of this is solid to them. They don't see any of this stuff. So they're walking around, you know, because to them the planet is covered in this thin gas. And one of them is a scientist, and he's taking measurements of the gas, and he says to the others, "You know, I've been watching this gas, and there are these little whirlpools in it, and they almost look like agents. They almost look like they're doing things. They're moving around, they kind of hold themselves together for a little bit, and they're trying to make stuff happen." And the others say, "Well, that's crazy. Patterns in a gas can't be agents. We're agents. We're solid. Those are just patterns in an excitable medium. And by the way, how long do they hold together?" He says, "Well, about 100 years." "That's crazy. No real agent can dissipate that fast." Okay. We are all metabolic patterns, among other things, right? So you see what I'm warming up to here. One of the things that we've been trying to dissolve, and this is some work that I've done with Chris Fields and others, is this distinction between thoughts and thinkers. All agents are patterns within some excitable medium, and we could talk about what that is, and they can spawn off other patterns. And now you can have a really interesting spectrum. Here's the spectrum. You can have fleeting thoughts, which are like the waves in the ocean when you throw a rock in: they pass through the excitable medium and then they're gone. Then you can have patterns that have a degree of persistence, so they might be hurricanes or solitons, or persistent thoughts, earworms, depressive thoughts. Those are harder to get rid of. They stick around for a little while.
They often do a little bit of niche construction: they change the actual brain to make it easier to have more of those thoughts, right? Like, that's a thing. And so they stay around longer. Now, what's further than that? Well, personality fragments of a dissociative identity disorder are more stable still. And they're not just on autopilot: they have goals and they can do things. And then past that is a full-blown human personality. And who the hell knows what's past that? Maybe some sort of transhuman, transpersonal something, I don't know. But again, I'm back to this notion of a spectrum. There is not a sharp distinction between "we are real agents" and "we merely have these thoughts." Patterns can be agents too, but you don't know until you do the experiment. So, if you want to know whether a soliton or a hurricane or a thought within a cognitive system is its own agent, do the experiment. See what it can do. Can it learn from experience? Does it have memories? Does it have goal states? Does it have language? What can it do? So, coming back to your original question: yeah, we can definitely apply this methodology to ideas and concepts and social patterns and whatever else, but you've got to do the experiment.

- That's such a challenging thought experiment, thinking about memories, from the caterpillar to the butterfly, as an organism. I think at the very basic level, intuitively, we think of organisms as hardware, and of software as not possibly being able to be an organism. But what you're saying is that it's all just patterns in an excitable medium, and it doesn't really matter what the pattern is or what the excitable medium is. We need to test how persistent it is, how goal-oriented it is, and there are certain kinds of tests for that. You can apply that to memories, to ideas, to anything, really. You could probably even think about consciousness. There's really no boundary to what you can imagine; really, really wild things could be minds.

- Yeah. Stay tuned. This is exactly what we're doing. We're getting progressively more and more unconventional. I mean, this whole distinction between software and hardware, I think, is a super important concept to think about. And yet, the way we've mapped it onto the world, I would like to blow that up in the following way.
And again, I want to point out what the practical consequences are, because these are not just fun stories that we tell each other. They have really important research implications. Think about a Turing machine. One thing you can say is that the machine is the agent: it has passive data, it operates on the data, and that's it. The story of agency is the story of whatever that machine can and can't do; the data is passive, and the machine just moves it around. But you can tell the opposite story. You can say, "Look, the patterns in the data are the agent. The machine is a stigmergic scratch pad in the world of the data doing what data does." The machine is just the scratch pad, the consequences of the data working itself out. And both of those stories make sense, depending on what you're trying to do. Here's the biomedical side of things. Take our program in bioelectrics and aging. One model you could have is that the physical organism is the agent, and the cellular collective has pattern memories: specifically, what I was saying before, anatomical goals. If you want to persist for 100-plus years, your cells had better remember what your correct shape is and where new cells go, right? So there are these pattern memories. They exist during embryogenesis, during regeneration, during resistance to aging. We can see them. We can visualize them. One thing you can imagine is: fine, the physical body, the cells, are the agent, and the electrical pattern memories are just data. What might happen during aging is that the data get degraded; they might get fuzzy. So what we need to do is reinforce the pattern memories. That's one specific research program, and we're doing that. But that's not the only research program, because the other thing you might imagine is: what if the patterns are the agent, in exactly the same sense as we think about the patterns in our brains? It's the patterns of electrophysiological computation that are the agent, right? What happens in the brain are the side effects of the patterns working themselves out, and those side effects might be to fire off some muscles, glands, and other things. From that perspective, maybe what's actually happening is that the agent is finding it harder and harder to be embodied in the physical world. Why? Because the cells might get less responsive. In other words, the cells are sluggish. The patterns are fine; they're just having a harder time making the cells do what they need to do. And maybe what you need to do is not reinforce the memories but make the cells more responsive to them. That is a different research agenda, which we are also pursuing, and we now have evidence for that as well; we published it recently. So my point here is, the only worth of these crazy sci-fi stories, and the only reason I'm talking about them now when a year ago I wasn't, is that they are now actionable in terms of specific experimental research agendas that are heading, I hope, to the clinic in some of these biomedical approaches. So now we can go beyond this and ask: up until now, what have we considered disease states to be? Well, we know there's organic disease, something that's physically broken. We can see the tissues breaking down; there's damage in the joint, or the liver is doing whatever it's doing; we can see these things. But what about disease states that are not physical states?
They're physiological states, informational states, or cognitive problems. In all of these other spaces, you can start to ask: what's a barrier in gene expression space? What's a local minimum that traps you in physiological state space? What is a stress pattern that keeps itself together, moves around the body, causes damage, and tries to keep itself going? What level of agency does it have? This suggests an entirely different set of approaches to biomedicine. And, you know, anybody who's in, let's say, the alternative medicine community is probably yelling at the screen right now, saying, "We've been saying this for hundreds…

Transcript truncated.