The Future Of Brain-Computer Interfaces
Chapters
Max Hodak explains a retinal prosthesis that uses a tiny chip implanted under the retina to convert camera input into electrical stimulation, bypassing damaged rods and cones to restore some vision in blind patients. The chapter covers the clinical trial, device mechanics, and signaling pathway from eye to brain.
Max Hodak argues brain-computer interfaces will proliferate as multiple devices for diverse tasks, from restoring vision to building high-bandwidth brain links, reshaping healthcare and humanity by 2035.
Summary
In this conversation with Garry Tan, Max Hodak, co-founder of Neuralink and founder of Science, lays out a bold, multi-pronged roadmap for brain-computer interfaces (BCIs). He explains how a retinal prosthesis, implanted under the retina, has already yielded measurable visual gains in Europe, and what that means for future FDA approvals and real-world use. Hodak sketches a future where BCIs become a category similar to pharma, with many devices and modalities serving different needs rather than a single product. He contrasts implantable approaches with noninvasive options like ultrasound, predicting consumer-focused applications as a natural extension of the technology. The discussion dives into plasticity, critical periods, and the brain's remarkable ability to learn from feedback, which underpins decoding and rehabilitation strategies. Hodak then expands into his broader vision: three fronts, retinal implants, neural interfaces, and perfusion (organ support), each steering toward higher bandwidth between brain and world. He reflects on his long journey from software to biotech, the decision to pursue first-principles engineering, and the culture shifts required to scale breakthrough science into real-world impact. The interview closes with a candid look at 2035, where Hodak believes the first thousand-year lifespans may already be within reach for some people, driven by the convergence of BCIs and AI.
Key Takeaways
- Retinal prostheses can restore functional vision; Hodak cites a NEJM-published trial and plans for market approval later this year.
- BCIs will be a category, not a single product, with multiple probes and modalities (electrical, optogenetic, ultrasound) tailored to different applications.
- Brain plasticity and feedback learning are central to improving motor decoding and user adaptation in BCIs, enabling rapid skill acquisition.
- Noninvasive and biohybrid approaches will coexist: ultrasound for consumer focus/sleep and implanted BCIs for high-fidelity restoration, with trade-offs in invasiveness and durability on a patient-by-patient basis that affect deployment timelines and risk profiles.
Who Is This For?
Designed for biotech enthusiasts and developers curious about the near-term and long-term potential of BCIs, including clinicians evaluating new restore-versus-enhance paradigms and investors tracking multi-modal neural interface startups.
Notable Quotes
""The brain is this powerful computer, but it's encased in the skull.""
—Hodak sets up the fundamental constraint of interfacing with the brain.
""I think that there are going to be a bunch of BCI companies going after different applications where different types of probes will make sense.""
—Defines the multi-modal, product-category view of BCIs.
""We can take a patient who's been unable to see faces for a decade and allow them to read every letter on an eye chart.""
—Cites retinal prosthesis efficacy and patient-level impact.
""The brain stays way more plastic throughout life than I think is widely appreciated.""
—Explains learning and rehabilitation potential in BCIs.
""By 2035 it's just impossible to see past it. I think the first people to live to a thousand are alive right now.""
—Hodak frames longevity and brain-computer progress as a near-future symmetry.
Questions This Video Answers
- How soon will retinal prostheses become widely available and what are their current limitations?
- What are biohybrid neural interfaces and how do they differ from traditional implants?
- Could ultrasound-based BCIs replace invasive surgery for some consumers?
- What are the ethical and economic challenges of deploying perfusion/ECMO-like brain technologies broadly?
- How does neural plasticity enable learning in brain-computer interfaces?
Brain-Computer Interface, Retinal Prosthesis, Biohybrid Neural Interfaces, Optogenetics, Ultrasound Neural Stimulation, Neural Decoding, NEJM retinal trial, Science (company), Neuralink, Garry Tan interview
Full Transcript
I think it is very possible that the first people to live to a thousand are alive right now. It still takes some suspension of disbelief, because biotech has just been so incremental. One of the things that's so exciting about what's happening now is that it no longer feels so incremental to me. I think BCI, we're going to come to see, is not a specific product. I think there are going to be a bunch of BCI companies going after different applications where different types of probes will make sense. To me, it feels like we're firmly in the takeoff era now.
Like something new has happened on Earth. Welcome back to another episode of How to Build the Future. Today we've got a real treat: Max Hodak, the co-founder of Neuralink and also founder of Science, one of the most exciting BCI (brain-computer interface) companies that we've ever seen. Max, welcome to How to Build the Future. Thanks for having me. So Science recently announced that more than 40 people have received one of your first BCI treatments, which gives people their sight back. What is that? What's happening? So we finished a big clinical trial last year, which was published in the New England Journal of Medicine in the fall.
It's a tiny 2 mm x 2 mm silicon chip that's implanted in the back of the eye, under the retina. It's a tiny array of essentially solar panels. The patients wear glasses that have a camera that looks out at the world, and a laser projector that projects an image into the eye. Wherever the laser hits the implant, the solar panel absorbs the light, and that excites the cells directly above it. It's a retinal stimulator, and this allows us to bypass the dead rods and cones, the cells that normally make the eye light sensitive, to get a visual signal back into the retina for patients who have gone blind because they've lost the rods and cones.
And so, yeah, there was a big clinical trial in Europe across 17 sites, and it was a huge effect. So we are submitting for approval now. It's not approved on the market yet; we hope to have that later this year. For those watching who have never heard of a brain-computer interface: what is it, and what have people been able to do with it? So the brain is this powerful computer, but it's encased in the skull. It is not magically connected to things. It has this handful of connections to the world.
And these give you the senses that you know and the motor control that you know. But you can ask: do we want to replace these with something else? That's, for example, the simulated reality or Matrix use case. The other is restoring lost functionality, and this is how they're deployed today. If someone has gone blind, you can restore the ability to see. If they've gone deaf, you can restore the ability to hear. If they're paralyzed, you can restore the ability to move.
And then you can think about structural neural engineering, which is the thing that we haven't really gotten to as a field as much: looking at how the brain processes information. Can you add new brain areas? Are there ways to understand what is going on in the brain, either to build smarter machines or to think about how to treat things like depression or addiction? I'm taken by the degree to which, right now, it's about taking someone who has a condition or a disease and restoring them to capability, right?
I think that's playing out in AI right now as well, right? You had computers that had no ability to do any sort of pure cognition, no neurons, and then suddenly a bunch of neurons, and then AGI is sort of what a human can do, sort of a restoration of capability. And then of course there's this other thing after that, which is ASI, superintelligence. Do you ever think about what that might be down the road? What is that for BCI? There are many types of BCIs, so it really is going to be a category like pharma. It's not one product. I don't think there's going to be the BCI that people get, and there are different modalities that will work for different things. For example, I don't work on ultrasound, but one of the things I think will be possible with ultrasound is a digital Ambien or a digital Adderall.
Can you stimulate part of the brain to cause focus or sleep? Things like that would not surprise me if they were possible, and I could see that being more of a consumer application, and it hopefully won't require brain surgery. Right now the high-quality ultrasound stuff does require drilling through the skull, but I think that will be overcome. For the implantable BCIs, I mean, this is a very serious brain surgery. I think that's important to appreciate. So you think about how you actually get this into humans and who's going to use it.
These are going to be very disabled patient populations. You always look at risk-reward; you start at the most disabled patients, where you get the most benefit for even relatively basic functionality. I don't think that you or I would want to get one of the cortical motor decoders that you might have seen out there today, because the reality is that a keyboard and mouse is great. It is much higher performance. Spoken word is like 40 bits per second, and many people can type in the 20-ish bits per second range, so a 10-bit-per-second cortical motor decode is not going to make your life better.
I wouldn't get serious brain surgery for that. Now, as it gets more powerful, and as we are able to access richer representations from more of the brain, especially bidirectionally, you'll start to see the risk-benefit change. My view on this is not that healthy 30-year-olds are going to be getting these soon, but eventually many people become patients. Aging is the correlate of kind of everything getting worse, and so there's some critical age where it crosses over, where it makes sense to have something that will restore some functionality that you had. Then eventually that will cross the origin, and you'll see people who had something terrible happen to them who now have a capability that you're jealous of, and that will be when you start to see it changing.
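The throughput comparison behind this risk-benefit argument can be written down directly. A minimal sketch, using only the order-of-magnitude figures quoted above (conversational estimates, not measurements):

```python
# Order-of-magnitude information rates quoted in the conversation.
rates_bps = {
    "spoken word": 40,
    "typing": 20,
    "cortical motor decoder (today)": 10,
}

# Today's decoder is the narrowest channel, which is why the
# risk-benefit calculus currently favors the most disabled patients.
slowest = min(rates_bps, key=rates_bps.get)
print(slowest)  # cortical motor decoder (today)
```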
Talk to me about people who maybe never had sight. Why was the optic nerve not actually set up? Is that not something you can do later? How does plasticity fit in? Do you have to get BCIs when you're incredibly young, while the brain is still plastic? How does all this come together? Neuroplasticity is really interesting and really misunderstood. There are genuine critical periods in early development, and if you miss them, there are some things that will be very hard to wire up later.
There actually are some cases of patients who were born blind, but it wasn't a loss of the optic nerve, and it wasn't something in the brain: they had congenital cataracts. Their vision was blurry from birth, and they were never able to really form images. They then had this fixed as adults, and it did not work. Their brain could not make sense of the information; it was totally overwhelming. They would wear eye patches. Several of them committed suicide. So there are clear critical periods in early development where, if you miss them, some things are not going to work.
With that said, the brain stays way more plastic throughout life and adulthood than I think is widely appreciated. That's a relief. Yeah. If I put an electrode almost anywhere in your brain, wake you up during surgery, and show you a light that flashes proportionally to how much that neuron is firing, then almost anywhere in cortex, within a couple minutes, you can learn to control that neuron. The brain is very plastic under feedback, and this is partly how the cortical motor decoders work.
Some of it is decoding what the brain was originally representing, in terms of a hand or arm representation. But also, if you're getting these signals out of the brain and you're giving the patient feedback about what those signals are doing, then the brain adapts to you. In the first experiments for this, they actually didn't fit anything at all. They took two neurons, or a handful of neurons, and fixed the weights. So it said: when this neuron fires more, we're going to go up the screen.
When this neuron fires more, we're going to go down the screen, and sideways. They fixed the weights and let the brain figure it out. Let the brain learn. Again, the brain is very plastic under feedback and can do this. A powerful moment. We have two learning systems that can learn off of one another, instead of a fixed one with if statements on one side. Totally. Yeah. If you give the cortex information, it is really good at extracting the meaning. Now, in adulthood, I think one of the reasons you don't see it as being so plastic is because it has already fit well to reality.
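The fixed-weight scheme described here is simple enough to sketch. In this toy version (all numbers invented for illustration, not drawn from any real recording), the decoder maps firing rates to cursor velocity with weights fixed up front, so any improvement in control has to come from the user modulating the rates:

```python
# Fixed decoder weights: (vx, vy) contribution per spike/s of each neuron.
WEIGHTS = {
    "neuron_a": (0.0, +1.0),  # firing pushes the cursor up
    "neuron_b": (0.0, -1.0),  # firing pushes the cursor down
}

def decode(rates):
    """Map per-neuron firing rates (spikes/s) to a cursor velocity (vx, vy)."""
    vx = sum(rate * WEIGHTS[name][0] for name, rate in rates.items())
    vy = sum(rate * WEIGHTS[name][1] for name, rate in rates.items())
    return vx, vy

# The decoder itself never adapts; under visual feedback the brain learns
# to drive the neurons. Firing neuron_a harder moves the cursor up:
print(decode({"neuron_a": 30.0, "neuron_b": 10.0}))  # (0.0, 20.0)
```

The design choice Hodak highlights is exactly this asymmetry: the machine side is frozen, and all the learning happens on the biological side.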
If you think of it as an energy surface, the space of brain states has these hills and valleys. During normal development, for most people, there's this enormous basin in the energy surface. During development you descend into that basin, and then you're down there and it's stable, because you've fit to reality, and if I show you weird movies it's not going to really push you out of it. One of the theories of what psychedelics do is that they kind of anneal it: they shrink the surface a little bit so you can access these other states, but when it wears off, you immediately descend back down into the energy well the brain had fit to. So even though the brain is still plastic, it is in this stable part of the attractor system, and you don't see the plasticity as much. But this was absolutely selected for. And so there's this tension: there absolutely is ongoing plasticity.
If there wasn't plasticity, you couldn't learn things. Your ability to learn new stuff and to have memory depends on it; all memory is brain plasticity in many ways. So we are constantly experiencing very dramatic plasticity, but there are also clear limits to it, especially in how the modules of the different brain areas end up interconnected past these critical periods. I have like a million questions, honestly. One of the things that I'm super curious about is: what is the qualia of the person who has PRIMA? And I'd be curious, with the biohybrid approach, what does it feel like? Is it like having a second screen? Is there an input or output? I'm very curious. Yeah, so for PRIMA, actually, on the topic of plasticity: in the time that the patients are blind, the brain wants to see. Again, the thing you experience is this world model constructed by the brain, this generative model that is conjuring your reality.
So when it's not getting input from the optic nerve, it is still trying to see things. It kind of turns up the noise, and blind patients often report hallucinations and these internally generated percepts. When you first turn on the implant in these patients, when you hit it with the laser, they'll say, "Oh, I see a flash." But then you can do a thing where you turn on the laser, they see a flash, and you play a tone. You do this a couple times, and then you don't turn on the laser but you play the tone, and they say, "I see the flash."
So for the first couple hours of rehab, they kind of just have to learn to dissociate the real percepts from the phantom percepts, because the brain has so turned up the gain, so turned down the noise floor, that learning how to discriminate real information coming in from the optic nerve takes a little bit of rehab. The qualia of PRIMA is normal sight. It's black and white, and it's only a small field of view, but it's vision. The deeper question is: what is the qualia of a brain-to-brain, ultra-high-bandwidth, biohybrid neural interface? And that is just impossible to imagine.
Those devices will get built and we're going to find out, but there are some natural case studies. There's a pair of conjoined twins in Canada where it's really like one head with four hemispheres. What's really interesting is that the two hemispheres of each twin's brain are connected normally, but the twins are not connected with each other except for this one cable connecting the thalami, thalamus to thalamus. There's this big biological cable that you can see on an MRI. And over this they can share meaningful elements of their conscious experience.
One of the open questions that hasn't really been studied in the depth I would love to see: they can see, to some degree, through each other's eyes, but does this show up as a new visual field? How do those get experienced directly? Most people already have two image modes: you've got your eyes-open vision, but you also have imagination. Some people are aphantasic and don't have internal imagery, but most people have two image modes. Do the twins have three image modes, or four?
Or take internal monologue: they seem to each individually have internal monologue, but they also can clearly communicate over this channel, because they've done tasks where they can coordinate without saying anything. And they're conscious of it, and they don't confuse it for each other. It's not like a schizophrenic, where internally generated monologue is misattributed as voices. That doesn't happen to them; they can tell it apart. But they're experiencing it directly in some way.
So there's a question, when you look at that cable: are they sending information in the classical way, or is there an effect of phenomenal binding happening over this cable, where it's more like the two hemispheres of your brain being bound together into one moment? These natural case studies tell us that some really interesting things might be possible here, but it's kind of tough to imagine what it would feel like. Paint the picture for us. You're here, everything goes really, really well.
Where are we in 5 to 10 years with this technology? I do think that you can get close to native acuity, kind of like your normal 20/20 vision. We're definitely not there yet, but I see a path to get there, and to get color and fill in a lot of the field of view. To be clear, that is not where we are right now, but in the next 10 years I think that's possible. Beyond that, I'd say that our worldview, my worldview, the motivating idea behind the company, is that you can contrast a drug discovery approach to medicine with a neural engineering approach to medicine.
This is much broader than the retinal prosthesis. We started with that because it's a huge unmet need, and I think it's the most valuable BCI product on the horizon that I thought was doable now. Humanity just isn't very good at drug discovery. Every now and then you find a thing and it's amazing, like a GLP-1; there's a handful of drugs that we were lucky to find. But it's much more common that you spend a decade going down a path, and then at the end you run a study and the answer is no, and then where do you go from there?
There's been a huge amount of work that's gone into finding drugs to stop blindness getting worse, or to reverse it and restore vision, to basically no effect. There's a million-dollar-per-patient gene therapy that has a very marginal benefit, if any, to a very small percentage of patients in the first place. With our retinal prosthesis, what we saw in the trial was that we can take a patient who's been unable to see faces for a decade and allow them to read every letter on an eye chart. So not only is the brain the only organ that, in some deep sense, really matters; we are also just empirically much better at engineering it. I think this allows a really fundamental reframing of medicine. Over the next decade: people need to see, hear, have balance, have a kilobit per second of motor control. We have cochlear implants, we know how to do motor decoding; the thing we didn't know how to do is restore vision, and we are making real progress on that. I think all of this adds up to something that speaks to a paradigm shift in what's possible in healthcare. Something like this I remember reading about maybe 10, maybe even 20 years ago: they were able to stimulate the optic nerve with electricity directly, but it was very low resolution, and it was so invasive that it could probably only be done in a clinical or surgical setting.
It's relatively easy to get flashes of light, to cause a patient to see these flashes; we call them phosphenes. There was a company a decade ago called Second Sight that had an electrical stimulator implanted in the eye. It was a four-and-a-half-hour surgery with a titanium box on the side of the eye. It stimulated a different layer of cells than we do, and they were able to get these flashes where, if a patient looked at them, they could say: there's some flashes here and here, and they're connected, that's an A; there's some here and here, that's an H. But the brain doesn't assemble these flashes of light together into a gestalt whole, an image in the mind's eye. Similarly, when you stimulate cortex, the back of the head where the visual cortical areas are, you can get these flashes of light, and in some cases you can even get a lot of them. But again, the brain doesn't assemble them.
It's kind of this more psychedelic effect; it doesn't get assembled into form vision. As far as I know, our clinical trial was the first time ever that form vision, a coherent image in the mind's eye of a person, had been created. Is there something specific about macular degeneration that makes this possible for this set of patients? So there's a bunch of reasons why people lose rods and cones. There's macular degeneration, there's retinitis pigmentosa, there are some rare inherited diseases like Stargardt disease, diabetic retinopathy can do it, and age-related macular degeneration.
It's the most common; globally it affects 200 million people. The severe form, geographic atrophy, is a million to a couple million. In that sense, it's a big need. One of the nice things about our device is that we're somewhat agnostic to the reason you lost the photoreceptors. So we think it'll also work for retinitis pigmentosa, for Stargardt, for these other indications. We're actually just about to start a new clinical trial on inherited retinal disease, which affects much younger people. And this again goes back to the drug discovery versus neural engineering view of the world.
If you want to make a drug, then you care a lot about exactly what molecularly went wrong in the rod, and that is different by disease; then, even if you figure this out, it's really hard to understand what to do about it. Here, we don't really care why the rods died. We just care that we can get the visual signal back into the computer. I guess I'm just very fascinated: you, obviously, as a computer scientist, spend a lot of time thinking about inputs and signals, and what I'm hearing is that some of that thinking does actually translate from software into wetware.
Well, the brain is a computer, and saying that is going to get me yelled at by some corner of the field, but I think you can take that almost literally. It's a very different architecture than a von Neumann electrical computer, but it processes information. It gets information down one of 12 cranial nerves or 31 spinal nerves. All of the information that flows in or out of the brain goes through a small number of cables. The optic nerve we'd call cranial nerve 2.
The vestibulocochlear nerve, which carries hearing and balance, is cranial nerve 8. There are 31 spinal nerves that carry commands out to the muscles and sensory information into the brain. You can think of that as the API of the brain. The brain is not magically connected to the environment; reality is whatever spikes are on the cranial and spinal nerves. In that sense you've got this well-defined interface to it. Then the processing, once it gets this information, is enormously complicated.
It constructs everything we experience. I think it's important to appreciate: you experience yourself being in the world. You see the walls and the room and the lights and everything, but of course you're not experiencing that directly. You're experiencing a world model fabricated by your brain. I think one of the interesting things that's come out of progress in artificial intelligence is that we're seeing this big unification of neuroscience and AI. We're actually learning a lot more from AI research than I think we thought we would.
I can tell you, 10 years ago we thought it would go the other way, that the AI people would learn a lot from neuroscience, and it's really been the other way around. I'm always curious. You were mentioning Second Sight and flashes of light, and yet here you are. How did you figure out the API? If I were trying to reverse engineer it, I guess I would try to measure the signals. Is it similar with biology, that it's just difficult to measure the signals? So brain-computer interface research and development is limited by your ability to record and stimulate these signals; the neuroscience, comparatively, is actually pretty simple. As soon as you can record these signals, we've very quickly figured out what the neural representations are. Second Sight is instructive. In the retina there are three layers of cells that matter. There are 150 million rods and cones. These connect to 100 million bipolar cells, bipolar because they've got two ends, and those connect the rods and cones to 1.5 million optic nerve cells, called retinal ganglion cells; ganglion is like a fancy word for a cell that reaches a far distance and connects to somewhere. We stimulate the 100 million bipolar cells; Second Sight stimulated the 1.5 million ganglion cells. So they were trying to get the signal into the brain past that 100x compression, and the retina was doing a lot of computation there.
The eye is a camera: light shines in from the front and hits the rods and cones. The representation in the rods and cones is a bitmapped image. You take the image and tile it across the rods and cones; that's what it is. Now, in the 1.5 million optic nerve cells, it's not like that. If you just project an image onto them, you get a bunch of trash, because at that point it's already been compressed into things like edges, relative motion, a bunch of other blobby shapes, color. So if you stimulate a cell there, you're not going to get just a pixel.
You're going to get some edge or direction-gradient thing. And when you excite that, you can't do it selectively; first of all, you just can't do it selectively enough, and we don't know the codec, we don't know how to pattern it appropriately. So you end up getting these flashes of light. It was an empirical discovery of our study that if you excite the bipolar cells with an image, you get an image in the mind's eye, because that is clearly the critical processing step in the retina that you wanted to preserve.
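The layer counts in this exchange make the compression argument easy to sanity-check. A quick arithmetic sketch, using the cell counts as quoted in the conversation:

```python
# Cell counts for the three retinal layers as quoted above.
rods_and_cones = 150_000_000   # photoreceptors: a bitmapped image
bipolar_cells = 100_000_000    # the layer the PRIMA implant stimulates
ganglion_cells = 1_500_000     # optic nerve cells Second Sight stimulated

# Stimulating after the retina's compression step means fighting a ~100x
# reduction in channel count, plus the encoding that comes with it.
print(rods_and_cones // ganglion_cells)  # 100
```

This is the quantitative core of the design choice: stimulating bipolar cells keeps the signal upstream of the retina's own compression and encoding.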
Did you know that would happen, or did you have to try different parts? When we started the company, I think we were a little bit different from most medical device or biotech companies, because those are often founded around a specific asset, like a patent or some specific piece of IP they're going to spin out of a university, or maybe something the founders have worked on. We weren't like that. We had a couple ideas at the beginning. We had this neural-engineering-centric view of healthcare, we had a specific BCI probe idea in biohybrid, and we had a sense that the most valuable thing we could build in the near term was a retinal prosthesis.
We thought the technology was all there, that it would be possible circa 2021, and it was also further from stuff I had worked on before, so it felt like a good thing for us to go explore. We took a very first-principles approach, and you have to be careful with first principles in biology. They'll get you very far in many other areas of engineering, but in biology you also have to understand what evolution actually did, and there's a lot of other nuance there. In this case, we looked at the retina, and there were reasons, intuitions, to think that going past it would be much harder. In the retina you've got this 2x2 matrix of choices: if you've lost the rods and cones, do you stimulate the bipolar cells or the optic nerve cells? And do you do it electrically or with a technique called optogenetics?
We just went and explored all four quadrants of that. We very quickly figured out that stimulating the optic nerve cells was very difficult, for these reasons: you end up with this one-million-degree-of-freedom calibration that you have to do per patient, which can't be done in practice. So that led us to the bipolar cells, which sit before this compression. Then the question was: do you want to stimulate them electrically or using optogenetics? We developed both, and so we have a state-of-the-art optogenetic gene therapy in house.
We published a paper last fall on the world's most sensitive optogenetic opsin proteins. These are proteins that you can express in a neuron to make a neuron that is not normally light-sensitive responsive to light. Oh wow. But the drawback was that conventional optogenetic proteins take a bright laser to activate them. And so what we were able to do was find optogenetic proteins that are so sensitive that they respond to indoor office lighting. And so this you could use in very different ways, and then we could target them to the bipolar cells. But that's still like five to seven years of clinical translation away, if it ends up working, and there's a bunch of pitfalls it could run into along the way.
And then we also just surveyed the world to see what was the state of the art out there in electrical stimulation, and there was this technology that had been invented at Stanford about a decade ago that a small company in Europe had been developing in the meantime. We got convinced that that was the right way to go, and so we acquired them a few years ago. And this was all from this bird's-eye view of: if you want to restore vision in the retina, how would you do that, what are the promising approaches, narrow that down. And that brought us here.
That's insane. That's so cool. I wanted to jump to your start in tech broadly. I mean, did you start in bio and software and engineering? Like, you know, what was your journey into what you're doing now? Which is, I mean, giving people blindsight is the wildest thing. People watching might be asking themselves, well, you know, I hear a lot about B2B SaaS, but how do I actually become something more like you? I was certainly doing software, and my deepest hard skill is software. I have a degree in biomedical engineering, but I grew up programming, and so I was doing that well before I was doing any biotech stuff.
My parents tell me a story about how I sat on the floor of a Barnes & Noble and cried until they bought me a Learn Visual Basic book. I was always interested in the brain. I was definitely inspired by science fiction. The Matrix had a big impact on me, both because the idea of this world of bits was just so alluring for a bunch of fundamental reasons. Like, when I look around at the world, it's hard to build things. Space is constrained. The earth is small.
The resources are intensely contested. Space is large. The speed of light is low. You don't have any of those constraints in the machine. And so if you could simulate a world, kind of anything was possible there. But then also, if you turned that inside out, if you realized that you could build this and that you couldn't tell the difference, then the corollary of that must be that the thing that matters is the brain, and if you can engineer the brain and support the brain, then kind of all the rest of it is replaceable.
And that just seemed like a fairly deep insight that was not being borne out in the world in the way that it seemed like it should be. Some of it is, if you can surround that consciousness with the correct inputs. Yeah. I mean, this also gets into questions of, what is consciousness, how does the brain create our experience? There's this meme out there that BCI is an artificial-intelligence-adjacent story, and that the goal is that we have to merge humans and machines. And I do think that there's something to that, but I think the more immediate thing here is that BCI is really a longevity, healthcare-adjacent story.
If the end of the quest of artificial intelligence is superintelligent machines, then I think the end of the BCI quest is actually conscious machines. It might turn out that there's actually no measurement that we can take that will tell us if something is conscious or not, or what it's like. And the only consciousness that you can actually know is your own. And so if that's the case, then to study consciousness, we will need to use brain-computer interfaces to see it for ourselves. And once you've developed that, then I think that you can kind of understand the fundamental physics of what's happening there, whether that's new fundamental physics or it's emergent in some way.
But if you can learn how to build, kind of understand, whatever the brain is taking advantage of that our universe supports, then eventually you get superintelligent conscious machines that we can be part of through these ultra-high-bandwidth connections. I think that's a very different narrative than how people usually think about BCI today. I mean, we're at the beginning of that, right? Oh yeah, we're at the very beginning of that. The current trial that you have, I mean, it's relatively low bandwidth, but it's going to get much higher bandwidth. And then, like anything, you sort of bootstrap with the thing that works, which I think, you know, what you have is a clear breakthrough as it is. And then if you look at the PC revolution, for instance, it's like, could you believe that all of this that we have today started with a little blue box like an Altair? It still takes some suspension of disbelief, because biotech has just been so incremental.
Like, there have been big advances, but at the same time, these time constants historically, I mean, you could easily spend 10 years on something that feels very incremental. And I think that one of the things that's so exciting about what's happening now is that it no longer really feels so incremental to me. To me, it feels like we're firmly in the takeoff era now. Like, something new has happened on Earth. But I think it's also important to remember that this didn't start in 2019 or 1999. This started in the late 1700s with the industrial revolution.
Just a few years before the industrial revolution really kicked off, I mean, life was more or less unchanged in a fundamental sense for several thousand years. And they didn't really even have a concept of progress in many ways. And I don't think there's any way they could have imagined the way that their life would change over the course of the first 10 to 15 years of the steam engine. And that is how I feel looking at the next 15 years right now. Yeah. I mean, so we have electrical stimulation right now.
And then at the same time, you also do have a biohybrid coupling, like it's not purely just electrical. Would you call it a V2 or sort of a next frontier? So this is a totally different area, though you might be able to use it for vision. One of the diseases that Prima, our electrical stimulator, doesn't treat is glaucoma, which is loss of the optic nerve itself. And so it's possible that you could use our biohybrid BCI technology for that. But that's not what we're doing right now. There are three elements to our pipeline at Science.
The first is our work in the retina, in blindness, especially with the Prima implant. The second is our work in neural interfaces, and the third is our work in perfusion with our vessel program. The biohybrid neural interfaces, the idea here is, if your brain is a bunch of neurons, how would nature solve this problem? We often look to nature for inspiration. Evolution is a way better engineer than we are, at least when dealing with biology. I think the intuition here kind of started from: your brain is composed of two hemispheres, and they process different halves of the world separately, but you don't experience two hemispheres, or two hemifields we would say. You experience one integrated moment. And there's a cable that connects the two hemispheres of the brain called the corpus callosum. It's about 200 million fibers. And I was thinking, if nature wanted to build an ultra-high-bandwidth brain-to-brain connection, how would it? Or if you wanted to make a new cranial nerve.
So instead of having an optic nerve or a vestibular nerve, it wanted to have, like, the internet nerve. How would nature solve this problem? It would grow a new nerve. It would have a new fiber bundle with a USB port at the end. So the intuition here is, if your brain is a bunch of neurons, what happens if I culture some neurons on your neurons? When you do that in a lab, the neurons will typically grow together and wire up and form new biological connections. And so we have an approach to the device where we seed the implant with living neurons.
These heavily engineered stem-cell-derived neurons that we've created. Are they related to your own neurons, or...? No. So, really interestingly, this is actually one of the deep areas of research. It's one cell line, and probably the single deepest area of IP on this is that we've hidden them from the immune system. So we're one of a really small number of companies that have, I think, pretty convincing what we call hypoimmunogenic stem cells. You don't need to manufacture it per patient, which would be really expensive and take much longer.
We've got this hypoimmunogenic stem-cell-derived engineered neuron that we load into the device in a dish, and then that kind of gets stuck there, and then you engraft this onto the brain. So we don't place any wires into the brain. We also don't need to genetically modify your brain. Some of the other ideas out there, for example, using optogenetics or things like ultrasound, require using a gene therapy to genetically modify the neurons in your brain, which, first of all, is a one-way door. And if it goes wrong, that can go really wrong.
Whereas here, the only thing that has been edited is the graft cells that we add. And if those die off, then you're really not worse off than you were before, for the most part. But it comes with the potential of growing throughout the brain, forming biological connections all over the place. And I mean, that's what we've seen in the animal models. That's not in humans yet. But have you seen James Cameron's Avatar movies? Definitely. Like, you know the ponytails that the aliens have? That's how I think about it.
Basically, it's a big new cranial nerve with a connector at the end. I think that's actually the Avatar queue. I think that is a pretty direct reference for how I think about our biohybrid neural interfaces. So earlier you were saying sort of, how do we find a USB port? I mean, obviously in Avatar that's one of the manifestations in the blue creatures, and the optic nerve, in a way, is like a port. And then, you know, jumping to Neuralink, when you were co-founding it, that sort of enters the brain, and there is not necessarily an obvious port. How do you think about that? Where do you attach, and how does it work? And what did you learn from Neuralink that was useful here?
Well, I mean, a lot of what I learned from Neuralink, in many ways it was kind of the ultimate startup PhD, and so that was more about how do you execute a technically complex company that requires this type of multidisciplinary team and infrastructure. I'm very curious, from those days, what was the V1, and then, you know, there's the hypothesis and then the outcome. And here the outcome is very, very awesome with Science so far, not done obviously. Yeah. Yeah, when you think about the brain, because I remember it being totally magical to me, like, how do you even understand what the brain's doing?
Like, what language is it speaking? How do we understand what's going on there? That seems impossibly complicated. The way that I would think about the brain from this information-processing perspective is that the brain is full of these things that we call representations. And so you can have a representation of, say, hand activity. So there's a geometric object in the brain. If you record from some neurons, then when your finger is held open, a neuron will be firing. When it's closed, another neuron will be firing.
There are neurons that kind of correspond to every possible state here. And often in primary motor cortex, which is where many of the other BCI companies record from, primary motor cortex is a couple of synapses, often two synapses, from the muscle. So it projects all the way from the top of the head down to the spine, and then there's another synapse from the spine out to the muscle. And so the representation that you get in primary motor cortex is kind of easy to understand, because it directly corresponds to things that we can easily reason about, like hand state, and specifically often joint torques.
One of the things that I like to do sometimes with the LLMs is I'll pick a neuron to start from, for example the retinal ganglion cell, and I'll be like, okay, go forward one synapse, what are all the cells that we're connected to? Then I'll pick another one and be like, okay, go forward one synapse, what are all the cells that we're connected to? Just kind of try to walk through the brain, and with each generation of model your ability to do this gets better. But one of the things that you see is that when you're close to an input or an output, like a muscle or a cochlear hair cell or a retinal ganglion cell or a rod or cone, in these cases we think of the representations as being concrete, because they correspond to things that are intuitive for us, like colors and image intensities, or frequencies of sound, or muscle control.
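The "go forward one synapse" game described above amounts to a walk over a connectivity graph. Here is a toy sketch of that idea using a heavily simplified adjacency map of the early visual pathway; the cell-type names are standard, but real connectivity is vastly richer than this, and the map here is illustrative only:

```python
# Toy adjacency map: each cell type points to the cell types it
# synapses onto, going "forward" toward cortex. Grossly simplified.
pathway = {
    "rod/cone photoreceptor": {"bipolar cell"},
    "bipolar cell": {"retinal ganglion cell"},
    "retinal ganglion cell": {"LGN relay neuron", "superior colliculus neuron"},
    "LGN relay neuron": {"V1 layer 4 neuron"},
}

def one_synapse_forward(cell):
    """All cell types reached by going forward exactly one synapse."""
    return pathway.get(cell, set())

# Starting from a retinal ganglion cell, one synapse forward:
next_cells = one_synapse_forward("retinal ganglion cell")
```

Repeating the step on each returned cell type walks deeper into the brain, which is exactly where, as Hodak notes next, the representations stop being concrete.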
But as you go deeper into the brain, it very quickly blows up into these very abstract things. And so there's a part of the brain called inferotemporal cortex where the representation that it has is a map of objects, or another area right next to it has a map of faces. We think about this map of object space as a neural representation of general objects. Each point you can think of as a long list of numbers, and there's some point that's a vase.
There's some point that's the Eiffel Tower. There's some point that's a car. There's some point that's a person. There's some point that's a zebra. And as you move around on this manifold, you get the percept of any possible object. And there are millions of neurons there that are representing this space of possible objects that the brain could be identifying. Sounds like latent space. It is a latent space. Exactly. And so there's this huge unification going on between AI and neuroscience. And you know, one of the most interesting things is that when you train AI models, like image models and even language models, the representations that you get inside them look a lot like the representations you see in the brain.
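The "point in a latent space" picture above can be made concrete with a toy example: represent each object as a vector and identify a percept by finding the nearest point. The vectors below are invented for illustration, not measured neural data:

```python
import math

# Made-up 3-dimensional "latent space" of objects. In the brain (or an
# image model) these vectors would have thousands of dimensions.
latent = {
    "vase":  [0.9, 0.1, 0.0],
    "car":   [0.1, 0.9, 0.2],
    "zebra": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_object(query):
    """Identify a percept as the closest labeled point in the space."""
    return max(latent, key=lambda name: cosine(latent[name], query))

# A noisy activity pattern lying near the "car" point decodes as "car".
obj = nearest_object([0.2, 0.8, 0.1])
```

Moving the query vector smoothly between points corresponds to moving along the manifold of possible percepts that Hodak describes.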
Fascinating. And so this is a real hint that the AI people, I mean that's really good, are on the right track. Yeah. No, I mean, the whole idea that these things are stochastic parrots or glorified autocompletes, these people just don't know what they're talking about. Many people in neuroscience have gone over to AI because they're basically still doing neuroscience, but it's just way easier to do it on the models. It sounds like it's very good news for you, in that there is actually some kind of latent-space mapping, and then the job of Science in terms of being sort of the API to the brain.
Totally. Exactly. It's entirely possible that when you record neural activity from the brain, this is just another latent, and if you can translate this into another model, then you can do, we think, really cool stuff with that. So you have input now. And then you were saying earlier, a lot of the earlier BCI experiments involved figuring out, like, motor... Yeah, so motor decoding is kind of this very classic task, and you can do it any number of ways. But getting cursor control or keyboard control in a human...
That was first done in the late '90s. And so I think a lot of the BCI companies are doing that now just because we know it definitely works. You know, there's some patient need, and it really is just an electronics problem. If you can shrink the electronics so they're small enough and low-power enough that they don't dissipate a lot of heat, so you can close the skin, then that is a big advance. And that, I think, is really the first thing that Neuralink has done. There were prior devices that could do that type of motor decoding, but they required a connector coming out through the scalp.
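As a toy illustration of the classic motor decoding mentioned above, here is a minimal population-vector decoder in the style of cosine-tuning models of motor cortex: each simulated neuron fires most for movements in its "preferred direction," and summing preferred directions weighted by firing rate recovers the intended movement. The tuning parameters and neuron count are invented for the sketch, not taken from any real device:

```python
import math

def firing_rate(pref_angle, move_angle, baseline=10.0, mod=8.0):
    """Idealized cosine tuning curve (rate in Hz) for one neuron."""
    return baseline + mod * math.cos(move_angle - pref_angle)

def decode_direction(prefs, rates, baseline=10.0):
    """Population vector: sum each neuron's preferred direction,
    weighted by its rate above baseline, and take the angle."""
    x = sum((r - baseline) * math.cos(p) for p, r in zip(prefs, rates))
    y = sum((r - baseline) * math.sin(p) for p, r in zip(prefs, rates))
    return math.atan2(y, x)

# Eight simulated neurons with evenly spaced preferred directions.
prefs = [2 * math.pi * i / 8 for i in range(8)]

# Simulate firing rates for an intended 30-degree movement, then decode.
true_angle = math.radians(30)
rates = [firing_rate(p, true_angle) for p in prefs]
decoded = decode_direction(prefs, rates)
```

With noiseless cosine tuning and evenly spaced preferred directions, the decoded angle matches the intended one exactly; real decoders fit tuning from data and contend with noise, but the principle of reading movement off a population representation is the same.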
And as long as the skin is open, there's a risk that an infection will climb down that, and then you're going to have a really bad day. So being able to close the skin is really important. But that was really difficult, because it required really efficient electronics that were small enough to fully implant and also power-efficient enough that they wouldn't get hot. And so I think the thing that made this possible is what we call the smartphone dividend. BCI couldn't have done this on its own, but Apple and Samsung and others have poured epic amounts of money into making these types of electronics exist in the world, so that people like us can use them.
And then it feels like you have a really significant advantage around being biohybrid. I mean, there are all these issues, famously, about trying to electrically stimulate brain cells for a long period of time. Yeah. I mean, I think that there are different products here. On the one hand, that's why I'm doing it; I think that's a good idea. On the other hand, I think some people look at this and they're like, now you have a cell to deal with. You took a device and you added a bunch of biology to it.
And I think we have a good handle on that; that's why we're doing it. But there's definitely a trade-off there. And I think that BCI, we're going to come to see, is not a specific product, in the way that pharma is not a product. I think there are going to be a bunch of BCI companies going after different applications, where different types of probes will make sense. And I think biohybrid in particular is only really necessary for some of the very highest-end things. And on the flip side, it will be harder to deploy for many other important medical needs and important applications along the way.
And it will probably be a little backloaded relative to some other things in that scalable impact. So earlier you were referring to, you know, there's a third part of Science, which is vessel. Talk more about that, because it feels like you're applying a lot of the first-principles thinking that got you here to this thing that is also pretty groundbreaking. So this is our smallest project. There's this field of perfusion. You can think of it as kind of like heart and lung machines. And I was first clued into the need here about a decade ago, when I read an article in a medical journal called The Lancet, which was a case study of a 17-year-old living in Boston who was waiting for a lung transplant.
And while he was waiting for this lung transplant, he was being kept alive on an ECMO circuit. ECMO is extracorporeal membrane oxygenation, a fancy phrase for a heart-lung machine. And in his case, his heart was okay, but his lungs had failed, and so this was keeping him alive. And after a while on the transplant list, he was diagnosed with a complication that made him no longer a priority recipient for donor lungs, and so they took him off the transplant list. And so this article is kind of about the ethical dilemmas of, what do we do with him?
But he's alive because he's... Yeah. He's playing video games. He's doing homework, hanging out with friends. If we turn off the circuit, he will immediately die. Well, don't do that then. On the other hand, he's consuming a half-a-million-dollar-a-month ICU suite. And so there are these quotes in this article from the doctors, about how his family and friends derived benefits from his continued survival, and how this raised fairness questions, because if we support him for a longer period of time, then why wouldn't we do this for everybody? And so I saw this and I'm like, those were great questions.
I need answers to those questions, because there seemed to be this big gap between what was technically possible and what was economic to deploy, for some reason. I mean, that's exactly what being a founder is about. Yeah. Yeah. So I saw this, and there's this database of medical literature called PubMed, and I realized that if I searched PubMed for the phrase "ECMO ethical dilemma," there were multiple pages of results. So this was not a one-off. And when I looked at this literature, a lot of it was talking about how ECMO shouldn't be used as a, quote, "bridge to nowhere," and how many doctors were basically trying to discourage families from even pursuing it in these critical care cases, because it would create this bridge to nowhere, and then what do we do?
And it creates these dilemmas. And then I went and asked some doctors, this was a long time ago, almost a decade ago now, like, oh well, why don't we consider it as a destination? The phrase is a "destination therapy" versus a "bridge therapy." What if the technology just isn't good enough yet and it needs to be improved? It needs to be improved, definitely. But that wasn't even the response that I got. The response that I got was just shouting and throwing things. And so I was like, something feels wrong here.
But I wasn't really in a position then to pursue it. This was always a thing where I saw that there was a really important unification here. The same fundamental type of technology has really transformed organ transplantation. There they call it NMP, normothermic machine perfusion, rather than ECMO, but it's the same idea. Twenty years ago, if you needed a kidney or liver transplant, if the car crash happened at 3 in the morning, the surgery would happen at 4 or 5 in the morning. But now it gets scheduled for the afternoon or the next day.
And over 75% of liver transplants in the US use this type of perfusion technology now. But the systems that exist for this are like $500,000. They can only be moved by private jet. For one of the big companies in the space, it turns out that their private jet logistics business is bigger than their medical device business. And there was just clearly an engineering path that could refine this. And so we looked at this and we thought, well, what if you could refine this to the point where you could check a kidney as luggage on a United flight to the East Coast?
Or what if you could make a thing that that 17-year-old could have brought home as a backpack? Instead, what they did in his case is they stopped changing the oxygenator filter, and a week later it clotted and he died. And that's what happened. There are other problems here, like being able to close the skin around the brain implant. You also need to make it so that the skin can heal to the tubes that connect the blood supply to the circuit, so that's not an infection risk. Otherwise you have to clean it very carefully.
But just overall, there's this huge gap between where the scientific breakthroughs were pointing and what was being done. I think what people don't appreciate is that in many cases, if you want to be a brain in a vat, this basically already exists. You can keep an end-of-life patient alive in an ICU almost indefinitely, but this is very poor quality of life, and so patients ask for that to be withdrawn. Nobody wants to be basically a brain in a hospital bed connected to tubes.
You need to be able to provide a high quality of life, and so you need something that people can live with. And I think, to see this, if you can get vision, hearing, balance, motor control, the ability to be out in the world and doing things, I just saw this very fundamental way to reframe the problems of medicine here. And so, like I said, at Science, even though there are these several different projects, I really see them as one project over the next 10 years. So, you know, you started as an engineer, with first-principles thinking, which often now is quite associated with Elon Musk.
How did Neuralink start? How did you get to know him? And how did all of this sort of come together? Because I first met you when you were doing Y Combinator many years ago, my first stint at YC. So, I got an email one night in early 2016 from Sam, subject line "crazy question," like, Elon's starting a brain-computer interface company, who should run it? And I assumed they were talking to a lot of people, and my first reaction was actually, I had some friends at MIT that I thought, well, these guys are really smart, you should talk to them. But then, like an hour later, I was like, wait a second. And so I emailed him back, like, can I? And Sam introduced me to Elon. And Elon was going around, he'd already had the idea on his own that he wanted to start a company, and he had the name Neuralink.
I also think that he heard my name from enough people that he was talking to at the time. And over the second half of 2016, there was just this group of people, to some degree ever-shifting, that would meet once a week or so in the evening, and that snowballed into Neuralink. And of the initial group, a bunch of them were people that I knew from Duke. So Tim Hanson, the guy who originally had the sewing machine idea, he was in the lab that I came from at Duke. He was a grad student, I was an undergrad working for him. And then the professor that he and one of our other friends had gone to at UCSF, and then a collaborator of theirs.
So it was kind of a very small community. What was that like initially, to talk about the idea of connecting a computer to a human being's brain? Elon, I mean, he saw what was coming in AI much more clearly than many other people, much earlier. And I think the implication of, if you get to this, this can't be a separate thing from humanity, and it needs to merge somehow, that implication was just very clear to him. And so that was the genuine motivating factor: how do we make it so that this allows us to upgrade humanity rather than get left behind?
I mean, if you look at the natural history of Earth, it's not like this is a totally speculative thing. Humanity has totally dominated the planet, and we keep our closest living relatives in glass boxes so they don't go extinct. And so there's a real history here of greater intelligence being very dangerous. In the beginning there wasn't a specific technical idea necessarily, but there was that motivating force, and then the idea was we'd pull together the smartest group of people that he could find, and enough resources to do whatever made sense, and eventually got consensus around what you see now as the thin-film polymer threads.
You're one of the best examples of someone who came from a pure software world and then went into hard tech, and now is actually doing real breakthrough-type research and work that is also commercializable. The people watching, they might be on a similar track. Knowing what you know now, what would you tell the 2016 version of yourself? So, I think there are two things. There's the thing that I did, and then there's the thing I didn't do. The thing that I did, I think, was I had a clear sense of what I wanted, and then I was very high-agency towards that.
When I was in college, I knew that I wanted to work in brain-computer interfaces. There was a great lab doing that work at Duke, where I went, and I was pretty persistent in figuring out how to place myself into that lab. It was in the medical center. They didn't usually take undergrads, so it took me a little while to get in there. I eventually figured out that I could sneak in by taking an independent study in the chemistry department that would be a back door into this primate neuroscience group.
But then really, most of my education in college happened in that lab. So yeah, I grew up programming, and my deepest hard skill is software, but I've been doing primate brain-computer interface, closed-loop neural decoding stuff, since 2008. And so you had to be pretty high-agency and persistent in following through on that. But that only works if you have a sense of where you want to go. And so the first thing is: figure out what you want. The thing I didn't do: after college I started a company called Transcriptic, which was a robotic cloud laboratory.
So the idea was, I also in college had the experience of working in a synthetic biology group, where I needed to go press a button on a device called a plate reader every 3 hours for 3 days to take a measurement that I wanted. And I was like, in software, we wouldn't do this. This just clearly doesn't make sense. We would automate it. This was also the time when AWS was just emerging and cloud computing was becoming a thing. And it seemed very obvious to me that instead of every researcher having their own lab and spending millions of dollars on all their equipment and then needing to press these buttons, what we should build is a central robotic cloud laboratory that exposed APIs that scientists could use to run experiments over the internet.
I did that, raised a bunch of money, and then stepped down as CEO at the beginning of 2017 to join Neuralink. It had millions of dollars in revenue; I felt like we got it to an early promising point. And then since then, over the last decade, that promise was not fulfilled. That was hard mode. That was a slog. In that era, from 2012 to 2016, I strongly identified with Ben Horowitz's essay, The Struggle. And I think the thing that I should have done earlier is go work for somebody like Elon, because that just so dramatically leveled up my ability to do this and know how the game is played.
And I think that often you'll see these really promising kids who are just like, I'm going to do it myself. I don't want to work for anybody else. I'm going to start my own company. I'm going to plow through it. And sometimes that works; who am I to say? But I can tell you that very often, running a startup is an oral tradition. There have been a couple of nucleating times in history where a really remarkable group of people have figured it out from scratch. I think PayPal was like this.
But almost always, beyond that, it's an oral tradition that you pass down from one of this handful of Silicon Valley cultures, and it can make a huge difference on the trajectory of your career to get that right when you're 20 versus when you're 26 or 28. Well, Science is the next. And it sounds like you're assembling, you know, you've already assembled a really accomplished crew of people. And then what we've learned from startups over the years is that once something works, more and more resources, more and more smart people sort of come together. And, you know, zooming out, that's what we really hope happens a whole lot more in exactly the spaces that you're in right now.
So Science sounds like one of those places to go right now. It's pretty cool. Yeah, I definitely feel pretty lucky that I get to do this, because it's such an interdisciplinary problem. To innovate on it, you need all these different areas and really great people in each of them. At the same time, the things you can do today were unimaginable a few years ago, and I think we have the best team in the field.
So over the next 10 to 20 years of Science and BCI, where do you see this going, and what are you most excited about? I have this event horizon at 2035 now. Earlier in my life I always prided myself on the ability to see the future, and I think I have a sense of the next few years, but by 2035 it's just impossible; I can't see past it. I think it is very possible that the first people to live to a thousand are alive right now.
And I think it might be many more people than you think. It's not going to be just one or two people on Earth today. Earth as a whole is at a moment that isn't unique, moments like this have happened before in history, but right now is a time of exceptional change, and it's going to be really, really influenced by the technological changes that are happening: the twin plot lines of brain-computer interfaces and artificial intelligence. People are beginning to get that artificial intelligence is real. It is still not priced in.
People still don't appreciate it. I agree. But they really don't get what's coming, what's possible with brain-computer interfaces. And those are parallel but very distinct stories. Intelligence is going to become widely available to those who have the agency to deploy it, and I'm generally pretty optimistic about that. My p(doom) is not zero, but it's not 50% either; it's well below that. Yeah. I don't know if we'll have cured all disease. In fact, I definitely wouldn't use that term. I wouldn't say we'll have cured all diseases by 2035.
But I think there will be new lateral options that totally reframe how we think about the human condition on that timescale, totally reconfiguring that interface between computers and humans, and between humans and each other. If a brain-computer interface is equivalent to a brain-to-brain interface in many cases, that takes you to totally new territory. Max, thank you so much for joining us. Thanks for building the future, and we can't wait to see what you build next. Thanks, Gary.