Sam Altman on Building the Future of AI

OpenAI | 00:46:11 | Apr 9, 2026
Chapters
Chris Nicholson introduces the forum and frames the discussion around AI progress, its implications for science, society, and governance.

OpenAI leaders debate how ever-accelerating AI could reshape science, economy, and society, stressing broad access, resilient institutions, and careful governance.

Summary

In this OpenAI forum, Sam Altman sits with Josh Ackoff (OpenAI’s chief futurist), Adrien Akafe, and host Chris Nicholson to map the road ahead for supercharged AI. They argue progress is accelerating and that we’re near a tipping point where incredibly capable models will reshape science, industry, and daily life. Adrien Akafe reflects on the researchers’ shift to AI-assisted coding and the urgency of policy work that goes hand‑in‑hand with technical breakthroughs. The discussion also centers on resilience—security, governance, and the need for layered defenses as multiple AIs proliferate. Josh Ackoff emphasizes social infrastructure: universal compute access, worker involvement, portable benefits, and the design of institutions to ensure AI boosts prosperity for all, not just a few. Sam Altman adds that democratizing AI infrastructure is as crucial as safety, envisioning startups with small teams and vast AI power, and contemplating new tax and ownership models in an AI-driven economy. Across topics from healthcare to education to food supply, the speakers stress practical benefits (personalized medicine, cheaper energy, better diagnoses) alongside the imperative to mitigate risks (cyber threats, misaligned incentives, and inequitable access). The conversation ends with a call to continue the public dialogue, inviting feedback and proposing pilots, fellowships, and a May workshop in Washington, DC to advance the policy and infrastructure discussion.

Key Takeaways

  • AI progress is accelerating and we may soon live with super‑capable models that will redefine science and work.
  • A broad, layered resilience strategy is needed—combining safety testing, incident reporting, cyber defense, and societal readiness for AI-driven disruption.
  • Democratizing AI infrastructure (compute access, tools, and education) is essential to spread opportunity and avoid concentration of power.
  • New institutions, policies, and economic models (portable benefits, revised taxation, worker co‑creation in AI deployment) should be explored to distribute AI prosperity.
  • Startups can be dramatically democratized by AI tooling, enabling small teams to build transformative ventures with vast compute at their disposal.
  • AI can expand access to healthcare, elder care, and education, while also demanding new governance to safeguard privacy, jobs, and security.
  • There is a need for ongoing public discourse and feedback channels (policy emails, fellowships, and open workshops) to shape AI’s trajectory.

Who Is This For?

This is essential viewing for policymakers, AI researchers, and startup founders who want to understand how to shape AI’s trajectory, governance, and practical adoption so that benefits reach broad society rather than a privileged minority.

Notable Quotes

"The rate of progress is continuing to accelerate and we believe we are very close now."
Altman argues progress is accelerating toward near-term super-capable models.
"If we can really go make a decade's worth of scientific progress in a year, if we can cure diseases, find new materials for cheap safe energy..."
Altman highlights transformative scientific and economic benefits.
"There will be many AIs in the world, and an emergent response across society will be required."
Ackoff on multi-agent risk and societal governance.
"Democratizing AI infrastructure means making the models so good and so capable that people can start startups and make discoveries broadly."
Altman on widespread access to AI for entrepreneurship and innovation.
"We need to modernize the tax base and distribute prosperity broadly in an AI economy."
Policy direction on economic reform in response to AI.

Questions This Video Answers

  • How will AI progress affect jobs and new startup opportunities in the next five years?
  • What layered resilience strategies are suggested to defend against AI-enabled cyber threats?
  • What policies could ensure universal access to AI tools and prevent wealth concentration?
  • How can AI improve healthcare delivery and education at scale without increasing risk?
  • What new institutions might emerge to regulate and socialize AI benefits?
Tags: OpenAI forum, Sam Altman, Josh Ackoff, Adrien Akafe, AI governance, AI safety, AI policy, universal compute, portable benefits, AI startup democratization, AI in healthcare, AI in education, cybersecurity with AI, biothreat defense
Full Transcript
Good afternoon, everyone, and welcome to the OpenAI forum. I'm Chris Nicholson, and I'm glad to be here with all of you today. The forum is a place for serious conversation about how AI is being used in the world, what we're learning from that, and how more people can help shape its trajectory. Today's conversation focuses on one of the biggest questions in technology: what it will mean as AI systems grow dramatically more capable, and how we should think about their implications for science, work, our life together, and governance. To discuss that, I'm joined by Sam Altman, co-founder and CEO of OpenAI; Josh Ackoff, OpenAI's chief futurist; and Adrien Akafe, a longtime researcher here. So, let's get started. Sam, the blueprint we released this morning talks a lot about superintelligence. A big question on my mind is: why are we doing that now? And what are some things that you can see from the inside that you wish everybody knew?

The biggest reason is simply that the rate of progress is continuing to accelerate, and we believe we are very close now. And this won't be a one-time thing; this will be a ramp, over the next few years, to powerful models that will impact the world in important ways.
The researchers who are working on these models did, I think, an incredible job with this set of ideas, and these are meant to be early ideas to start a discussion. I'm sure we'll get to much better ideas as the world debates them. We are staring down the models that are coming in the pipeline in front of us, and we may be wrong, we may hit some wall, we are imperfect. But given what we see, we expect to be in a world of extremely capable models quite soon, and then for the ramp of capability to continue to increase. I think this will have huge impacts on the economy, on the way we live, and on what we can do. One thing I've observed from watching the world go through some number of transitions is that the more time the public, our leaders, and the political system have to debate ideas before you really have to make a decision, the more likely you are to make a good decision. So starting this now, given what we see coming, so that as this becomes a very large issue of public debate, I think is important.

For sure. And speaking of debate, we brought in a lot of researchers, Adrien, very early in this process. I think it was maybe a unique exercise here, in how many researchers were working with the folks who also think about policy a lot. What was that like for you and for the research group as a whole?

Yeah, it was a very interesting experience. It was my first time working actively on a policy doc, and it was a little bit humbling in some cases. Sometimes as researchers we can have these abstract ideas of, oh, we should really be thinking about the economic impact of this, or about policy for safety.
But it's one thing to think about this and another thing to actually put pen to paper and come up with concrete policy ideas that are going to be debated by your peer researchers. So that was an interesting experience for me. I hope it was very helpful for the final product that we had the researchers involved so early and so deeply, especially since it's so forward-looking. To Sam's point, you need people who are dealing with this technology every day, who know how to build it, who know how the safety stack works, and who are also seeing the speed of progress. One thing I remember from the past few months we were working on it: a lot of researchers went through a transition from writing most of their own code to having AI write most of their code. I think that, to some extent, led them to bring a lot of urgency, a sense that this technology is real and important and moving fast, in a way that not everyone can see. And that's part of your earlier question about why now: because we are seeing this urgency.

Can I tell a quick story that this reminded me of? There was a night, I guess it would have been late January or early February of 2020, when the OpenAI researchers got obsessed with COVID before the rest of the world did. We were talking about it all the time, we were watching the numbers every day, and we were saying, this is going to happen. We were making plans to go work from home, and there was some article that came out mocking us, because they were like, these crazy people at OpenAI.
We had put copper or something on some of the door handles, people remember that, and some journalist wrote about it. There were all these things going on, and we made our plans. We assumed a shutdown was going to happen, and we felt there was this thing coming and the world wasn't paying attention. For whatever reason, something about working on exponentials makes you understand these things better, so I think we were a group of people who were primed to see it. And then there was this one night, a very cold night. I lived in the Mission at the time, and I thought, I'm about to get locked in my house for a while; I'm going to go for a walk one more time, because who knows what's going to happen. I went on this long walk through the city, hours on a cold night, watching people breathe in each other's faces in restaurants and bars through the windows. I was wearing my mask, looking crazy, and there was one other dude out wearing a mask, and we kind of nodded at each other. Other than that, life felt totally normal. I have not felt that so acutely as I do again in this moment, where there is this crazy change. The change has already happened, the models have already hit some level, and society has not digested them yet. We feel like we see it clearly, and we are trying to tell the world that it's going to happen. It is hard to get this across, but it feels like that night at the very beginning of COVID, walking through the streets again.

That's really interesting. And you've talked a bit about the upsides to unlock, and a lot about science as a means to do that. So if we're going to contrast this with COVID, what are some of the hugely positive things in the pipeline for us? I will answer that.
I think the positives are so positive that we should talk more about all the things we're thinking about here. You mentioned science. If we can really go make a decade's worth of scientific progress in a year, if we can cure a ton of diseases, if we can come up with personalized medicine for people, if we can find new materials for cheap, safe energy, if we can make it such that anybody who can come up with an idea for a startup can have the AI implement it, write the piece of software they want, or have a custom video game that's the most fun for them to play, this stuff is all wonderful. Now, part of the reason for putting this out with urgency is that there are a lot of things coming that we'll need to mitigate. I assume we'll figure it out; I'm an optimistic person by nature. But the benefits of this technology so change the option space in front of us that one of my first reactions to reading the blueprint, after the researchers wrote it, was that it is awesome, and also a little crazy, that we can credibly be talking about these kinds of things. And if the AI is as good as we think, and all these wonderful things happen, we also have an incredible new tool at our disposal to help us mitigate the potential downsides. It's a tool for everyone.

Josh, I'd like to ask you this. We've talked about this as social infrastructure that changes the way people work and learn and participate in society. What responsibilities do we have, and do institutions have, to help society get ready for that? How do you think about it?

Yeah. One of the things I think about here comes back to the broad benefits for everyone. For a long time, society has had this great aspiration that everyone could have food, shelter, electricity, and healthcare.
And we've always wanted to see some kind of new step toward providing those things for everyone, so that everyone can focus on what's most important in their lives. We've been told over and over again: actually, we can't have that, it's too expensive, there's no way to pay for it; it's nice, but it would be too burdensome to society. I think what AI and superintelligence will unlock is the freedom to do all of it at a much lower cost than has ever been possible. Correspondingly, for those of us who are the stewards of the technology, we have a special responsibility to make sure we actually fully realize those benefits, and to be a part of building policies and systems that help working people, middle-class people, and people in low-income countries, so that it benefits everyone and not just the very wealthy. Those are the things I think of as key responsibilities here, and I'm very excited about it. Plus, the downside risks are also quite serious, and there are systemic shocks that could happen. We have to prepare, we have to be thoughtful, and we have to be honest about them.

So that sounds like a resilience question. When I look at the document and speak with you folks, it sounds like you think of resilience in layers, and a lot of it comes not before the AI goes out but after, in terms of our responses to AI and how people are prepared for it. I'd actually like to ask each of you how you think about resilience. Adrien, why don't we start with you?
Yeah. One distinction I would draw: classically, we've thought about safety as making sure we run safety evals on models and implement mitigations. To your point about resilience in layers, that's a very important layer, and we should keep doing it and keep expanding it. But at the same time, you want society to be prepared for the possibility that some actors will do less safety testing. What happens then? How is society resilient to risk from AIs in those cases? Maybe there will be incidents in spite of our safety testing, or near misses. In the blueprint we talked, for instance, about incident reporting, modeled a little after how the aviation industry does things: whenever there's a near miss or any incident, however minor, it gets reported to a database, so all the companies can know, okay, this is a risk, and here are mitigations you could implement. So there's a lot that could happen at a society-wide level. Another layer is defending against risks. We're talking about models that can code a lot better now; that also implies cyber capabilities. Could they help bad actors run cyber attacks? Part of resilience is ensuring that we make our software systems more secure, and that we use AI to do it. So again, as you said, there are all these layers. That's how I take the resilience question.

Sam, before the show we were talking about how prosperity can be emergent when everybody has access to AI. It strikes me that resilience can be emergent too. How do you think about it?
Yeah. I think our original AI safety thinking, and the field's in general, call it classical AI safety thinking, was that there would be a very tiny number of AIs in the world, that the only thing that mattered was making sure those AIs did the right thing, and that as long as you aligned them and they didn't do unsafe things, the world would be okay. I think the picture now is actually more stable but more complex. There are going to be many AIs in the world, and it will not be enough to say that this one company is going to make sure its AI never does something it shouldn't. There will need to be an emergent response across society. Adrien touched on some of this, but take a few examples of threats we expect. Cybersecurity is definitely going to become a huge issue. AI will be incredibly good at finding vulnerabilities in software, and I think the world will find that its software is much more brittle and much less secure than we thought; humans just had limited capacity to find the exploits. It is not enough to say that one or two or three model providers will make sure their systems won't do this, because being good at writing code also helps find security problems, and even if all of us could somehow prevent our models from ever being used for this, open-source models are coming soon that are good at code and thus good at security exploits. What has to happen is that the world uses these models. There can be differential access: you can give it to good, known, trusted defenders first. We have a program for that, and other companies will do similar things.
And you have to empower the companies that defend software, because there will be some power plant where no one has understood the software for 20 years, no one can patch it, there's a big problem with it, and you have to do something about that. A resilience approach here says: okay, there's this new thing in the world, AI that is really good at exploiting computer systems, so let's use AI to defend against it. That is not a one-company thing; it's going to require a huge effort. If we go a little further, I think there will be a bio version of this, where classically people have said, well, we're just going to restrict our models from being able to develop pathogens. Someone, at some point, is going to use some model to develop a pathogen, and the world needs defense shields against that: detection systems, rapid-response treatments, a whole bunch of other things. This doesn't get us off the hook in any way on aligning our systems and building safe systems; we still have to do that. We get a time advantage there as long as we stay at the frontier, but we do need the world to do its thing, for society to have this emergent magic and build these layers of defensive shields. There are many threats besides those two, but that's enough, given we're short on time.

For sure. Josh, you've shared some interesting thoughts with me about how each huge technological shift has produced new institutions and new democratic mechanisms, and you've been thinking a lot about new institutions that might emerge now. What are your most exciting ideas, and what would you really like to think about in terms of the collective response to superintelligence? Yeah, certainly.
So, one thing I'll say first on the subject of resilience, just as grounding: a lot of the problems where we're concerned about AI creating new externalities are problems and vulnerabilities that exist in the world regardless of whether AI is present; AI just increases the urgency of action. Coming back to COVID, we found back then that everyone had a much deeper dependency on the functioning of supply chains than most people were previously conscious of. Supply chains are super important: supply chains for food, for goods, for everything that sustains civilization. And of course there are other types of vulnerabilities, like democracy. We've been debating for years the real risks when people influence society inappropriately, and we're worried that AI is going to make these things potentially easier to attack in the near term, because there will be tools at people's disposal that they haven't had before. I'm excited about the possibility that we can build new institutions and state capacities to use AI to rectify some of these vulnerabilities. We can systematically identify them with AI in ways we couldn't in the past, we can systematically close them in ways we couldn't in the past, and we can use AI to scale up efforts to combat certain types of issues, potentially making it too expensive for attackers to really do something. On cyber and bio, I'm optimistic that maybe we can build an ecosystem of defenders that, all together, makes it so expensive for attackers to try to run a cyber attack that there just won't be much incentive to do it. And I hope we can fully implement the bio-resilience side of things, not just for pathogens that might impact humans but especially for the food supply chain. This is one where I have a hobby horse, and I'm going to talk about it every chance I get.
I think people under-attend to food supply chain biorisks. But we can use AI to make that resilient at scale in a way that today is cost-prohibitive. So I'm very excited about the things we can build there.

Neat. One more question. When we talk about the individual transition for many people, what work looks like, where value shifts to, I know you've been thinking about that a lot. What do you see happening as folks transition to other ways of creating value in an AI economy?

Oh goodness, that's very broad. I think people will have a lot more opportunity to exercise agency. If you can start a new business and have a team of AIs that handle all of the functions of putting the business together that you have no expertise in yourself, you can get something off the ground an awful lot more easily. There are a lot of ways the economy just fundamentally changes when you give people access and tools to help them do more than they could before.

For sure. Sam, similar question. You have so much experience with startups: running them, managing Y Combinator, being in the ecosystem for so long. How do you see startups changing, and our potential to realize new ideas changing, with AI?

I'm obsessed with trying to explore this space. I don't know exactly what it's going to look like, but this idea of one person or a very small team being able to create an entire startup pretty quickly, as Josh was talking about, all my instincts say there's something deep and important to figure out here. Every time in our industry that the friction cost, whatever you want to call it, of starting a startup has come down a lot, amazing new things have happened.
I remember one of these transitions; I was doing a startup when AWS came out, and all of a sudden there was this idea of a cloud, and a small startup didn't have to do all the crazy things you used to have to do, managing racks in a closet. That was an amazing change in what you could do as a small startup with a few people. The one that's coming is much bigger, and there have been several in between. But I really want to find out what it looks like when a startup is two or three people and a ton of GPUs, and you can really democratize who can start a startup.

Yeah. It's that democratization aspect, and it gets down to the widespread availability of AI. What's the best frame for thinking about bringing more people in and democratizing AI access?

I think when people talk about the democratization of AI, they mean two different things. One is shared access: making sure that everybody gets to use sufficient AI to improve their own lives, build things for other people, all of that. The other is a voice in where it's all going to go. I think both are very important. Part of why we do things like this blueprint is so that we're debating the issues as a society. It's also part of the reason we release products in the first place: people don't have a feel for this if people aren't talking about it. That is a prerequisite to people being able to have input, but it's not enough. You also have to have a way to listen, so that people's input is captured back into the system. That's one thing I think is really important. The other is that we need to put not just services like ChatGPT, but the real-deal, high-compute, valuable services, where people can start a startup or make a scientific discovery or whatever, broadly into many people's hands.
And that's going to take new economic models, or much cheaper inference, to spread more widely. Right, and that's an infrastructure problem, among many other problems to solve. We used to talk for years at OpenAI about when we were going to get through the compute crunch, when we were going to build enough compute that we wouldn't be so strapped. I don't think we ever get out of it. If we do our job, if we keep driving the cost of intelligence down and the capabilities of intelligence up, then effectively the demand is uncapped. And in the worlds where we don't build enough infrastructure, I think you get a crazy concentration of power and concentration of compute, because people will just bid the price up and up and up. So the only thing I really believe in as a long-term democratization strategy is to make so much AI infrastructure available, and make the models so good and so capable, that we get to a world where people say, I need help coming up with ideas for what to use all this compute on. I don't think we will in practice. But I certainly think that if compute is very limited, the richest people and the richest companies in the world will just bid the price up to an extreme degree, and it'll be another kind of scarcity that is monopolized. So more data centers is actually a very egalitarian initiative, in the sense that they can make AI access more widespread. Look at many examples throughout history: one of the best things we ever did for really increasing people's quality of life was to drive the price of electricity, and of energy broadly, way down. Energy correlates incredibly well with quality of life, or at least it has for a long time. Maybe now it'll be more about AI.
And by making energy abundant and shockingly cheap relative to what it cost 50, 100, or 200 years ago, we have done quite a bit to lift the entire world up. I think we need to do the same thing with AI, and that means you need a lot of it, and, just like with energy, you need to innovate new ways to make it much more affordable.

Can I give a thought on this whole issue as well? We've talked a little about broad access to AI, which I think is very important and can create a lot of new products that will be very helpful for the world and very useful for people. One thing we also tried to think about in writing this blueprint is ensuring that ordinary people, who maybe aren't all going to start a startup, aren't left behind by this technology. I think this is related to something we might talk about, which is how AI changes the composition of the economy. Does it move it more towards capital, or towards labor, where the labor is really done by AI? One of the things we talk about in the blueprint is how you modernize the tax base for an economy like that, and how you distribute the prosperity that will be created by this technology. How do you make sure this is prosperity for everyone, and not wealth for a relatively few people? That's something we try to address in the blueprint, but I'm curious if we can talk about it here as well.

We totally should. A couple of the big ideas that I saw were about workers co-authoring how AI is deployed in their workplaces. I'd like to ask you about that, Josh, and I'd also like to ask all of you, more broadly, how society can capture the upside.
What are the institutional forms that will take? But let's start with you, Josh.

Sure. Just on this compute question for one second: the compute allocation problem, figuring out what things we use compute to help people do with AI, will probably be one of the most important society-wide questions to navigate over the next few years, while compute is relatively scarce. We should try to boost the amount of compute in the world as much and as quickly as we possibly can, so we don't face painful trade-off questions where there's some extraordinary good we could provide to everyone and we're stuck with a hard question where someone says, well, how are you going to pay for that, because the cost of compute is so high. On getting workers involved in AI: I actually want to back up and acknowledge an elephant in the room, which is that a lot of workers are concerned about AI. They're worried about what AI means for them. They are not immediately excited at the prospect of figuring out how we're going to use AI in the workplace; they're thinking, oh my gosh, is the AI going to replace me? What I think is the important first step here is that those of us who are working on AI, and who are the stewards of this, have to put out things like this blueprint document, where we say, here's how we're going to advocate for policies to make sure that the economy is fair and that you are supported no matter what. Then, given that we've built that level of safety net, we can talk about something where workers have confidence in us; we can talk together and have a good conversation. We should figure out how to empower unions to make wise choices about where and how to use AI, and how to empower workers to participate in conversations about the acceptable use of AI in the workplace.
I think a lot of folks are rightly concerned about AI surveillance in the workplace. Making sure that workers are part of the decisions about those types of things feels very important. And a big push on AI literacy, to ensure that folks get the tools they need to use AI to make their lives better, to start small businesses, to do all the kinds of things that will help them realize the potential, for sure. What are some institutions you've considered that allow everyone to capture some of the upside, Josh? For new kinds of institutions, I think state capacity to measure pieces of the economy in greater granular depth, so that there can be responses if there's an economic shift in one place or another. I think AI is actually a very exciting tool for making that kind of monitoring more scalable and less expensive than it would have been in the past. I also think there can be more institutions that sit in between corporations and governments, which have very different levels of accountability governance-wise. There is maybe a need for something with a more in-between level of accountability that can provide social-safety-net-type services, something between the corporate board and the regulator. Yeah. This can't all just happen in private companies that have very minimal governance, and we also don't expect that government, which moves fairly slowly, is going to do all of it quickly. So we maybe need some in-between institutions that can help us prototype things. This is a little bit of spitballing, and I don't think it's quite covered in the blueprint, but you asked, and that, in the vein of new institutions, is the kind of thing we could do. Cool. Sam, institutionally, how are you envisioning that people might broadly participate in the upside?
I think we talked earlier about very broad access and giving a lot of people a lot of compute. You hear these ideas like universal basic compute, or other things with nice branding. But really what they mean is this: instead of the traditional thinking of giving people a monthly stipend, or money, or whatever, when AI does all the jobs, I think it's way better to say that people are actually pretty good at knowing what they need and pretty creative about figuring out how to use things, but if people are boxed out of access to this resource, that will be a challenge. I do suspect that we're going to have to make changes to how we tax. In a world where AI is doing most of the intellectual work, or at least most of the work of today, we probably are going to need to explore some way to tax that instead of taxing human income in the traditional sense. I suspect that we will need to provide new kinds of transition assistance, unemployment insurance, things like that. And I suspect that eventually we will need to think about how people get to be owners in the upside of all of this in new ways. Capitalism depends on a certain balance between labor and capital, and if that gets totally out of whack, then the current system is not going to work and there will have to be some sort of evolution. What that looks like is, I think, a very open question, and again, part of the goal here is to throw out some ideas, but there are many more. And I will always leave some room and say maybe we're wrong, and maybe no change at all is required, and somehow this just works differently than we think. But in the spirit of trying to use the time we have to think and debate, this seems like a good time to start on ideas. Can I say something on this maybe-we're-wrong aspect?
In the blueprint we have proposals around modernizing the tax base, and we even talk about maybe a 32-hour work week, and these types of things. And I think there's something important here about trying to create countercyclical measures, where, conditional on disruption from AI, we have additional unemployment insurance, we have measures like the 32-hour work week. It's fairly important to me that, while maybe some of these measures are good in the current world, I think many of them would be quite disruptive, and we're really talking about a world that changes a lot. So to me, institutionally, something we need is thinking about what potential disruptions could occur from AI, and what things we can implement as we see those coming, to counter the disruptions and distribute benefits broadly at that time. Yeah. And I saw one of the ideas in the report was portable benefits, since benefits are so linked to employers in America now, right? Correct. Yeah. And this is, I think, an example of one of the more US-focused proposals, of course. But I think it's a great idea. I think it's insane we don't already have that, and the way the US benefits world has evolved is really bad. No one should lose their healthcare if they lose their job. That just shouldn't happen. Agreed. So, Adrian, from inside the research organization you're seeing this acceleration. It's real for you. You can see some of the scientific progress that's being made with these models. We want institutions, we want society, to keep pace with technology.
What do you think the window is for adaptation? Well, I was going to say it's hard to put a number on it; I would say there's a lot of uncertainty about these numbers. We've talked, I think, about having an automated researcher in 2028, or late 2028. And March of 2028 is the official goal. What's that? March of 2028. March. Thank you. One useful thing to think about here is that once you have this automated researcher, an automated AI researcher capable of doing AI research, you potentially have a double whammy of disruption. First, you have a model that is clearly capable of advanced cognitive work, which AI research is, and so that in itself is disruptive. But it might also accelerate further AI progress. I can't tell you exactly how much progress we'll have made a year after that point, but it's probably more than the pace of progress we've been making so far. That's the type of window we're talking about, maybe. Yeah. Thank you. Okay, so we are at the point in the conversation where we open up questions. It's not just me anymore; it's members of the community. I've got one here from Svetlana Romanova: as AI becomes more capable, which human qualities do you think will matter most in the future? I'd like to ask each of you one by one. Josh? I think there are human qualities that are timeless and that will always matter: character, commitment, effort, compassion. I think they're going to matter a lot. All of those, plus creativity and understanding what other people want. But I will share a recent anecdote that really struck home for me about how things don't quite go the way we expect. I went to my first robot cafe. I was so excited to try it; I thought I was going to love it.
And it was the most underwhelming experience. I thought I was someone who did not need the barista at Starbucks to smile at me and say hi and ask how my day was going. I really thought I didn't care about that. It turns out that I really want that. Walking in to push on the screen and have the robot do the thing and give you a delicious cup of coffee was a deeply unfulfilling, I-don't-want-this experience. And I thought I wouldn't have cared. I totally agree; those small interactions throughout the day, I really appreciate them too. Okay, here's one. Should those... Oh yeah, that's fine, go for it. Well, what I was going to say is a little bit along the lines of what Sam said, and I believe it's in the blueprint: humans need each other. I think we care about other humans. In fact, to some extent, one of the scariest things about the development of technology, social media, video games, and things like that is that maybe it has caused us to lose a little bit of the connection we used to have. But I still think it's something that matters tremendously to people, and they will recognize this more and more, in fact, as AI becomes more advanced and can do some of these other tasks that don't require human connection. So that's one big thing that I believe will expand. It seems like a good thing to me, and this can of course be a type of work, right?
Nursing, and all these types of work in the care economy, teaching. But it's also just important to our lives, right? And to me that's going to be the big quality: how good you are as a person to other people. Yes, I agree. Okay, so in America, things like healthcare, childcare, and elder care are very expensive. Here's a question: how can AI expand access to those for everyday people? Josh? One thing I am so excited about is the way that AI can help provide the best healthcare in the world to everybody. I've heard a lot of stories from folks navigating the healthcare system who didn't know where to go, how to navigate the insurance, what specialists to talk to. You hear stories of people bouncing around for a diagnosis for years and years and not getting anywhere. You hear about folks who are stuck in a healthcare system where, even if you have the most caring, compassionate nurses and doctors, there's not enough time to give everyone the best quality care. And I think that AI is going to make it possible to deliver the best quality care at scale. I don't think it's going to replace doctors. I think it's going to make their workloads manageable, and I think it's going to make it possible for patients to get the best experience possible. Yeah, I agree with that. A lot of patient empowerment out there. We have all told stories many times of things we've seen on social media about people having an amazing healthcare experience. I don't have anything that amazing, but I'll tell my own, because at least it happened to me. I recently got a blood test. Nothing seriously wrong, but a few markers were just kind of out of their range, and you scan down; there are hundreds of things, and you obsess over the ones that are a little bit out of range.
I asked my doctor, and he was like, you know, those are probably all close enough, it's fine. I put it in ChatGPT. It was like, yeah, you're fine, but here's what's going on: take this one supplement, get your blood tested again in a month, and you should be okay. And I did it. Again, I wasn't that sick. But the fact that I could just upload my blood test and instantly get the right answer for a kind of complex thing was an amazing experience. I think there will be many things like that across healthcare, education, all these other areas. Elder care, you know, that's one where I think we very much want people doing it. But there will be a lot of ways we can really drive down the cost of healthcare and education, things like that, for sure. And even personalized learning: it can still be humans, but you can figure out how to teach individuals, right? How do you think about it? Yeah, I would say similar things, of course. Another basic thing in terms of healthcare, at least I hope, is simply that AI will be able to help medical research; that's a basic thing that would make healthcare better for people. But also, I just talked about the care economy, and to the extent that AI can facilitate some of the more bureaucratic, frankly, aspects of healthcare, and help us have more people actually providing the actual health care to individuals, that seems like a positive thing. It's not that AI would be providing the care itself; we'd be freeing up people to actually do this work. Yeah, one can hope. We've talked a lot about the capability overhang this year, how AI now is capable of so many things that most folks are not leaning into it for.
And I sometimes suspect that it's because they're resigned to the world as it is, right? I think PG calls it schlep blindness. They have ceased to consider that something better is possible. I think they're going to start leaning in more, and when they lean in, their behavior is actually what will change society as much as any technological breakthrough. Just that moment of adoption. In your own lives, what are the moments where you've seen somebody kind of light up and realize, oh, I can do this, I don't have to live resigned anymore to the old ways? Wait, I've taken a lot of questions first. Adrian, do you want to take a crack? Now you get time to think about it. Oh my god. Wow, putting you on the spot. Yeah, maybe I do want time to think about this. But to your point, I think it's definitely the case that people take a while to adapt to new technologies. To some extent, the capability overhang might be large in terms of capabilities versus how much people are actually using them. I think there's an extent to which that's a function of how fast the capabilities are improving versus how fast people are used to things getting better. I have a favorite to mention. There are many options here, but my favorite of all is watching parents who are coders by training watch their kids use Codex for the first time. A kid who has a bunch of ideas and no idea what the traditional limits are, what would be hard or easy, just starts describing a video game and having Codex make it, and you see the kind of creative journey the kid goes through. Often you see the kid doing this mostly by voice, and the parent is just like, "That's not going to work."
And then it works, and then they're like, "Wow, my kid is going to grow up in a world where he or she just expects this." And I kind of still wouldn't have even thought to try that, because I would have been so certain it didn't work. So watching the kid do it for the first time through the parent's eyes, especially if the parent is a software engineer by training, is awesome. That's fascinating. This is a tale as old as time, right? Like the kids always knew how to use the VCR in the 80s, or whatever. Interesting, that's funny. Just on the capability overhang and how fast people notice when a capability has arrived: there's an interesting time-scale mismatch, where people who aren't super in the know on AI have this distant awareness that something is happening. They know there's a product out there. Once every few months, they might check it out. They don't immediately and instinctively probe it to the maximal extent of its capabilities, and they often don't put it on the thinking setting. They don't know that reasoning models have happened; they stay on the default chat model that's right out there. And so they wind up with this misperception that things aren't moving as fast as they are. And you hear people talk about, well, there are hallucinations, it's slop, it's making mistakes, it's inaccurate, why are they telling us it's going to do these great things? This visceral belief gap is, I think, an issue that will get overcome when they start to see other folks and institutions very successfully use AI at the maximal reasoning settings, at the most capable settings, in ways that are shocking to them. Like this video game example, but at scale, for society as a whole.
They're going to see a lot of people get diagnoses that they wouldn't have expected someone could get quickly, and it's going to update them. I think that's just an interesting phenomenon: the time scale for AI progress is weeks and months, and the time scale for people currently checking back in is every half year or something. Some big change will happen when people realize the maximal extent of today's capabilities. Yeah, I agree. The point you brought up, Sam, about the kids really illustrates for me what an advantage creative people have. There are some folks who are just a font of ideas. Many of those ideas have never been realized. Some are scientists, some are artists, many are children. And it feels like the floodgates are opening for them to realize more things. I think you mentioned that you would actually burn through your Codex list, and you're not having it run all night anymore. We just need to make a model that helps you come up with good new ideas. I actually think this will be one of the most exciting things to do. I don't think we're that far away from a model where you can say, "Go look through all my text messages, all my email, look at my entire computer, anything you can find about me, and just suggest ideas that I've gestured at as fragments or that might be interesting to me." Yeah. And then I'll build those. Agreed. A thought partner. I see a lot of people using it like that. Well, I think we're wrapping up here. So, thank you all for joining us today, and thank you Sam, Josh, Adrian. This has been pretty cool. This was a lot of fun. Thank you. So, the ideas discussed may sound ambitious to you, and they're meant to be. But we know that they're also early and exploratory.
We're offering them not as a final plan but as a starting point in a very public conversation with policymakers and everyone in society, to encourage more discussion, research, and debate. OpenAI, the company, wants this conversation to continue. We are inviting feedback through a new email address: newindustrialpolicy@openai.com; that's "new industrial policy," all one word. Please send us your best ideas. We're launching a pilot program of fellowships and focused research grants, for up to $100,000 in funding and up to a million dollars in API credits, and we're convening further discussions at a new OpenAI workshop in Washington, DC in May. So, thank you again for being part of this forum. We appreciate the time and thought that everyone is bringing to this conversation, and we look forward to continuing it.
