OpenAI Town Hall with Sam Altman
Panelists discuss shaping next-generation builder tools by gathering user needs and questions to clarify what to build for the community.
OpenAI’s Sam Altman rallies builders to shape useful AI tools, stressing interface variety, cost/efficiency gains, and responsible, human-centered innovation.
Summary
OpenAI’s town hall with Sam Altman gathers builders, developers, and researchers to imagine the next generation of AI-powered tools. Altman emphasizes that powerful models will redefine what it means to be an engineer, predicting more people will create value by getting computers to do what they want rather than writing code alone. The discussion covers practical concerns like go-to-market challenges, UI evolution for multi-agent systems, and how specialization vs. general-purpose models will shape future products. Panelists ask about economic impact, memory and personalization, and how to balance rapid AI advancement with security, privacy, and policy. Altman highlights AI's deflationary effects, cheaper inference, and the potential for software creation costs to drop dramatically, while cautioning about concentration of power and the policy work needed to mitigate risks. The audience probes agents running long workflows, durability of product layers, and the need for tools that help generate high-quality ideas. Throughout, Altman reinforces a willingness to listen, and to build user-facing primitives, memory architectures, and collaboration features that fit diverse workflows. The event closes with a call to invent together, inviting attendees to propose the exact APIs, runtimes, and UX concepts that OpenAI should prioritize next.
Key Takeaways
- AI will dramatically lower the cost and speed of software development, enabling many more people to create value with computers.
- Future interfaces will accommodate a spectrum—from multi-agent orchestration on many screens to calm, voice-driven interactions, and OpenAI plans to support a variety of user preferences.
- Specialization will not replace general models; OpenAI aims for future models that perform well across coding, reasoning, and writing in a single architecture.
- Memory and personalization will become core features, with users potentially granting AI access to their devices and data to tailor responses—while ensuring strong privacy controls.
- Go-to-market remains hard even with AI; differentiation will come from durable value propositions, creative GTM, and better idea-generation tools that AI can augment.
- Security, biosecurity, and resilience are essential as AI tools scale; a shift from blocking to resilience-based approaches is needed to handle real-world risks.
- The role of humans will evolve toward collaboration with AI, not replacement, with teamwork and trusted human guidance remaining crucial for complex tasks.
Who Is This For?
Founders, developers, and product leaders who want to understand how to design and ship AI-powered tools and agents, while balancing cost, security, and user experience for scalable adoption.
Notable Quotes
"I think what it means to be an engineer is going to super change."
—Altman predicts a fundamental shift in the engineering profession driven by AI-enabled automation.
"Intelligence is a surprisingly fungible thing and we can get really good at all these things in a single model."
—Altman argues for broadly capable, generalized models rather than narrow specialists.
"The good news is that AI is going to be massively deflationary."
—Altman frames AI as a driver of cost reduction and broader economic empowerment.
"There will be people who want to be in a calm conversation in voice mode where they're saying something to their computer once per hour."
—Highlights the spectrum of user interfaces that OpenAI anticipates supporting.
"If you are an AI builder, this is not the best time to be in university right now."
—Altman shares candid career advice for aspiring founders amid AI’s rapid evolution.
Questions This Video Answers
- How will AI redefine the role of software engineers in the next decade?
- What are the best practices for building multi-agent workflows with the Codex SDK?
- Can AI significantly lower GTM costs, and what measures ensure durable market advantage?
- What are the main safety and security concerns when deploying autonomous AI agents at scale?
- How should product teams balance general AI capabilities with specialized models?
OpenAI Town Hall · Sam Altman · Codex SDK · Agents and multi-agent orchestration · GPT-5.x and 5.2 · Memory and personalization · Go-to-market strategy · AI deflation and economics · AI safety and biosecurity · Human-AI collaboration
Full Transcript
Okay. All right. Thank you all for coming. As we start to think about the next generation of tools for builders, and how we're going to use these incredibly powerful models that are starting to come online, we wanted to hear from you all about what you'd like and what's on your mind, and answer any questions. I hope we come out of today with a much clearer sense of what to build for you all and how to make these incredibly powerful models useful. I figured I would start with a question from Twitter.
Where do you fall in the Jevons paradox of software engineering? If AI makes code dramatically faster and cheaper to create, does that reduce demand for software engineers, or does cheaper custom software massively increase demand and keep engineers employed for decades? I think what it means to be an engineer is going to super change. There will probably be far more people creating far more value, and capturing more value, by getting computers to do what they want, getting computers to do what other people want, and figuring out ways to make these useful experiences for others.
But the shape of that job, and the amount of time you spend typing code or debugging code or a bunch of other things, is going to very much change. This has happened many other times in engineering, and each time it's happened, so far at least, more people have been able to join and be productive, and the world has gotten much more software. Demand for software seems to not be slowing down at all. And my guess about the future is that a lot of us will use software that was written for one person or a very small number of people, and we'll be constantly customizing our own software.
So I think many more people will get computers to do the things they want to do, and it will be a very different way than we do it today. If you count that as software engineering, then I think we'll see much more of it, and a greater percentage of the world's GDP will be created that way, and consumed that way too. Any questions live? Otherwise I have a long list. Go ahead. Well, first I want to thank you for giving us this opportunity to be here and ask you these questions.
From a consumer standpoint, I use ChatGPT heavily, and I'm always on Reddit seeing everyone building, whether it's using Codex, Lovable, or Cursor. But it seems like the new problem is GTM, right? I can build these things, but how do I find people to gain value from what I'm building? I see it as a bottleneck, and I'm curious what you think about that. Before this, I used to run Y Combinator, and the consistent thing you'd hear from startup founders is: I thought the hard part of this was going to be building a product, and the hard part is getting anyone to care, or to use it, or to connect people.
So I think this has always been extremely hard, but now it's gotten so much easier to build that you feel the delta even more. I don't have an easy answer for this. It has always been hard to build a business, to figure out a way to build differentiated value, to get the go-to-market function working. I think all of the old rules still apply there. The fact that AI can make it far easier to create software doesn't mean any of the rest of this gets easier.
Except, I think you're starting to see that in the same way AI has transformed software engineering, people are now using it to automate sales and automate marketing, and there will be some success there. But I think it's always going to be difficult, because even in a world of incredible abundance, human attention remains this very limited thing. And so you're always going to be competing with other people trying to build their own go-to-market muscle and figure out how to get distribution, and every potential customer is busy, and everything else. So, you know, I could tell a version of the future where all of the radical abundance comes true and human attention really is the remaining commodity.
And so I just expect this to be hard, and you've got to come up with creative ideas and build great things. Thanks, Sam. I'm George. I'm a solo developer building on top of the Codex SDK. I'm building a way to orchestrate multiple agents, and I have a question about your agent builder tool and your product vision for where that agent builder tool goes. At the moment, it's just kind of workflows and chaining prompts together. But I'm wondering, as a builder building on the Codex SDK, am I safe? Do you see room for a lot of different types of UIs for multi-agent orchestration, or do you see that as something OpenAI will build itself?
No, I think we don't know what the right interface for all of this is going to be. We don't know how people are going to want to use it. We have seen people build incredible multi-agent setups. We have seen people build just a really great single interactive thread. We're not going to figure this out on our own, and also, not everybody is going to want the same thing. You know, there will be people who are like in one of those old movies with 30 computer screens, looking at this crazy thing here and doing this and, you know, moving stuff around.
And I think there will be people who want to be in a very calm conversation in voice mode, where they're saying something to their computer once per hour. The computer is figuring out a lot of things, and they're trying to think really hard about what they say, but they don't want the constant supervision of tons of agents. Like many other things, people will just have to try different approaches and see what they like, and the world will probably converge on a few, but we won't figure out all of them. I think building tools to help people be productive with extremely capable models is a very good idea.
That's totally missing right now, I think. The overhang of what these models are capable of, relative to what most people can figure out how to get out of them, is huge and growing. And someone is going to build a tool to really help you do that, and no one's gotten it right yet. We will also try our own versions of that, but this is a place where it seems like there's a lot of room, and people are going to have different preferences. If any of you have things you'd like us to build here, let us know and we can try.
Hey Sam, I'm Valerie Chapman, and I'm building Ruth on OpenAI. I would love to know your thoughts on this, because currently women actually leave about a million dollars on the table due to the wage gap, and I'm curious how you think AI can be used to solve economic gaps that have existed for decades. I think the good news, well, there's a lot of complex news, but one of the things that I think is mostly good news, is that AI is going to be massively deflationary. I've gone back and forth on this a little bit, because you can imagine some weird things happening with all of the money in the world going into self-replicating data centers or whatever. But it looks like, on the whole, given certainly the progress with work you can do in front of a computer, but also what looks like it will soon happen with robotics and a bunch of other things,
we're going to have massively deflationary pressure in the economy. And I said mostly good, because there will be some complicated things to navigate through there. But things getting radically cheaper, other than the areas where social or governmental policy prevents that, like building more houses in San Francisco or something, I expect that to be pretty strong and pretty quick. The empowerment of individual people, whether or not society is structured in a way where they've naturally had all of the advantages, looks like it's going to go up and up.
I still find it hard to wrap my head around this: I'd say by the end of this year, for a hundred or a thousand dollars of inference and a good idea, you'll be able to create a piece of software that would have taken teams of people a year to do. It is very hard to wrap, at least my head, around the magnitude of this economic change. And that should be a very empowering thing for people: massively more abundance and access, and a massively decreased cost to create new things, new companies, discover new science, whatever.
I think that should be an equalizing force in society, and a way that people who have not been treated that fairly get a really good shot, as long as we don't screw up the policy around it in a big way, which could happen. I am worried that you can imagine worlds in which AI really concentrates power and wealth, and it feels like one of the main goals of policy needs to be for that not to happen. Hey, my name is Ben Hylak. I'm the CTO of a company called Raindrop.
I'm curious, as you look into the future, how you think about models being specialized versus generalized. One example of this is that GPT-4.5 was the first model that I thought was really good at writing. I remember seeing the outputs and thinking, all right, this is really good writing. There's been a lot of discourse on Twitter and X recently about GPT-5's writing in ChatGPT being a little unwieldy, hard to read. Obviously GPT-5 is a much better agent model: really good tool use, intermediate reasoning, whatever.
So it feels like models are a little bit spiky, or they've gotten even spikier, where some spikes like coding got super high, and it's very unspiky around writing. So I'm just kind of curious how OpenAI thinks about that future. I think we just screwed that up. We will make future versions of GPT-5.x hopefully much better at writing than 4.5 was. We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing.
We have limited bandwidth here, and sometimes we focus on one thing and neglect another. But I believe that the future is mostly going to be about very good general-purpose models. You know, even if you're trying to make a model that's really great at coding, it'd be nice if it writes well too. If you're trying to have it generate a full application for you, you'd like good writing in there. When it's interacting with you, you'd like it to have a thoughtful, incisive personality and communicate clearly. Good writing in the sense of clear thought, not beautiful prose.
So my hope is that we just push to get future models really good on all of these dimensions, and I think we will do that. I think intelligence is a surprisingly fungible thing and we can get really good at all these things in a single model. It does seem like this is a particularly important time to push on, let's call it, coding intelligence, but we will try to excel and catch up on everything else quickly. I'll do a couple from Twitter after this. Go ahead. I'm CTO at this company called Unifi.
To your point earlier, we're doing go-to-market automation. One thing that we think a lot about and spend a lot of time on is always-on AI, this ubiquitous AI. Something that you've said that really resonated with me is intelligence too cheap to meter. The limiting factor for us to run millions, tens of millions, hundreds of millions of agents for our customers is cost. How do you think about small models and dramatic cost reductions for developers over the next months and years? I think we should be able to deliver GPT-5.2-level intelligence by the end of 2027 for... do you want to give a better guess? I can give one; otherwise, anyone want to give a guess? I would say at least 100x less. But there's another dimension which we haven't thought about as much historically. Now, as these model outputs get so complex, more people are pushing us on the speed we can deliver at than on the cost, and we are really good at riding down the cost curve.
You can look at the progress we've made even from the first o1-preview until now. We have not thought as much about how we deliver the same output, maybe at a much higher price, but in one hundredth of the time. And I think for a lot of the things you're talking about, people are going to really want that. We have to figure out how we're going to balance prioritizing those two things. They unfortunately are very different problems, but assuming we push on cost, and assuming that's what you all, the market, want, we can go very far down that curve.
Yeah, let me do a couple of Twitter ones. Current interfaces weren't built with agents in mind, but we're seeing a rise in "apps built for me." What innovations in customizable agent interfaces could further accelerate the trend toward micro apps? Yeah, this is one that I have noticed in my own use of Codex recently: I no longer think of software as this static thing. If I have a little problem, I expect the computer to write some code right away and get it solved for me. And I think this trend is going to go much further.
I suspect that the whole way we use computers and operating systems is going to change. I don't think it'll be like, oh, every time you need to edit a document, a new version of a word processor is going to be written for you right on the spot, because we get very used to our interfaces, and it's very important that that button is in the same place it was last time. But for a lot of other things we do, I think we will find that we expect software to be written just for us. And maybe I want to use the same word processor every time.
But I do have a bunch of repeated quirks in how I use it, and I would like the software to be increasingly customized: a static or slowly evolving piece of software, but written for me, because the way I use it is different than the way you use it. That idea that our tools are constantly evolving and converging just for us, that seems like it's going to happen. And certainly internally at OpenAI, where people have very much adopted Codex for their workflows right now, everybody has their own little custom things and uses things super differently.
So that one seems like it's guaranteed, and I think that's a very good direction to go build in; figuring out what that's going to look like and how people are going to do that seems great. How should builders think about durability when startup features can quickly be replaced by model updates? What is the one layer of the stack you promise not to eat? We talked about this a little bit earlier. It's so tempting to assume that the laws of physics for business have totally changed, and they haven't yet.
They may continue to change over time, but right now what's changed is that you can do work faster and you can create new software much, much faster. But all the other rules of building a successful startup still hold: you've got to figure out a way to get users, to solve the GTM problem, to provide something sticky, to have some sort of moat, network effect, competitive advantage, whatever you want to call it. None of that has changed, and the good news is it hasn't changed for us either.
So there have been many startups that have done things that maybe in a perfect world we would have done sooner, but it was too late and people built up a real, durable advantage, and that will keep happening. The general framework I always give people when they ask some version of this question is: will your company be happy or sad if GPT-6 is a wildly impressive update? And I encourage people, because we continue to hope to make a lot of progress, to try to build things where you are so hoping the model gets wildly better.
And there are so many things to build that way. And then there are the things that are like a little patch around the edge, which can actually still work if you build up enough of an advantage before the models get upgraded, but it is a harder and more stressful path. Let me do one more, then we'll go back to the room. What's a realistic timeline for agents that can run long workflows autonomously without constant human intervention, given that even simple on-chain tasks often break after 5 to 10 steps? Anybody from OpenAI want to give an opinion on that?
Go ahead. I think it really depends on the kind of task. There are a number of tasks today where, just inside OpenAI, we see people who are prompting Codex in a very special way. Maybe they're using the SDK, so it's a custom harness that keeps prompting it to continue, and they basically have it running, you know, forever. So I think this isn't a question of when; it's a question of broadening the horizon. If you have a very specific task that you understand very well, try doing it today.
If you're starting to think, okay, I want to get to the point where I can prompt the model to build a startup, that's a much more open-ended problem with a much harder verification loop. So I would encourage you to figure out how you can break that down into a different problem where an agent can verify itself, and where you can verify its final output at the end, and then over time we can let the agent do more and bigger tasks. Other questions there?
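The pattern described here, a harness that keeps prompting an agent to continue until a task-specific verification passes, can be sketched roughly as follows. Note that `run_agent_step` and `verify` are hypothetical stand-ins, not Codex SDK APIs: a real harness would replace them with actual model/SDK calls and a real check such as running a test suite.

```python
def run_agent_step(task: str, history: list[str]) -> str:
    """Hypothetical stand-in for one agent/model invocation (stubbed)."""
    return f"step {len(history) + 1} of {task!r}"


def verify(history: list[str], required_steps: int) -> bool:
    """Task-specific completion check; stubbed as a simple step count.

    In practice this would be something verifiable, e.g. 'does the
    test suite pass' or 'does the output match the spec'.
    """
    return len(history) >= required_steps


def run_until_verified(task: str, required_steps: int, max_steps: int = 50) -> list[str]:
    """Keep prompting the agent to continue until verification passes,
    with a hard cap so an open-ended task cannot loop forever."""
    history: list[str] = []
    while len(history) < max_steps and not verify(history, required_steps):
        history.append(run_agent_step(task, history))
    return history


if __name__ == "__main__":
    log = run_until_verified("refactor module", required_steps=5)
    print(len(log))
```

The design choice mirrors the advice above: the loop only works when `verify` is something the agent's output can actually be checked against, which is why breaking an open-ended goal into verifiable subproblems matters.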
Thank you. Hi Sam. I want to go back to human attention and this question about GTM. I always think about the other side: human attention is the rate-limiting factor on the consumption side, but on the production side, for all the builders, it's the quality of ideas. And I just wanted to tee this up for you. I spend all my time helping AI companies with their GTM, and a lot of times the products actually just aren't worth people's attention. So I guess, what tools can you build to improve the quality of the ideas people are coming up with?
It's popular to call AI output slop, but there's a lot of human-generated slop in the world too. It's very hard to come up with good new ideas, and I am increasingly a believer that we think at the limits of our tools. I think we should try to build tools that help people come up with good ideas. I believe there are a lot, and I believe, as the cost of creation continues to plummet, we'll be able to have such a tight feedback loop on trying them that we'll find the good ones faster. And as AI can discover new science, in addition to writing very complex code bases, I feel confident there will be a whole new possibility space. But the experience of sitting down in front of an AI, you know, an agentic code writer, and just not being sure what to ask for next is something that a lot of people report.
And I believe we could build tools to help you come up with good ideas. I believe we could look at all your past work and all your past code, try to figure out what might be useful or interesting to you, and just continuously suggest things; we could provide a really great brainstorming partner. There have been three or four people in my life that I have consistently found, every time I hang out with them, I leave with a lot of ideas.
They're people who are just really good at asking questions or giving you seeds to build on. Paul Graham is off the charts amazing at this. If we can build a Paul Graham bot that you can have the same kind of interaction with to help generate new ideas, even if most of them are bad, even if you say absolutely not to 95 out of a hundred of them, I think something like that is going to be a very significant contribution to the amount of good stuff that gets built in the world.
And the models feel like they ought to be capable of that. With 5.2, or a special version of 5.2 we use internally, we're now for the first time hearing from scientists that the scientific progress from these models is no longer trivial. And I just can't believe that a model that can come up with new scientific insights is not also capable, with a different harness and trained a little bit differently, of coming up with new insights about products to build and stuff like that. Howdy. Theo here, dev YouTuber, YC founder, and I also really want the Paul Graham bot.
I want to ask about something a little bit different, though, more on the technical side. I really love when technologies, the building blocks that we use, evolve, and I've been there for some crazy revolutions in the web world, like moving to TypeScript and Tailwind and all these things. One of the fears I have, as the models and the tools we use to build with get better, is that we might get stuck with the way we have things working now, like how the power grid in the US is built a certain way that makes things worse and we can't really change it.
Do you see this potential here? Are we making foundations out of the technologies as they exist right now that are going to be harder to swap in the future? Because even trying to get the current models to use an update to a technology that happened two years ago can feel like you're pulling teeth. Do you think we'll be able to steer the models enough to get them to use new things, or are we just done improving the technologies we build on now? I think we really will be very good at getting the models to use new things.
Fundamentally, if we're using these models correctly, they're like a general-purpose reasoning engine. The way we have things architected right now, they also have a huge amount of world knowledge built into them. But I think we are moving in the right direction, and I hope that updating, using new things, and learning new skills even faster than humans do is, you know, a next-couple-of-years thing. A milestone that we will be very proud of is when the model can be presented with something totally new: new environment, new tools, new technology, whatever.
And you can explain it once, or, you know, the model can explore it once, and then it just super reliably uses that and gets it right. And that doesn't feel very far away. Sorry, I had a question that I think you've touched on already, but as a scientist, and kind of on the older end: when you do a scientific project, it tends to generate multiple ideas for further research. So the work is exponentially growing, while a given scientist only has, you know, a linearly decreasing amount of time to carry it out.
So it's incredible how these tools are speeding it up, but we're all greedy for more. And you touched on this earlier, but do you think, in addition to helping us pursue all those interesting ideas in a shorter time, that there will be a transition where the models will take over the whole research enterprise? And if so, do you see that as coming from the existing algorithms, or requiring new ideas, or world models, or that sort of thing? I think it's still a long, or reasonably long, way away from the models doing truly completely closed-loop autonomous research in most areas.
We could look at something like mathematics and say, okay, that one doesn't need a wet lab or physical input at all. You know, maybe you can just think really hard and make a ton of progress if you keep continually updating the model, eventually even there. For now, the mathematicians who are making the most progress with the models are very heavily involved in looking at intermediate progress and saying, nah, this just doesn't feel right, I have an intuition that there's something different on this other path. But I've gotten to meet a few mathematicians who now say their entire day is collaborating with the latest models. They're making rapid progress, but they do something very different than the model.
To be honest, it feels to me like that period in chess history when Deep Blue beat Kasparov. Then there was this period of time where, okay, AI is better than humans, but a human plus an AI, where the human is choosing the best of 10 moves from the AI, is better than that. And then very quickly after that, the AI was again better and the human was just making it worse. I sort of suspect, for many kinds of research, something like that should happen over time, and things are going to get so complex that the AI can understand multi-step things better than most people can, or all people can.
But there seems to be something about creativity, intuition, and judgment that we are not close to with the current generation of models. I can't come up with any principled reason why we won't get there, so I assume we will. But today, I don't think just saying, hey GPT-5.x, or GPT-6, go solve math, is going to outperform a few very good people doing math with it and saying, okay, this is a good direction; even though we can verify, even though we can say, hey, you did a great proof, put that back into the training set, there's something else happening. In terms of the workflows that are working, though, you touched on something, which is that you solve one problem and it opens up many new problems. That's where it's been very cool to talk to the scientists who are really using this aggressively.
I mean, they burn a lot of GPUs in the process, but I think there is a new skill of being able to say, here are the 20 new problems, and I'm going to do a breadth-first search on them. I'm not going to go deep on any one, and I'm going to use the AI to be, as someone described it, unlimited grad students. I actually recently upgraded them to unlimited postdocs. In terms of automating physical science, we go back and forth a lot about whether we should be building automated wet labs for every field.
We're open to doing that, or the world as a whole will figure out great experiments, since it has a lot of equipment, and will happily contribute data back in. It sort of seems like, just watching the scientific community embrace 5.2 and how much they've been willing to help, that that might work, and it would clearly be an easier, better, more distributed world, with more smart people and more different kinds of equipment. Hi Sam, my name is Emmy. I'm a Stanford student and I run a biosecurity startup. To your conversation about scientific experiments, and, for example, cloud labs and where all this is going: my team spends a lot of time thinking about how we can prevent the harms of AI-enabled biological redesign, but also how we can use AI to uplift security infrastructure.
So I guess my question is, where does security fall in this 2026 roadmap, and, um, broadly, how do you think about some of these issues? Security broadly, or biosecurity specifically? Um, either, preferably biosecurity. Um, there are many ways AI can go wrong in 2026. Certainly one of them that we are quite nervous about is bio. Uh, the models are quite good at bio, and right now most of our strategy, and by "our" I mean not just OpenAI's but the world's, is to try to restrict who gets access to them and, you know, put a bunch of classifiers in to not help people make novel pathogens. Um, I don't think that's going to work for much longer, and the shift that I think the world needs to make for AI security generally, and AI biosecurity in particular, is to move from one of blocking to one of resilience.
Um, my co-founder Wojciech uses this analogy I really like about fire safety. Fire did all these wonderful things for society. Then it started burning down cities. Uh, we tried to do all of these things to restrict fire. I actually just learned this weekend that "curfew" comes from when you weren't allowed to have fires anymore because they were burning down cities. And then we got better at resilience to fire, and we came up with fire codes and flame-resistant materials and a bunch of other things. Um, and now we're pretty good at that as a society.
Uh, I think we need to think about AI the same way. AI is going to be a real problem uh for bioterrorism. Uh, AI is going to be a real problem for cybersecurity. AI is also a solution to those things. It's a solution to a lot of other problems as well. And I think we need like a society-wide effort to sort of provide the infrastructure for this resilience, not labs that we trust to always block what they're supposed to block. And, you know, there will be many good models in the world.
Um, we've been talking to a lot of bio researchers and companies about what it takes to be able to deal with novel pathogens. I think there are a lot of people interested in the problem, and a lot of people reporting that AI actually seems helpful at this, but it won't be an entirely technological solution. You will need the world to think about these things uh differently than we have been. So I am very nervous about where things are, but I don't see a path other than the sort of resilience-based approach. And it does seem like AI can really help us do that fast.
But um, if something goes really wrong, like visibly really wrong, for AI uh this year, I think bio would be a reasonable bet for what that could be. And then as we get into next year and the following year, you can imagine lots of other things going really wrong, too. Hi, uh, my name is Magna. Uh, my question is kind of related to human collaboration, in the sense that when we're talking about AI models improving and stuff like that, they become really good, I think, for learning topics or learning subjects really quickly on your own.
And that's something that we've explored in the ChatGPT and education lab, which I really valued and appreciate a lot. But one thing that I kind of often find myself coming back to is the role of other humans and human collaboration, where it's just like, if you can get an answer at your fingertips, then why would you necessarily take the time, and maybe even the friction, to ask another human about it? Um, and so that's something I've been thinking about a lot, also related to, I think, something that you mentioned before in this session, about all these AI coding tools that can perform the work of human teams at a much faster pace.
So when I'm thinking about cooperation, collaboration, and kind of collective intelligence output, I know that human plus AI is a very powerful kind of avenue for that. But what about humans plus humans plus AI? Um, yeah, if that makes sense. Totally. Many things in there. I'm much older than most of you, but I was still in middle school when Google came out. And uh, the teachers tried to make the kids promise not to use it, because there was this feeling that if you could look up anything at your fingertips, then why come to history class?
Why memorize anything, you know? Uh, and it seemed to me totally insane. And I was like, actually, I'm gonna be way smarter, and I'm gonna learn way more, I'm gonna be able to do way more, and this is the tool that I'm gonna live with as an adult. And it would be crazy to make me learn something that assumed it didn't exist. It felt like it would be, you know, many decades earlier, if you made me learn to use an abacus even though you knew there were calculators, because that was just an important thing to learn. Or a slide rule.
I don't even know what came before the calculator at this point, but it was not a valuable skill to learn. Um, and I feel the same way about AI tools. Uh, I understand that in the current way we teach kids, um, AI tools are a problem. But I think that suggests that we need to change the way we teach people, not that we don't want, you know, the fact that you can have ChatGPT write something for you. Uh, that's the way the world's going to be. You still need to learn to think, and learning to write, or the practice of writing, is very important to learning how to think.
But probably the way we should teach you to think and the way we should evaluate your thinking ability have changed, and we shouldn't pretend otherwise. Uh, so I feel totally like this is going to be fine. Um, you know, the 10% of learners that are like extreme autodidacts are already doing amazing. We will figure out new ways to teach the curriculum and bring the other students along. And then there's another thing you said, which is, how do we make this be a collaborative thing, and not just you learning and performing and doing amazing things, but by yourself with your computer?
Um, we haven't seen evidence of this yet, and it is something that we try to measure. Uh, I suspect that human connection is going to be more valuable in a world of lots of AI, not less, and that people are going to value getting together with other people and working with other people more. Um, but we have started to see people explore interfaces to make that easier. And um, as we think about making our own hardware, our own devices, we have thought a lot about, maybe we've even thought first about, what a collaborative, sort of multiplayer-plus-an-AI experience looks like.
And my sense is that although no one has cracked it quite yet, we will be surprised at how much this is enabled by AI in a way that no other technology has enabled. So you can have, you know, five people sitting around at the table and a little, you know, kind of robot or something also there, and you will be able to be way more productive as a group, and you'll just be used to this all the time. Like, every group brainstorm, every time you try to solve a problem, there'll just be an AI as part of it, and it'll help the group do better.
Super. As a reminder, for any requests: if you tell us something, we'll probably just build it. [snorts] Oops. Thanks. Um, I'm curious about, as agents start moving into and operating more production systems, especially at scale, what would you think is the most underestimated failure mode: security, cost, reliability? And, related to that, where do you think the really hard work is being underinvested in right now? Tons of problems everywhere. You mentioned uh one of the things that surprised me personally, um, and I think has surprised many of us here in two ways: when I first started using Codex, I said, look, I don't know how this is going to go, but for sure I'm not going to give this thing complete unsupervised access to my computer. I was so confident in that, and I lasted about 2 hours, and then I was like, you know what, it seems very reasonable; the agent seems to really do reasonable things.
I hate having to approve these commands every time. I'm just going to turn it on for a little bit and see what happens. And I never turned full access off. And I think other people have had a similar thing. So the general worry I have is that the power and convenience of these are so high, and the failures, when they happen, are maybe catastrophic, but the rates are so low, that we are going to kind of slide into this, like, you know what, yolo, and hopefully it'll be okay.
And then, as the models get more capable uh and harder to understand everything they're doing, if there's a misalignment in the model, if there's some sort of complex problem that emerges over weeks or months of usage, you kind of put some security vulnerability into something you're making. And, you know, you can have various opinions on this curve of how crazy sci-fi you want to get with the AI being misaligned or whatever. But I think what's going to happen is the pressure to adopt these tools, to use them (not just the pressure, but the delight and the power of them), is going to be so great that people get pulled along into not thinking enough about the complexity of how they're running these things, how sure they are about, you know, whatever sandbox they've set up.
And the general worry I have is that capability is going to rise very steeply, we're going to get used to how the models work at a certain level and decide we trust them, and without building very good, I'll call it big-picture, security infrastructure around it, we will sleepwalk into something. I think that would be a great kind of company to build. Hi, I just wanted to go back to the conversation about education. My name is Claire. I'm a sophomore at Berkeley studying cognitive science and design, and I was in high school when I saw my peers using ChatGPT to generate essays and homework and stuff like that.
And now I'm in college, and we're having these discussions about AI policy and coursework, in CS and humanities and everywhere. And I wanted to go back to this idea of kindergarten and middle school, and what it might look like for AI to be in classrooms during those periods when, you know, you figure out how to problem-solve and how to write and how to think. And, you know, as someone who is a father now, how do you foresee education um changing and being shaped by AI during those really formative years?
Uh, I mean, generally speaking, I'm a fan of keeping computers out of kindergarten. Uh, I think kindergarteners should be running around outside and playing with physical things and trying to learn how to interact with each other. Uh, so not only would I not have AI in most kindergartens, [clears throat] most of the time I wouldn't put computers in either. Um, I think developmentally we still don't understand all of the impacts of technology. Uh, you know, there's been a lot written about the impact of social media on teenagers, and that seems like it's been pretty bad, but I have a sense that, unfortunately, a bunch of technology's impact on young children has been even much worse and is still talked about relatively little. And I think, until we understand that better,
probably we don't need kindergarteners using a ton of AI. Hi, my name is Alan, and um, I'm in biopharma. So uh, gen AI has been really helpful for clinical trial document writing; it's accelerated a lot of things, been amazing. Uh, we're also trying to use it for drug design, particularly for compounds, and one of the things we run into is uh 3D reasoning. I was wondering if there's going to be an inflection point, or if there's something you see down the line on that. We're going to get that solved. I don't know if it's a 2026 thing.
Um, but that is a super common request, and I think we know how to do it. We just have a lot of other urgent areas to push on, but we will get there. Thank you. Hi, Sam. I'm Dan. Um, I just dropped out of university in London to join the W26 Y Combinator batch, and I've got two quick questions. First one is, my parents are still kind of pressuring me to finish university; do you think university, in its current state, could be limiting sometimes? And second is, uh, do you angel invest? I dropped out of university, and it took my parents 10 years to stop asking when I was going to go back.
Uh, so I think parents are just going to do that, and they love you, and they're trying to give you the advice they think is best, and you just sort of keep explaining to them that you can always go back if you want, but the world is in a different place now and going to keep being in a different place. I mean, everybody's got to make their own decision, but I think you do need to make your own decision and not just do what society tells you to do. Uh, and personally, I think this is a time where, if you are an AI builder, it is probably not the best use of your time to be in university right now.
If you're just a sort of ambitious, high-agency, driven person, this is an unusual period of time. And, you know, you can always go back later, and I think you just tell your parents that. It doesn't mean that it's not the right thing for many people, uh, it doesn't mean that it won't be the right thing for you sometime, but right now you've got to do this thing, and I think they'll understand eventually. Um, and then on the second thing, I respect the hustle, but not anymore. I miss it.
I just got really busy with OpenAI, and it gets kind of strange, because if I end up investing in companies that are big OpenAI customers... I decided it's easier not to. Hey Sam. Um, I'm Michael from WorkOS. We do a lot of stuff with authentication and identity and signing in. So I have a feature request for you, which is: sign in with my ChatGPT account. I think a lot of people would like that. We are going to do that. People ask me for it all the time. What do you need?
Do you want people to be able to bring their own token budget, or do you want them to bring their ChatGPT memories, or all of it? Yeah, so that's my question. I mean, definitely token budget; people should be able to bring their accounts and, you know, what models they have access to. But I think there's all this other stuff too, like, you know, what MCP servers does my company have access to, or what memory does ChatGPT have about me, or what projects am I working on? I'm curious how you're thinking about that, like, ChatGPT knows so much about me from a work perspective but also a very personal perspective.
Yeah. And how developers can leverage that. So, we do want to figure out how to do this. Uh, it's very scary, because ChatGPT does know so much about you. Uh, and, you know, if you tell a person that you're very close to a bunch of secrets, you can be relatively confident they'll know the exact social nuances of when they share what with whom, and when something overrules something else. Um, our models are not quite there, although they're getting pretty good at it. I would, I think, feel uncomfortable if I connected my ChatGPT account to a bunch of sites and said, "Just use your judgment about when to share what you know about me from all of my chat history and everything I've connected." Um, but when we can get there, it will clearly be a cool thing to offer.
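The distinction being drawn here, that sharing a token budget is easy while sharing memories needs judgment and explicit consent, could be sketched as a scoped grant. Everything below is hypothetical; none of these class or field names correspond to a real OpenAI API.

```python
from dataclasses import dataclass

@dataclass
class TokenGrant:
    """A scoped grant a third-party app might receive after a
    hypothetical 'Sign in with ChatGPT' flow."""
    app_id: str
    scopes: set[str]       # e.g. {"tokens"}, deliberately NOT {"memories"}
    budget_remaining: int  # tokens the user's plan lets this app spend

    def spend(self, tokens: int) -> bool:
        """Draw down the user's own token budget for this app."""
        if "tokens" not in self.scopes or tokens > self.budget_remaining:
            return False
        self.budget_remaining -= tokens
        return True

    def read_memories(self) -> list[str]:
        """Memory sharing stays opt-in: without the scope, nothing leaks."""
        if "memories" not in self.scopes:
            raise PermissionError("user did not grant memory access")
        return []
```

The point of the sketch is that the token-budget part works today with ordinary scope checks, while the memory part is the piece Altman says the models' judgment isn't ready for yet.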
And in the meantime, I think doing something just with, you know, token budgets (if I pay for the pro model, then I can use it on other services), that seems like a cool thing to do. Uh, so I think we will at least do that, and we'll try to figure out a way to get the information sharing right, but we really don't want to screw that up. Hey, Sam. Uh, my name is Oleg, and um, I guess we all agree here that uh software development as a craft has changed dramatically recently, but at the same time, uh, LinkedIn still has OpenAI job openings for uh software developers, and I'm curious: how have the interviews changed in the past months or years?
We're going to keep hiring software developers, but, for the first time (and I know every other company and every other startup is thinking about this too), we are planning to dramatically slow down how quickly we grow, um, because we think we'll be able to do so much more with fewer people. And I think at this point a lot of the impediments that we face, or that other companies face, is just that the internal policies that have built up at most companies did not contemplate a majority of AI co-workers. Uh, and that's going to take a while.
But what I think we shouldn't do, and what I hope other companies won't do either, is hire super aggressively, then realize all of a sudden that AI can do a lot of stuff and you need fewer people, and have to have some sort of very uncomfortable conversation. Um, so I think the right approach for us will be to hire more slowly but keep hiring. I'm not a believer that, well, maybe someday far in the future OpenAI has like zero employees, but for a long time I think we'll have a gradually increasing number of people doing much more stuff, and this is kind of what I expect the shape of the economy to look like more generally. In terms of what the interview looks like, it has not yet changed as much as it should, but I was in a meeting today with people talking about how we want it to change.
We basically would like to sit you down with something that would have been impossible for one person to do in two weeks, uh, you know, this time last year, and watch you do it in 10 minutes or 20 minutes or whatever. Um, I think that's the high-order bit: you want to see in an interview that people are going to be able to work in this new way very effectively. Um, I think software engineering interviews have been bad for a long time, uh, and maybe not that relevant, but now they're even less relevant.
Uh, so that's one thing. There's a more general thing that a few of these questions have hinted at, which is: is the future going to be, you know, companies don't hire many people and have a lot of AI co-workers, or some version of that; or is it going to be that the companies that win in the future are entirely AI, you know, like a rack full of GPUs and no people? I really hope it's the former. Um, there are a bunch of reasons why it seems like it could be something closer to the latter.
But if companies don't adopt AI aggressively, if companies don't figure out how to hire people that are going to use the tools really effectively, they will eventually just be outcompeted by a fully AI company that doesn't have the sort of silly policies that prevent big companies from using AI, or whatever. And that feels like it'll be a very destabilizing thing for society. So we've been trying to figure out how to talk about this, because it sounds self-serving for us to say, but I think it's very important that companies adopt AI in a big way very quickly.
Hi Sam, I'm Cole. I'm a creator and cinematographer. Um, I think we've seen, especially in the past year, that it has completely changed the way we tell stories and, as a result, the way we view ourselves. Um, there have been so many interesting plays in the creative space; like, for Sora, um, it was such an interesting use of the self as a canvas, allowing you to use AI and put yourself in all of these fantastical scenarios. Um, I'm really curious where you see this relationship between human creative identity and AI-assisted creation going, especially as these models continue to advance.
The place that we can study, and I think learn the most from right now, is uh image gen. It's been around the longest. The creative community has used it and disliked it and liked it the most. Um, there are many interesting observations there, but one of them is that uh consumers of images report dramatically higher um appreciation, satisfaction, whatever, if they are told a person made it versus an AI. Um, and I think this is going to be a deep trend of the coming decades: we care a lot about other people, and we care very little about the machines.
Um, of all of the slurs for AI, "clanker" is my favorite one. I think it just so evokes people's emotional reaction. You can see these incredible, beautiful, creative, to me at least, uh, you know, clanker-made images, and as soon as you're told that, many people's subjective appreciation goes way down. Um, there was a thing I saw on the internet last year where they would go to people who said they really hated AI-generated art, like still images. And the people would also say, and I can tell for sure which the AI-generated images are, because they're terrible.
And they'd show them 10 images and say, "Rank your favorite ones." Half would be done entirely by a human, half entirely by AI. And, fairly consistently, they would rank the AI ones at the top. And then, as soon as they were told, they would say, actually, I don't like it, and, you know, this is not the one I want. And that is kind of the test: what you like. When I finish reading a book that I love, the first thing I want to do is look up the author and understand their life and, you know, kind of how it led them to do that, because I felt this connection to this person that I don't know, and now I want to understand them.
And I think if I read a great novel, and at the end I learned it was written by an AI, I would sort of be kind of sad and crestfallen. Um, and I think this is going to be a deep and durable trend. However, if the art is even a little bit human-directed (and how little, maybe we'll have to figure out how people feel over time), people don't seem to have that same strong emotional reaction. And versions of this have been going on for a long time. You know, if digital artists used Photoshop, people still love their art.
So, my expectation, given the behavior that we're seeing now from creators and consumers, is that the person and their life story and their editing or curation or whatever goes into that process is going to matter a lot, and we're not going to want the entirely AI-generated art, broadly speaking, at least from what we can learn from images. We have time for two more questions. Hey Sam, my name is Keith Curry, a recent graduate of San Francisco State. And my question revolves a bit around personalization and memories. So, first part is, kind of, how do you see that evolving over time?
And then also, what are your thoughts on more granularity, like, for example, grouping memories? So, like, this is my work identity, this is my personal identity, so that way, as you're doing different prompts, you can be more selective about what you want included. Yeah. So, we're going to push super hard on memory and personalization. Clearly, people want it, and it delivers a way better way to use these tools. Um, I have gone through my own evolution here, but at this point, I am ready for ChatGPT to just look at my whole computer and my whole internet and just know everything.
The value from it is so high, I don't feel uncomfortable about it in the way that I used to. I really hope, you know, all AI companies take security and privacy super seriously, and I hope that society as a whole does too, because the utility is so great. Like, AI is going to know about my whole life; I'm not going to get in the way of that. I don't yet feel ready to wear the glasses recording everything. I think that's still uncomfortable for a bunch of reasons, but I do feel ready to say, hey, you can just have access to my computer.
Um, and figure out what's going on and be useful to me and understand everything and have a perfect representation of my digital life. Uh, I am lazy. I think most users are lazy too, though, so, like, a reasonable representation. And I don't want to sit there and have to group: this is a work memory, this is a personal memory. What I want, and what I believe is possible (we talked about this a little bit earlier), is for AI to have such a deep understanding of the complex rules and interactions and sort of hierarchy of my life that it knows what to use when and what to expose where.
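The "knows what to use when and what to expose where" behavior described above could look something like automatic scope inference instead of manual grouping. This is a toy sketch; the keyword heuristic is a stand-in for the model judgment being described, and all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    scope: str  # "work" or "personal", inferred rather than user-assigned

# Toy keyword hints; a real system would use model judgment, not a word list.
WORK_HINTS = {"meeting", "deadline", "sprint", "invoice"}

def infer_scope(text: str) -> str:
    """Decide which part of the user's life a piece of text belongs to."""
    return "work" if WORK_HINTS & set(text.lower().split()) else "personal"

def recall(store: list[Memory], context: str) -> list[str]:
    """Expose only the memories whose scope matches the current context,
    so personal memories never surface in a work prompt and vice versa."""
    scope = infer_scope(context)
    return [m.text for m in store if m.scope == scope]
```

The design point is that the user never labels anything: both the stored memories and the incoming context are scoped automatically, which is the laziness-friendly behavior Altman says most users will want.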
We'd better figure that out, because I think that's what most users will want too. Hi Sam, my name is Luan. I'm an international school uh student from Vietnam, and my question is, what do you think is the most important skill that people should learn in the age of AI? These are all kind of soft skills; none of them is as concrete as "learn to program," which was so obviously the right thing, you know, over the recent period of time, and now it's not. But skills like: become high-agency, get good at generating ideas, be very resilient, be very adaptable to a rapidly changing world.
I think these are going to matter more than any specific skill, and I think these are all learnable. Um, this is one of the surprises to me of having been a startup investor: the degree to which you can take people and, in a three-month, sort of boot-camp-style thing, make them extremely formidable on all those axes I was just talking about is very surprising. Uh, it was a big update for me, and so I think these are the skills that may matter the most, and they're quite learnable.
Are we out of time? Okay. Um, thank you all very much for coming and talking. We really do want input on what you'd like us to build. Like, assume we will have a model that is 100 times more capable than the current model, with 100x the context length, 100x the speed, 100x reduced cost, perfect tool calling, extreme coherence; like, we're going to get there. Uh, tell us what you'd like us to build. We're going to hang around, and, uh, you know, if you're like, hey, I just need this API, or I just need this kind of primitive, or I just need this sort of runtime, or whatever it is, um, we're building it for you, and we'd like to get it right.
But thank you all for coming. [applause]