We all fell for it…

Theo - t3.gg | 00:56:43 | May 11, 2026
Open discussion of how AI is changing coding throughput and the shift from manual coding to AI-assisted generation.

AI coding tools boost productivity but threaten skill erosion; the balance lies in deliberate orchestration, ongoing learning, and keeping humans in the loop.

Summary

Theo reviews Lars Fay's provocative take on AI-coded futures, weighing the thrill of rapid gains against the creeping cognitive debt. He highlights Lars' core concerns: developers chasing the 'slot machine' of AI outputs may lose deep understanding, architectural sense, and debugging chops. The discussion covers the cost dynamics of model tokens, vendor lock-in, and the paradox of needing supervision to reap AI benefits. Real-world anecdotes—from debugging outages with AI agents to earlier Twitch and React experiences—anchor the debate in tangible trade-offs between speed and quality. Theo also reflects on his own workflow: he still loves fundamentals and uses AI for planning and orchestration, but stresses that the most valuable work remains code that runs thousands of times and the critical thinking that guides architecture. The episode leans into not letting cognitive debt eclipse capability, urging viewers to structure their workflows so AI handles repetitive or exploratory work while humans keep strategic oversight and learning alive. A call to action closes: be intentional with AI, don't outsource thinking, and use agents to augment—not replace—your craft.

Key Takeaways

  • AI can dramatically reduce manual coding time and increase throughput, but this shift risks cognitive debt if developers stop engaging with underlying systems.
  • Cost per token may rise with new models, yet intelligence per dollar can still improve; GPT-5.5 outscored earlier generations while equivalent-intelligence tiers got cheaper.
  • Vendor lock-in is less about one provider failing and more about complacent teams; diversify tools and maintain portability across models and providers.
  • Effective use of AI in debugging demonstrates that humans must understand the system; AI assists but doesn’t replace the need for deep domain knowledge.
  • Planning in code and iterative prototyping (plan-first vs. code-first) can reduce misalignment between specs and implementation when paired with AI.
  • Cognitive and technical debt rise when the community treats AI as a shortcut for hard problems; ongoing learning and deliberate practice remain vital.
  • Senior engineers’ value shifts from pure code-writing to orchestration, architecture, and mentoring—AI amplifies those roles but won’t erase the need for experience.

Who Is This For?

Essential viewing for senior developers, engineering managers, and AI-tool adopters who want to understand how to balance productivity gains with skill retention and architectural integrity in an AI-assisted workflow.

Notable Quotes

"AI code is here and it's here to stay. Who hasn't seen the productivity gains? We're talking CEOs of companies doing 37,000 lines of code every single day."
Theo opening line framing the hype around AI-powered coding and productivity.
"Not like we're going to lose all our skills as we rely on the slot machine more and more, and as a result, the slot machine becomes a slot machine and nothing ever works how it's supposed to."
Critique of over-reliance on AI outputs and loss of skills.
"Cognitive debt is a much much more interesting idea."
Introduction of Lars Fay's concept of cognitive debt as a lens for evaluating AI in coding.
"The problem is people leaning on AI because they don't understand the system and their refusal to learn the system because it makes them feel dumb results in them only getting anything done with AI."
Emphasis on preserving system understanding and learning alongside AI usage.
"Coding equals planning. Writing code is often less important than planning the right architecture and approach, especially with AI in the loop."
Theo and Lars converge on the idea that planning and architecture matter as much as, if not more than, raw code.

Questions This Video Answers

  • How can I balance AI-assisted coding with maintaining deep system understanding?
  • What is cognitive debt in software development and how can teams mitigate it?
  • Should I fear vendor lock-in when adopting AI coding agents, and how can I diversify tools?
  • What are practical strategies to use AI for planning while still doing hands-on implementation?
  • How does GPT-5.5 compare to earlier models in terms of cost per intelligence for coding tasks?
AI coding, cognitive debt, Lars Fay, AI tooling, vendor lock-in, code orchestration, GPT-5.5, agentic coding, debugging with AI, T3 Code
Full Transcript
AI code is here and it's here to stay. Who hasn't seen the productivity gains? We're talking CEOs of companies doing 37,000 lines of code every single day. I mean, what else is there to our lives as developers? If we can put out that much code, why would we do anything else? This totally isn't going to come back to bite our asses, right? It's not like we're going to lose all our skills as we rely on the slot machine more and more, and as a result, the slot machine becomes a slot machine and nothing ever works how it's supposed to. This definitely isn't currently happening and people definitely aren't starting to notice, right? Oh, "Agentic coding is a trap." This sounds like it's going to be a very, very fun article.

I have a lot of thoughts here, as you guys can guess. On one hand, I love coding. I love building things. I care about the nitty-gritty of syntax. I've built entire stacks around my specific tech preferences that went so viral that they helped build my entire career on platforms like this. On the other hand, I very much enjoy these AI coding tools. So much so that they've fundamentally changed how I think about tech and the things that I build. And I've never spent less time actually writing code. In fact, if I open up my code editor right now, I'm not even looking at code in it. I'm looking at a prompt file and then a bunch of CSVs for the data for all of the runs that I'm currently doing with agents. I'm barely looking at code anymore. There are things I'm doing more of, which is cool. Like, I've never gotten so good at git in my life. I've never used SSH and remote boxes as much as I do now. I'm trying out all sorts of new things. I am learning all sorts of new technologies, but I also feel some of my skills atrophying, and I find myself, when something doesn't work, just rolling the slots again and again, hoping that this time might be the one where I get the output I'm looking for. Pulling that lever costs a lot of money. You never know what you're going to get, but at the very least, you know what you're going to get in our sponsor break: a really cool company like this one.

Are you the type that pays attention to what tool calls your agents make? Cuz I am. And I've noticed that there's one thing they tend to do a lot: curl. The frequency at which I see my agents making curl calls to get information about web pages, documentation, and more is honestly kind of funny, especially because of the frequency at which it fails. Because it turns out a lot of the web doesn't respond with the real contents when you hit it via curl. Whether it's an empty HTML page that requires JavaScript to populate the content, or something that's hidden behind a captcha or a firewall or some auth layer of some form, there's a good chance your agents are going to struggle if they're relying on doing curl exclusively. It'd be really nice if you could just wrap your curl request in some way where it won't fail on those... uh oh, it's on the screen already, isn't it? Yeah. Today's sponsor is Browserbase, and they just added a new fetch feature that makes it easier than ever to get the real content from any website. If you want it as HTML, JSON, or even markdown, they'll handle it all for you. Instead of curling the site you want directly, you make a POST request to the api.browserbase.com/fetch endpoint. You pass it the body with the URL you want, and then all it needs is an API key, and it will now be able to scan that site reliably for you and get you what you want.
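For reference, the flow being described looks roughly like this. This is a hedged sketch, not Browserbase's documented API: the endpoint path (garbled in the audio), header name, and body fields here are assumptions, so check their docs before relying on any of it.

```ts
// Sketch of the "fetch instead of curl" flow described above. The
// endpoint path, "x-api-key" header, and body fields are assumptions,
// not confirmed Browserbase API details.
async function fetchPage(url: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.browserbase.com/fetch", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey, // an API key is all it needs
    },
    // Ask for the rendered content in the format you want: html | json | markdown
    body: JSON.stringify({ url, format: "markdown" }),
  });
  if (!res.ok) throw new Error(`fetch endpoint returned ${res.status}`);
  // Unlike raw curl, this comes back with the real, JS-rendered content.
  return res.text();
}
```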
On the pages that require JavaScript to run, they have you handled, too, because Browserbase can spin up real browsers in the cloud. That's why they exist. That's how they got their whole name. They provide real browsers for your agents to do real things on the web. Whether it's clicking buttons, signing in, getting information, running JavaScript, and more, they have you covered. Going to be real with you guys though, there were some use cases a cloud browser wasn't great for, like search. Oh, is it on the screen again? I need to stop doing that. The search stuff is really cool, too. Adding a real way for your agents to do search and get the context they need to unblock themselves is incredible, and that's why so many companies rely on them for it. So if your agents need better searching, better fetching, better curling, or just better access to the web through a real browser, look no further than our sponsor, Browserbase.

Back to where we were. "Agentic coding is a trap," written by Lars Fay. I'm actually really excited about this one. Multiple people who I trust sent this to me to take a look at. I haven't read it yet, so we get to go through this together, and hopefully we'll have interesting things to add. "Remaining vigilant about cognitive debt and atrophy." Interesting that the angle here is cognitive debt and not technical debt. I already have a feeling I'm going to like this a lot, because I've personally found tech debt is actually one of the coolest things to deal with with AI. It's actually kind of amazing how willing models are to go after tedious tasks like making a new lint rule work across your whole codebase, or updating a package that broke the definitions for a thing that occurs in 5,000 places. I have done crazy work for tech debt in my life that just required so much brute force and manpower it wasn't worth it. Like, I still vividly remember a time at Twitch where we were doing a migration for our GraphQL layer that broke, I think it was, 8,000 or so TypeScript files. So we took all the ones that were now broken, put them in a Google Sheet, and would assign chunks to different engineers to go through and manually update on the shared branch to try and get everything up to date. Those are the types of things we used to do by hand that often would get left, and the tech debt would just sit there, because the process of fixing it was too much manpower to be worth it. AI is phenomenal at those things. So whenever I see the "AI causes tech debt" argument, I get frustrated, because it also solves a lot of the same tech debt that it can potentially cause. But cognitive debt is a much, much more interesting idea. So let's see what Lars has to say.

Starts with a quote: "AI does the coding and the human in the loop is the orchestrator. This is the sentiment being hyped up around the industry currently. Traditional coding is all but dead. Spec-driven development is the future." Okay, I really think we're going to agree if he's talking [ __ ] on SDD already and we barely started. Oh boy. "You generate a plan and disconnect from writing any code. The agents know better and handle all the implementation. You are there as the expert to provide good taste, review the outputs, and constantly steer the agents to execute the plan that you meticulously put together." How meticulous are you guys writing these plans out to be? Cuz I don't believe you. I don't believe any of you are actually putting that much effort into the plans if you're not watching the agent when it runs after.
Yeah, there's another angle I could talk about with the expert thing here. I'm not going to go into it yet cuz I want to read a bit more first, but I already have so many thoughts. "The workflow takes many shapes at this point, but in general, it's a process where someone defines the project's requirements simultaneously at a micro and a macro level." Very, very good callout here. "They generate a plan, and then they pull the slot machine lever over and over, iterating and reiterating, often with multiple agent instances, until it's done. All the while putting a growing distance between the orchestrator and the code that is actually being generated and committed. Coding agents are helpful and powerful, but there's already some quantifiable trade-offs that need to be discussed. First, we have an increase in the complexity of the surrounding system to mitigate the increased ambiguity of AI's non-determinism." Very real problem. "Second, we have the atrophying of skills for a wide swath of the population." I'm definitely starting to see this. "Third, we have vendor lock-in for individuals and entire teams. Claude Code outages have already had entire teams at standstills." Yep. "And fluctuating and increasing costs to access the tools. An employee's cost is fixed. Tokens are a constantly moving target."

Yeah, I do think that the demand going up is causing our brains to fall out a bit as the costs increase. I don't think long-term the cost for agentic work is going to keep going up. The same way, I think people are going to do more with AI and use more tokens, but the cost of intelligence at a given level is consistently going down. Flashbang warning. I talk about this a lot with the new GPT models, because they are more expensive per token. So if you're measuring a given input and output that are the same work between the old and new models, 5.5 is 2x more expensive. It's double the cost for input and output tokens. But if you compare per level of intelligence and the number of tokens generated, things get much more interesting.

GPT-5.5 is the highest-scoring model on Artificial Analysis. It is at 60 points, whereas the previous generation, with Opus 4.7, Gemini 3.1 Pro, and GPT-5.4, was only at 57 points. Meaningful improvement, but I'm going to add something to this: GPT-5.5 low as well as GPT-5.5 medium. And here's what's super interesting: 5.5 medium scores the same as 5.4 xhigh. And 5.5 low still scores very well, too, comparable to Claude Sonnet 4.6. But when we go back down to the cost efficiency, this is the cost that Artificial Analysis paid, or would have paid if they weren't comped, to run their entire benchmark suite. Opus 4.7 is the most expensive by far at 5 grand, even though it's less smart than 5.5. But remember, 5.5 medium and 5.4 xhigh are the same level of intelligence. 5.4 xhigh cost $2,800 to run this bench. 5.5 medium cost under $1,200. So if that level of intelligence was enough for you, your cost just got cut by more than half. You're now at less than 50% of the cost to do those same tasks as you were at before. That is a very, very meaningful change. Sure, the best model in the world got slightly more expensive, going from $2,800 to $3,300, but that same tier of intelligence from before, the one that came out 2 months ago that we were totally happy with, is now less than half the price. And if you're okay with going a little bit dumber, not even much dumber, just like Sonnet levels, the price drops all the way to $500. Another halving of the price.
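To make the cost-per-intelligence math concrete, here is the arithmetic across the numbers he works through in this comparison. Figures and model labels are as read in the video, so treat them as illustrative rather than authoritative pricing:

```ts
// Cost per benchmark point, using the run costs and Artificial Analysis
// scores quoted in this section (all figures as stated in the video).
const runs = [
  { model: "GPT-5.5 (top tier)", cost: 3300, score: 60 },
  { model: "Opus 4.7", cost: 5000, score: 57 },
  { model: "GPT-5.4 xhigh", cost: 2800, score: 57 },
  { model: "GPT-5.5 medium", cost: 1200, score: 57 }, // "under $1,200"
  { model: "Sonnet 4.6", cost: 4200, score: 51 },
  { model: "GPT-5.5 low", cost: 500, score: 51 },
];

for (const { model, cost, score } of runs) {
  console.log(`${model}: $${(cost / score).toFixed(0)} per point`);
}
// Same intelligence, diverging cost:
//   GPT-5.5 medium vs GPT-5.4 xhigh: 1200 / 2800 -> ~57% cheaper
//   GPT-5.5 low    vs Sonnet 4.6:     500 / 4200 -> ~8.4x cheaper
```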
And this model, GPT-5.5 low, is as smart as Sonnet 4.6 max, which is the second most expensive model at $4,200. And this model came out very recently as well. So again, if we look at this based on the intelligence level per dollar, the cost per IQ point here has dropped by 8x in just a few months. Sonnet 4.6 came out earlier this year: $4,200 to run this benchmark, 51 points on the bench. GPT-5.5 low, which I would argue is a much smarter and nicer-to-work-with model: same level of intelligence and an eighth the price. So, I just wanted to jump on that, because I don't necessarily agree that the costs are increasing in the sense that every month, if you do the same thing, the number goes up. It's that we are doing more with tokens. So, the amount of tokens we're using is going up, but we're also finding ways to get more intelligence per dollar. So, the cost per task and the cost per level of intelligence is going down. So, we need to be realistic about those differences. It doesn't mean that companies aren't getting a little screwed here, because they hired a bunch of developers. Let's say a company pays their devs 5 mil a year. Last year they paid 100k in Cursor. This year they've already paid half a mill in Cursor, and they're only a few months in. That is a legitimate cost that they are eating. That sucks. So on the bill, it does look like prices are going up, but by the metrics on how things are actually changing and improving, much less so. We're just using the AI more. Just want to jump on that cuz I agree with almost all of the rest of this.

"Being successful with this approach to coding agents hinges on a rather crucial element. Only a skilled developer who's thinking critically and comfortably operating at the architectural level can spot the issues in these thousands of lines of generated code before they become a problem. Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been proven to impact negatively." This is all around the term cognitive debt, which is a really good phrasing. Many relevant people like Simon Willison and Martin Fowler are describing the experience of cognitive debt directly. They talk about getting lost in their own projects and finding it harder to confidently add new features. They can move faster, but they lose the deeper sensemaking that connects decisions to intent and the intent to the code.

So, my hot take on this is that if you weren't already experiencing a bit of it, you weren't shipping fast enough. I can't tell you how many projects I had that I went really hard on for a month, got to a pretty good state, then had to go do something else, came back to it, and had no idea what the [ __ ] was going on anymore. This is inherently the way humans work when they're doing enough different things. You can't keep the details of all of the context of stuff in your head. The problem, as I perceive it here, is that more people became Theos because of AI. I used to ship like five-plus projects a year before AI, cuz I just loved building things and putting them out there for people. That took a lot of, not expertise in the traditional sense, but it was just the way I built, the way I thought about things, and how I would cut project scope down to get a deliverable, usable version as efficiently as possible. Hell, this is why I made the T3 stack.
I wanted it to be easier to build things with confidence, rapidly, that would work reliably without having to fully understand all of the pieces. I was kind of an early vibe coder. Not because I was using AI to write all my code, but because as soon as it worked, I would move on to the next thing and forget what the hell I had just done. I was just willy-nilly deleting giant chunks of code, writing codemods to update thousands of things, hoping that it worked, and just checking if the build came out correct in the end. I have been slinging far too many lines of code for my level of expertise for my entire career. And it's fun seeing everybody else have the problems that I have had for the better part of a decade now. Specifically around remembering where everything is in the codebase, and having confidence when you review code that things are actually changing the right way.

I would also argue, and many would push back on it, but I understand both sides, that my ability to do this was novel and potentially exceptional. It no longer is. The thing that made me special on a team is that I could unblock and make things move and hit deadlines that didn't seem possible before. I was a bargaining chip at my job, where my team would negotiate with other teams to lend me to them to unblock something that they were complaining about, so they would unblock us on something we had to do. Very common that I would be traded around to make things move faster. Now you can just prompt your way through it. The issue is I had to work to get there. You can argue whether or not I was an expert and that's why I could do this. But you can't argue that it was a novel thing that I could do this. It was novel enough that I was paid exceptionally well and had a really good career, because I was able to jump between a lot of things, and I could deal with the cognitive load and the cognitive shifting going between all of these different things. Now everyone is doing it, and most people aren't built for this, and it sucks.

There's a separate problem that combines with this, though. The reason I could do this is I deeply understood the building blocks I used to build in this way. I might not remember exactly how I assembled the things, but I knew the pieces I used to assemble well enough that I could infer why a problem happened based on my knowledge of the pieces. AI disincentivizes you from learning about the pieces. And I think that's the biggest problem. Humans are very pain-averse. Feeling dumb hurts. When you're trying a thing and it doesn't make sense, you feel pain. When you try a thing and it just goes as expected, you feel good. AI has made it easier to avoid that pain and feel that reward. And what used to be an upfront cost you would pay to learn the pieces, and then you could get the reward of solving the puzzle, is now a slot machine. And your choices are: go learn the pieces so that you can actually solve the puzzle correctly, or keep pulling the slot machine until hopefully the correct answer comes out, because each pull hurts a lot less than reading the docs for a language you don't understand, or learning a library that doesn't map with your mental model properly, or debugging something that feels hopeless. I learned about this from skateboarding. The reason most skaters give up before learning to ollie, much less kickflip, is because it feels so bad.
You hate the feeling of seeing others so effortlessly jump on their skateboard, ollie down stairs, and do all these fancy tricks, and you can't even get the board to come up off the ground with you. And then maybe you try a little too hard, and you hit your shin really hard, and now walking's uncomfortable for a few days. Most people give up before they learn those tricks, because the pain is so great and the feeling of stupidity and incompetence is so strong that they don't want to push through it. At least in code you didn't have the physical pain. You just had to feel dumb. And I'll be real, I kind of miss feeling dumb. I haven't gotten to as much lately, and I try to find ways to, like intentionally picking languages I don't know, like Rust, when I build a new project, so that I have to not get it and start learning things about it. But every time I start feeling dumb, I feel the itch. I feel the temptation to pull the slot, to just do one more run in hope that maybe this time it'll generate a solution and now I don't have to learn the thing. And I think other people just don't have the same willpower. And I don't think that is that exceptional a thing. I don't think it was even necessary before. But your ability to say no as a developer has never mattered more than now. And your willingness to feel stupid, to go through the details and not let the AI bail you out, is a superpower.

Personally, I have been learning a lot more than I've ever learned because of this. Both because I'm learning all about AI, and the business side especially, but also because I'm finding more fun in things that I felt too dumb in before. Like, I'm getting way deeper into cryptography. I'm making cryptography challenges. I never thought I'd be able to do that. I was always fascinated by it. Now I'm doing it and I'm asking questions about it and I'm having back-and-forths with the agents, and now I don't feel as bad bothering my friends about it, because I'll exhaust the AI's capability to answer my questions first, and now when I bug a friend that is deeper in the space, I'm asking better questions. Then there's also the aspect of fear here, and I have a feeling we'll talk about that later, so I'm not going to dive into that just yet. Let's get back to the article cuz it's already making me talk too much.

The next section is called "Not just another abstraction." "A common refrain we hear in the community is that programmers are just 'moving up the stack' and into a different type of abstraction. Whether or not these tools really are an abstraction layer in the first place is not a settled matter. A high level of ambiguity is not a higher level of abstraction." Woof, Lars. What he's referring to here is the idea that we've already done abstractions. Like, we used to do punch cards, and then assembly made it so we didn't have to poke the holes in the cards. Then C happened, and it was much easier to write and then generate code that was assembly for different systems. Then we got C++, which would include a lot of the things that you would have to build yourself previously in C. Then we got languages like Python and JavaScript that didn't even have to be compiled. They just ran in a virtual layer, a runtime that was built in these other languages. We kept abstracting higher and higher up. And let's be real, how many JavaScript developers can read assembly? I mean, let's be real, how many JavaScript devs can even read JavaScript at this point? But you get the idea.
These abstractions would keep you from knowing as much about the layers below. But the best engineers had strong incentives to learn at least one layer above and below where they were. All the best C++ devs I know are very familiar with V8, the JS runtime. All the best JavaScript devs I know are very familiar with V8 and its quirks. The mid ones don't understand the things above and below them. But the great ones tend to be at least one layer, maybe two or three, above and below where they tend to live. But is natural language another one of these abstractions? Is markdown to JavaScript what JavaScript is to C++? I used to think it might be, but I'm learning now the answer is probably no. Let's see what Lars has to say about this.

"If we put the abstraction conversation to the side, it is true that programmers tend to be wary of new languages and new ways of programming. When Fortran was released, programmers were skeptical of it as well. They had similar claims: it was likely to introduce more bugs and instability; writing assembly more directly was more efficient. Later, there would be discourse around the integration of compilers introducing too much magic into the process. These were normative arguments around the fear of what might be lost if these new technologies were embraced. The difference with what is happening today is that those previous fears were speculative and theoretical. In just the short few years that AI tooling has existed, we've already seen significant impacts." These aren't just junior devs, but even those with a decade or more of experience. Here are some posts that he grabbed from Reddit.

"Losing my ability to code due to AI. Hello everyone. I don't see it come up a lot, but even after a few years of coding, using AI on a regular basis for over a year has made me feel a lot more insecure about my coding abilities. I feel like my skills are really deteriorating while simultaneously feeling like there might be no need to know how to code at all."

"Handling feeling dumber or like losing skills due to the need of using AI. Recently, my company started enforcing using only AI for all software dev." That's a terrible idea. "AI writes the codex." I love that this dev is so deep that "code" autocorrects to "Codex." "It writes the code, then agents review the code, etc. We're supposed to be architects who only look at outcomes and not the code. For context, I have 9 years of experience. I'm 31, a father of a 1-year-old, so let's agree that a side project after hours is not an option. Every time I use AI to do everything, I feel like I'm losing my skills, that I'm becoming a worse professional, etc. Especially that if you want to get a new job, you're mainly and mostly graded based on your technical skills."

"Letting AI do 100% coding fried my brain. Help."

"AI reliance and cognitive decline. I find myself using more and more AI to speed my efficiency. Whether it's organizing a schedule or a quick screening of code to actually writing small snippets."

There are definitely people experiencing this. I was afraid I would experience this and somehow haven't. I know I keep making the same comparison over and over again. I think the fact that my job wasn't just coding, and hasn't been for a while, prepped me better for this. I've had a team of engineers for the better part of a decade at this point. And realizing my best use of my time isn't being in the weeds writing the code.
It's helping the team execute more effectively while understanding the system well enough to help them debug when things go wrong. I write less than 5% of the code across our products, but I solve over half the outages. And I'm really proud of that. My understanding of the system has gone up, not down, over time. And the AI tools are helping me maintain that understanding and guide my team better as we architect our systems. I would bet my ass that every single one of these four people has never had to manage a team before.

I also see Chris Titus Tech in chat with a similar experience here. He's coming from a sysadmin and infra background. Most of his coding abilities used to revolve around scripting, and now with AI, it's really opened so many doors. He could jump into low-level languages he would have never had time for before. And that's what I experienced too. There's a lot of things that weren't worth me learning, because the friction of going through the long window of dumb, and the burden I would be to other people asking questions, was too high. AI has lowered that to the floor. So the right mindset here can make you way more productive and way more capable of learning, in a way that's really cool.

I'll say that less experienced engineers, like junior engineers, career transitioners, or people that just didn't get it before, are getting a little more screwed here. My chat sent me this tweet from Austin Kennedy, who's 22 years old. He thinks Claude Code is deteriorating his brain. Every single day for the last six months, he's had six to eight Claude Code terminals open, waiting for a response just so he can hit enter, 75% of the time. It's done something to him. Apparently, his friends bring it up pretty frequently. None of them feel as sharp as they used to. He doesn't know if it's just them or if others in their 20s are feeling the same thing, but it's something he's been thinking about. Yeah, if you didn't get through the years of friction to really understand these things, you're going to get addicted to the slot machine. I've seen a lot of people falling for this in particular.

Also, the worst ones are the ones who have always been really insecure. The people who, I'll be real crude here, the people who don't have impostor syndrome. They are just impostors. And I've seen some horrifying examples of this. People who have been working at Google for four years that don't know how to use SSH. Like, I've seen this a lot. And those people are now addicted to the slot machine. And they went from being kind of bad, but at least they could write some Python here and there, to just a net negative on the team. Funny enough, I've seen that same person instinctively go to ChatGPT and ask where they left their keys, because they're so used to asking it everything. This archetype of person, I would argue, was never particularly useful. And the AI is just continuing to erode their already eroded brain. The gap between the great and not-great engineers has grown massively, and AI has accelerated that. And again, examples from people in the industry saying that the juniors on their team are shipping really fast, but they don't know how to debug anything they didn't actually write the code for. This dev hired a junior who learned to code with AI: "He can't debug without it. Don't know how to help him." "AI debugging underrated. It's a specific skill separate from building with AI."
I don't think you should be debugging without AI at this point, because if you have a bug and it's affecting users, everything you can do to increase the likelihood that you solve the problem is something you should be doing. So, when I had an outage a few days ago on Ping, our video product: I haven't touched that codebase in 4 years, minus like testing agents upgrading it every once in a while, but it was built on the OG T3 stack, a stack that I invented. I know the layers very well. I know where the failures can be. So, I grabbed the first error I hit. I pasted it into a Codex instance and immediately went back to the browser to try and get more info and debug it. I ended up figuring out roughly what it was. It was Prisma getting a null for an ID when it was trying to do a referential integrity check to get data for a join that it was doing, and it just got a null and failed as a result. By the time I had figured this out, the agent that was running had its own entirely different theory, but I didn't care. I opened up another tab in T3 Code and asked a different question: which of these table referential integrity checks is the most likely to have a problem? And I switched over to fast mode. While it was running, I was going through the code and going through our error logs. I saw the notification that the request was done. I went and checked, and it said that it was very likely to be the link between the user table and the join for which users are in rooms. I knew what I had to do at this point. I had to remove all of the links between the rooms and the user that were for this dead user. I didn't want to write the SQL query wrong. So I told it what our schema was and asked it to write the query to remove all the rows that match this particular pattern. And then it worked, and then I was done.

So I did around three AI agent requests while I was doing this debugging. I don't even know what the first one output. I just had it run in the background while I was debugging, found a much more likely root cause, and asked a second agent about that. It correctly identified what table the problem would be from. I asked it to write a query to figure out what row was affected. It did. I pasted it. I got the right result. I pasted the results back into my agent. It told me what query to run to clean it up, and then it was fixed. This was me and the AI going back and forth, sharing our understanding of the capabilities of my system and what we were experiencing, in order to debug more effectively. Note that this would not have worked at all if I didn't know the system already. I also would have taken much longer to solve this if I didn't have AI, and I would have been more scared about my solutions, because I didn't want to go write the [ __ ] SQL for this when I was at an event in Miami. Yeah. So, I am very okay with using AI in the debugging process. Anybody who says otherwise is either stupid or bad faith. The problem is people leaning on AI because they don't understand the system, and their refusal to learn the system because it makes them feel dumb results in them only getting anything done with AI. Different problems, and I want to be very clear with the distinction here, because again, these tools are really useful when you know what you're [ __ ] doing.
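For the curious, the cleanup step he describes would look something like this. The table and column names here are hypothetical stand-ins (the real Ping schema wasn't shown); the shape is what matters: find and delete the join rows that reference the dead user.

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Hypothetical schema: a "RoomMembership" table joins users to rooms
// via a "userId" column. Names are illustrative, not Ping's actual schema.
async function removeDeadUserLinks(deadUserId: string) {
  // First, confirm what we're about to delete: the dangling rows whose
  // user no longer exists.
  const rows = await prisma.$queryRaw<{ id: string }[]>`
    SELECT id FROM "RoomMembership" WHERE "userId" = ${deadUserId}
  `;
  console.log(`found ${rows.length} dangling membership rows`);

  // Then remove them, so the join stops hitting a null during
  // Prisma's referential integrity check.
  await prisma.$executeRaw`
    DELETE FROM "RoomMembership" WHERE "userId" = ${deadUserId}
  `;
}
```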
Back to the article. Lars says it is actually different this time, and I agree. This is different from a C++ dev moving to Python or Java because, as he says, they didn't complain about brain fog when they made the move. They complained about the awful object-oriented programming patterns that they had to adopt in Java. When a sysadmin moves to AWS, they don't feel like they're losing their ability to understand networking. On that note, there's a lot of people who would argue Vercel cost them their networking understanding, but that's a different conversation for a different time. A senior engineer losing their coding edge and becoming rusty over time, as they move into managerial roles and practice coding less often, is not a new phenomenon. This was the natural progression of expertise. An engineer would put in decades of coding, experience lots of friction, and their experience would be logged, and that would result in them having the time and experience necessary to solidify their skills and wisdom, and they could apply that wisdom when their job became less about syntax and more about higher-level architectural decisions. Those individuals are not only exceedingly rare, but you won't get the next wave of seniors if we're all abdicating the friction of writing, problem solving, and debugging. Ding, ding, [ __ ] ding. This is the concern I have.

I am predisposed to being good with this [ __ ] because I already coded in a way that relates to this. And then when I moved to leadership positions and I was advising many different teams while also coding and being in the weeds, too, I got to build these skills really early. And I'm super thankful to the people I worked with for letting a stupid [ __ ] 22-year-old step way outside of their role to start steering architectural decisions and helping in outages that they shouldn't be near. Being involved in like P1 incidents that they should have nothing to do with, and learning not just how things work, but more importantly, how they fail. I'm really [ __ ] proud of my debugging skills. My team's almost annoyed that I can come into a codebase I've barely touched in years, because there's some outage, and say, "Oh, it's probably that." And I'm right like 70% of the time. And if I'm not, I get it right within the next three tries, usually. My systems understanding and my ability to link one, two, and three to form the picture: that's what I'm good at. And I wasn't initially. I had to build that skill up, because I was doing too many things at any given time, and I was lucky enough to be surrounded with people that were cool with me stepping way outside of what I should have been doing. I will say that, with the right mindset and focus, I do think the skills I built can be built independently now, if you build a big enough system and put the effort into understanding where it fails. On the other hand, the incentives for that are going down, and the interest in building your fundamentals is going down with it.

I used to be the person that would push back on learning like fundamental CS. Like, I didn't think you needed to go memorize all of the different sorting algorithms and all of the different big O notations for different solutions to various problems. Like, I don't think most devs should need to know how to invert a binary tree. But the moment you have to, you should be able to figure it [ __ ] out. And this is always my take: the fundamentals don't matter until they do. And you should be in the right mindset to learn them when it makes sense to, when you have a problem that benefits from it. If something is slower than it should be and you notice that your sort algorithm takes too long, it's time to learn better sort algorithms.

I don't know if that take of mine will carry over to this era. I would hope that when you're running into problems because you're fetching all of your data through useEffects, you'll look into how other people do it and you'll find React Query, but I think it's more likely the agent just does it for you and you never learn the difference. I suspect that because I've already seen a lot of engineers do that even before AI. The number of engineers I saw making their code comically worse by fetching all of their data in these weird patterns that were super inconsistent and unreliable across their codebase, who wouldn't use React Query because they don't want to learn another new thing. [ __ ], you're basically unlearning. It is less code and less effort. It is a simple wrapper for async functions. It couldn't be easier.
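Since it comes up here, this is the whole pitch in one sketch. The endpoint and data shape are made up for illustration:

```tsx
// Hand-rolled useEffect fetching vs React Query, the contrast in question.
import { useQuery } from "@tanstack/react-query";

async function fetchTodos(): Promise<{ id: string; title: string }[]> {
  const res = await fetch("/api/todos"); // hypothetical endpoint
  if (!res.ok) throw new Error("failed to fetch todos");
  return res.json();
}

// One hook call replaces the useEffect + useState + loading/error/race
// bookkeeping people hand-roll, and adds caching and dedupe for free.
function Todos() {
  const { data, isLoading, error } = useQuery({
    queryKey: ["todos"],
    queryFn: fetchTodos,
  });

  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong</p>;
  return (
    <ul>
      {data!.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
}
```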
But the idea of another thing to learn was enough of a turnoff for the average low-IQ dev that they wouldn't [ __ ] bother. And now that person's been given a lever they can pull to make it matter even less. That sucks. AI is going to take bad devs and make them atrocious.

But I do still hold a little bit of hope that the devs who love this, who really care about how things work, not just the feeling of them working, will take advantage of these tools to accelerate how quickly they learn and grow. And I have already seen this. It's a small number of people, but I have them in my community. People who are doing way more than they should be at their young age, without any real career experience yet, because they're building things bigger than they have any right to, and they're curious about why they're breaking and how to solve the problems at fundamental levels, and I love that.

I like this framing here, too: what's happening now is a trend where devs who have never had the longevity, or the 30-plus years of friction that lead to the deep understanding, are being moved to higher-level workflows requiring those same skills to manage the AI agents that the senior engineers took decades to obtain. Yep. I'll go a step further. You had to be a good engineer to have a big codebase. Either you were hired to help in that big codebase, or you built it yourself. There was an intelligence and competence requirement, a friction that was necessary, like a "you have to be this tall to come in" type thing. You had to be this competent and this experienced to build real work before. Now Claude can get you into a position where you're maintaining a codebase that is bigger than any engineer with 5 years of experience would be comfortable with, and you learned how to code a month ago, if you even actually learned. That is the biggest issue: there's devs who are way out of bounds for where their capabilities are, and they're not using the tools to learn to better their capabilities. They're using the tools to reach past their capabilities, and then just throw up their hands when things break.

Another great point from Chris Titus Tech. He's on fire. He'd expand this and say that if you do open source and have contributors, it will help point out the mistakes. I'll go even further than you are here, Chris. Open source maintainers are massively outperforming average devs in the AI world, because they're used to slinging slop and dealing with random [ __ ] from various capabilities and levels of engineers, many of which know better than them, many of which know less than them, all of which are opportunities to learn.
And this is actually why I think you're an exceptional dev despite not being a dev, because you're doing something way harder. You're trying to fix Windows. But aside from that, you're also trying to maintain these real, legitimate open-source projects people rely on every day. I say this as a person who relies on winutil every time I do a new Windows install, which is far too often. That puts you in a position where you learn so much more, and those details matter so much more. You've effectively let yourself do 5 years of real-world experience every year with the stuff that you do, even though you're not an engineer in the traditional sense, which is silly and dumb, and one of the many reasons I want to help you make more [ __ ] money. So yeah, if you're doing real open source stuff that people rely on and use, whether you're a contributor or, even better, a maintainer that is helping go through stuff, your level of experience for where you're at in your career will be significantly higher than you would otherwise expect. Like, I know people who have one year of experience that is mostly open source work that run circles around somebody with five years of experience at Microsoft.

The author does remind us that senior engineers aren't immune. Simon Willison, the co-creator of Django, good friend of the channel, is one of the best people in the AI developer space for talking about how this actually affects us as devs. He has 30 years of experience. He's been coding since I was born. He reported that he doesn't have a firm mental model of what the applications he builds can do and how they work, which means each additional feature becomes harder to reason about. And I can absolutely understand that.

The skilled orchestrator problem. Buried in a recent study by Anthropic was a surprisingly honest moment when speaking about the risks of engaging with coding agents on a regular basis: "One reason that the atrophy of coding skills is concerning is the paradox of supervision. Effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from overusing AI." A director of software engineering at LinkedIn, who has 50 engineers reporting to them, has noticed it proliferating throughout the organization and requested his team not use them for "tasks that require critical thinking or problem solving." To grow skills, people need to go through hardship. They need to develop the muscle to think through problems. How would someone question if AI is accurate if they don't have critical thinking? Yep. I don't know what it looks like to build these skills now. It's very different from how it used to be. I'm sure there are new ways to, and I'm sure they're even more effective using these tools, but I don't know what that looks like, because I already have the skills. I can't properly reset my brain to before I knew all of these things. There's also the question of what constitutes overuse. We already have evidence, both data-driven and anecdotal, that these skills can atrophy and dissipate rather quickly, sometimes even within months. This is the contradiction that has many AI boosters talking out of both sides of their mouths: the use of coding agents is actively diminishing the very skills needed to effectively manage the same coding agents.

LLMs accelerate the wrong parts. Oh boy, this is going to be a fun section.
"Contrary to the current narrative that is being espoused, we didn't necessarily need to write code faster, especially code that we didn't fully understand, and particularly in huge swaths that we couldn't review in reasonable time frames. Before AI, a good dev's priority list might look like the following: first, understanding of the code and its relation to the codebase; second, if the code is aligned with the documented and efficient standards; third, as few lines of code as needed to accomplish the goal while maintaining readability; and fourth, turnaround time."

I don't agree. Depends on what type of good dev you're referring to here, because there's a lot of things that make a dev good, and everyone will disagree on what those are. Some people think a good dev is somebody who writes meticulously detailed specs and writes five unit tests per line of code. Others might think a good dev is the person who understands the user and the codebase well enough to make the two meet. Others might think the best dev is the one who comes up with a novel solution that is far outside of the way anyone else thinks, that no one understands until 5 years later, and then we're all using it. I think good devs are people who solve real problems. I like working with devs that understand the codebase, that are writing code that is well aligned and standard, that are writing as few lines of code as possible, which is actually very rare. Or there's a lot of great devs that don't even do this and have good turnaround time. But I have found that any one of these skills at the upper end is novel enough that it makes you unique. Like, if you write really efficient code, that is a thing you're known for. Like, I know Mark, my CTO, is the code golfer. One of his favorite things is to see if he can do a whole Advent of Code problem in one line of JavaScript. And then turnaround time, as I mentioned before, was one of my skills. And it was enough that it boosted my career meaningfully: I could come in, find the things that made the roadmap unnecessarily complex, help trim, and get us really locked into the thing we actually wanted to build, so we weren't distracted by all the things that took most of the time and brought none of the value. So I would say a great dev has all of these skills, but a good dev is somebody who has the right compromise between them.

"The problem is that agentic coding and LLMs in general are inverting this list." Yeah, most devs should not be allowed to code fast. That I agree with. As somebody who coded fast a lot, watching others try to keep up resulted in a lot of pain. And the solution was often for me to slow down, not because my code was getting buggy, but to get everybody around me to stop trying to keep pace, because when they did, everything fell apart. And AI now has everyone trying to keep pace, and now everything is falling apart. "Their capabilities and usage tend to focus on speed by increasing the amount of code that can be generated in a specified time frame. Speed's a natural byproduct of high aptitude. When it's forced, it always leads to lower accuracy." Okay, I do love this framing. Actually, this is perfect. I earned my right to the speed. Y'all haven't yet. Okay, many of you have. I'm actually really impressed with how competent so much of my audience is when I meet you guys at events and things. I'm blown away with the [ __ ] so many of you are doing. Your annoying-ass co-workers have not earned the right to ship fast yet. And the result is slop.
Julius on my team has absolutely earned the right to ship fast. He's shipping faster than I ever have and is pushing the limits of what agents can do, but he's also very meticulous about how things are structured, more than almost anybody. And as the author says, the integration of AI tools does not tend to focus much on deeper understanding or conciseness. And I want to be really clear, because every time I mention something like this, the response is, "Oh, we need better tools that do that. Those should make a lot of money." And then I get five people coming up to me at the OpenAI GPT-5.5 event trying to get me to invest in their company making AI education products to help people learn better with AI instead of just using the AI. Who do you think makes more money, casinos or people selling books about casinos? There's pretty obviously one that does much better. I have been burned on so many education investments, even the ones that were doing revenue, that I just don't believe that people want this at all. Given the option of a slot that makes you feel good and a lesson that makes you feel bad, 99% of people are going to pick the former. There is no business building this, which sucks: human nature and capitalism prevent tools that educate you from being successful. But that's your guys's fault as users, not business's fault. Doesn't matter how good the business does; if users don't want it, it won't work. Believe me, I'm speaking from a lot of experience here. It is absolutely possible to use these tools, if you're determined enough, in a way that will actually make you learn more and understand systems better. But that's not how they're being used. Lots of companies have force mandates that you have to be doing more with AI, and they have crazy leaderboards and stuff that are incentivizing bad usage. And all the hype around that is demonstrating just how bad things are getting.

Coding equals planning. This is going to be a fun section. "There's a divide between devs that isn't highlighted as much. Some of us plan and think better with code. Thinking and working in code isn't just meaningless drudgery. It forces you to think about things on a technical level that involves everything from security to performance to user experience to maintainability." I very much agree here. I like planning in code. A thing I talk about a lot is what got coined at Twitch as the "Theo method." Instead of writing a spec assuming we know how everything works, that is super long and tedious, and then building the product and realizing the spec doesn't make any sense, I would start with a really simple, minimal implementation that was the minimum viable shape of the idea. And in building that in just a few days, often I would learn a ton. Sometimes it would even be good enough that we could ship it as is. Other times I would find all of the things that make it annoying to implement, and then I can write a good spec based on what I learned. Death Fudge put it perfectly in chat: write once, throw it away, and then build it. Right. Absolutely. And if AI helps you do that, awesome. I did this so much it got coined under my name at my last company that I worked at traditionally. It is really powerful to get in the weeds first. You can't really do that in coding without coding. I write much worse plans when I'm not in the codebase. That's just reality. And when I'm working with my team, I'll often do a first pass before proposing the thing to the team, so I have my roots to base my proposals on. And guess what?
Plan mode is not that. Plan mode is not that at all. In a recent interview discussing spec-driven development, Dax, the creator of opencode, which is an open source coding agent, no less, is quoted as such. (It's also worth noting Dax isn't the creator of opencode. He's a devrel at opencode.) Here's what he had to say: "When working on something new or something challenging, me typing out code is the process by which I figure out what we should even be doing. I have a really tough time just sitting there writing out a giant spec on exactly how the feature should work. I like writing out types. I like writing out how some of the functions might play together. I like playing with the folder structure to see what the different concepts should be. And this is all stuff I think most people, most programmers, have always done." I don't agree there. You would be amazed how many programmers don't do any of this. They just wait till the ticket is cut properly, and then they spend way too long writing a document, and then hope they can get promoted before they even finish the feature. Most devs don't do this, because most devs are roughly average or worse. Good devs do this, but most devs don't. I do agree with him that we shouldn't necessarily stop doing this. There's no good reason to, and it's how we figure out what to do and how to do it right. The one big part I'll disagree with here, though, is that when you get it wrong, it's never been cheaper to fix it. I always had a sense of dread when I would commit to a new folder structure, or a new way to do type systems, or a new way to do data fetching, because if I'm wrong, getting it out is hell. Now I can get it out in a for loop, and that's awesome. It has made my willingness to experiment with these things go up a ton, and I'm trying patterns I never would have tried before, cuz they had way too high a risk of being [ __ ]. But they turned out to work really well, and I'm surprised, but it never was worth the effort before. So again, there's two sides here.

Back to the article. "What you say is often not what you mean, and LLMs fill in ambiguity with assumptions, or even worse, hallucinations, which leads to more review, more agent revisions, more tokens burned, and more disconnection from what is being created. Inversely, you can marvel at the most beautiful, unambiguous, perfectly structured prompt you've ever written, and the LLM can still output a hallucinated method, because it's fundamentally a next-token prediction engine, not a compiler. You cannot replace a deterministic system with a probabilistic one and expect zero ambiguity." As a JavaScript dev, I am offended.

Oh boy, this will be a fun section: vendor lock-in. "When I was browsing LinkedIn during the Claude outage that occurred a bit ago, I noticed numerous posts highlighting that certain devs and engineering teams were at a standstill. Their workflows, their own coding abilities, had already reached a point where they were largely dependent on these vendors. What used to be a skill that they could execute with just a keyboard and text editor suddenly required a subscription to an AI model provider." I haven't self-plugged in a bit, so please permit me to do it this one time. If you're experiencing vendor lock-in right now, that is your fault. There are lots of great tools, and I will obviously plug my own with T3 Code, that don't just give you the ability to switch between models. They give you the ability to switch between subscriptions and providers.
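The shape of that kind of provider-agnostic setup is simple enough to sketch. This is illustrative only, not T3 Code's actual implementation, and the call shape isn't any real SDK's API:

```ts
// Fall through an ordered list of providers instead of depending on one.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  for (const p of providers) {
    try {
      return await p.complete(prompt); // first healthy provider wins
    } catch (err) {
      console.warn(`${p.name} failed, trying the next provider`, err);
    }
  }
  throw new Error("all providers are down");
}
```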
There is no world in which Claude, Codex, Cursor, and OpenCode are all down at the same time. And it's actually quite valuable to hop between these things. I can't tell you how many times I've had GPT-5.5 debug Opus slop, or had Opus come in and make a slightly better UI for some GPT-5.5 code. A lot of devs are doing this wrong. They're relying on one thing way too hard, the same way a lot of devs rely on us-east-1 too hard. The difference is that somehow Anthropic is less reliable than us-east-1, but it's the same problem. And there are lots of good open-source solutions to it. Again, T3 Code is a fully open-source desktop app, and now a web app too, that you can use to run all of your agents on your existing subscriptions and hop between them at a moment's notice. I have had times where Claude was down and I wanted to use it, but I hopped over to GPT-5.5 and was fine, or even hopped over to Cursor, where they were using Opus models through other providers and were also fine. You can also deploy Claude on things like Amazon Bedrock, which gives you more reliability and better uptime than using Claude directly. So I don't love this vendor lock-in point. Like all things, reliability is your concern as a dev. The problem here isn't that we're all stuck relying on one thing; it's that stupid devs have nothing else to do when their thing goes down, so they complain about it online.

An interesting point here: you can't predict your token cost. Model providers are heavily subsidized, and the models themselves are built on shifting sand. Every new model released follows the same pattern: high benchmarks, followed by hype, followed by the reality of usage and everyone complaining about the model being nerfed and burning through 2x or 3x as many tokens to get the same job done. This is a Claude complaint. This is absolutely a Claude complaint. This is not how it works in OpenAI land. You guys are just so deep on Claude Code that your brains are falling out. Everyone, whatever side you're on, whether you're a Claude Code diehard or a Claude Code hater, Claude Code specifically has ruined the discourse around so many of these things. Oh, don't quote that Primeagen video. Not that video. Not the one I had to tear to pieces because it was so wrong. Oh no, don't quote that. This article was so good up until this point.

You know how much your employees cost, but you have no idea what your token costs will be day-to-day, month-to-month, year-to-year. If all of your costs are fixed, you don't have a real [ __ ] business. I've never worked with a company that was doing well and had entirely fixed costs. Everything is flexible. Everything fluctuates. Computer costs skyrocketed because of tariffs and RAM getting more expensive. Egg costs skyrocketed for restaurants. A fixed-cost business is a failing business, generally speaking. I'm just going to skip this paragraph because I don't agree.

To be frank, I generally think vendor lock-in arguments are competence failures. Good vendors make it easier to do less work, and the move out of a given vendor is just the adoption of the work you avoided doing. The example I often give is Vercel. Moving off of Vercel is easy: you just have to write the code you hadn't written yet. I don't write code that only works on Vercel. I write code that works on Vercel and is missing pieces I didn't have to write, because I was using Vercel and Vercel handles them.
When I leave Vercel, I have to go add those pieces back in. Using Claude Code means I don't have to write the code by hand. Leaving Claude Code doesn't mean I can't write code anymore; it means I have to go write it myself again, or use any of the fifteen other providers. There are so many options. I just don't like this argument. Vendor lock-in is not the right framing for all of this.

So, let's go to the last section: demoting AI's role. The author is certainly not advocating for typing all code out manually. Programmers have always been looking for ways to create code without having to write it. That's why we have things like Emmet, the shorthand expansion system built into VS Code. I always hated Emmet personally, but to each their own. There are traditional autocomplete snippets as well; I used to be the biggest anti-snippet, anti-template guy, and AI means I don't have to care anymore, which is great. Even COBOL was designed to encapsulate more instructions with less writing by using English-like words such as MOVE and WRITE. jQuery's motto was "write less, do more." LLMs are another addition to this array of codegen tools. What he's advocating for, though, is leveraging LLMs and coding agents as a secondary process, in a way that doesn't sacrifice individual skill at the altar of productivity. You can flip the script and lean on them to brainstorm the planning parts of the process while staying actively engaged throughout implementation, delegating to them on an as-needed basis. You can leverage the productivity gains and mitigate the comprehension debt.

I'm going to go through his workflow, then discuss mine and my framing, which might be a little different. He uses LLMs to help generate specs and plans while he facilitates the implementation; it's an inversion of the orchestration flow. He's still manually coding anywhere from 20% to 100% depending on the task. He uses LLMs to investigate, but not to write the plan. That's interesting. I very often write pseudocode when I engage with the models, closing the distance between the request and the generated code. Yep, absolutely. I love writing pseudocode. I like going back and forth on pseudocode too, where I'll have the model generate the pseudocode, I'll make recommendations, we'll iterate, and then I'll say, "Okay, that's good. What does it look like in my codebase?" He also uses the models as delegation utilities for ad hoc code generation and interactive documentation, as well as research tools, so he can constantly ask questions, iterate, refactor, and gain clarity. Okay, now we're cooking. We'll come back to that in a sec. He never generates more than he can review in a given sitting. If it's too much to review, he slows down and splits the task up, manually refactoring when needed to ensure a comprehensive understanding of the end result. And he'll never ask an LLM or agent to implement something he's never done before or couldn't do on his own, except perhaps purely for educational or tutorial purposes, with the result often discarded afterwards.

Okay, he's dancing around a concept here that I don't know if he has embraced yet. The harsh reality is that most code wasn't worth writing before if it wouldn't be run thousands of times. Any task that code can solve, a human can too, just not necessarily as efficiently. But if I only have to do the thing once and it would take a thousand lines of code, it wasn't worth writing that code. I would just do the thing by hand.
The important thing is to think about how important each function you're writing is. Is the code you're writing going to run thousands of times a day for your users? Is it going to run once in a migration? Or maybe you're doing some data analysis, you want to dig something out of 500,000 rows in a CSV, and you're only going to run the code once; you just want the info. Understanding how frequently a piece of code is going to run is essential to understanding how important that code is, and how important it is to understand that code.

Once you've internalized this, you can go further with it. When you realize there's a lot of code where the quality doesn't matter, because you're running the thing one time and never thinking about it again, you start writing code for things you wouldn't have before. I have written so much code for [ __ ] stuff, like one-off calculators for checking numbers from one specific task, or a 2,000-line JS file to manage assets on my NAS when I'm in the middle of transferring a bunch of files between [ __ ]. Those things were never worth writing code for before, because it would have been faster to just sit there and do the task yourself. Now, all of a sudden, it is way more valuable to write code for those things. That has been the biggest shift for me: using AI to write code that otherwise wouldn't have been worth writing, and at the same time using AI to better understand the code that is worth writing.

The harsh reality is that most people working with agents can't tell you which code is running at all, much less which code runs a lot and which code runs once. If you don't have that level of understanding, you need to fix that. But once you do, and you can think through your work across this range, you're going to find more value on both ends of the spectrum. On the end where you're writing code that runs just a couple of times, you'll be doing way more and solving way more interesting problems in more interesting ways. On the other end, the most complex code can be explained to you by an agent; you can find the hot paths, learn how it's architected, and if something is wrong with it, use the agent to adjust things at a scale you couldn't before. I have found that I use AI entirely differently for the code that runs a lot and ships to users versus the code I'm using for one-off tasks. Realizing this, and realizing you can do better in both areas with AI, is the powerful unlock I've been trying to help more of you have.
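To ground that, here's the kind of disposable one-off being described: a hypothetical script that digs one number out of a big CSV, runs once, and gets deleted. The file name and column positions are invented, and the naive comma split would be a bug in shipped code; for a one-run script against data you know, it's fine.

```ts
// Hypothetical one-off: answer a single question about a big CSV,
// run it once, throw it away. Correct enough beats well engineered.
import { readFileSync } from "node:fs";

const rows = readFileSync("orders.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // skip the header row
  .map((line) => line.split(",")); // naive split, fine for known data

// How much total revenue came from one specific SKU?
// Assumes column 2 holds the SKU and column 5 the order total.
const total = rows
  .filter((cols) => cols[2] === "SKU-1234")
  .reduce((sum, cols) => sum + Number(cols[5]), 0);

console.log(`Total for SKU-1234: $${total.toFixed(2)}`);
```

Nobody reviews it, nobody maintains it, and before agents it usually wasn't worth writing at all. The hot-path code that ships to users deserves the opposite treatment.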
The author titles the next section, "I'm not going faster, but I'm doing better quality work." Why not both? That's my challenge. Faster for the one-off things, faster to solve problems, faster to do stuff that wasn't even worth doing before, while improving the quality of things that required too much manpower and of things you didn't fully understand and now can. Get more versions of a thing out before solidifying the one you want to commit to. The code that matters should be better quality because of AI, and the code that doesn't should be ten times more prolific because of AI. We should get both, and when those two get confused with each other, everything falls apart.

Let's wrap this up, because this was an awesome article. I'll be real: I want to let him have the last word. The productivity gains from these models are real. So is the friction, and the understanding, that came from engaging with the work on a tangible and frequent basis. Despite countless failed attempts at democratizing coding without understanding coding, we're faced with the reality that you cannot understand code without engaging with it. It's also become clear that if you don't keep engaging in writing it, you can lose touch with that understanding, which will in turn make you a less capable orchestrator in the first place, rendering this phase of AI coding a strange and needlessly stressful interlude. Perhaps I am worrying too much, but history contains lessons. This all feels like another large experiment we're running on ourselves. We went through a similar period with the introduction of social media without understanding the long-term implications, and we're now faced with attention deficits, among many other issues, on a wide scale. This time we're gambling with something much riskier.

He ends with a quote from Jeremy Howard: "People who go all in on AI agents now are guaranteeing their own obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent." I absolutely agree. This is a phenomenal article from Lars. Definitely check out his blog; it'll be linked in the description if you want to see his other posts. This is far from his only great one. Check out his post on indispensable value in the AI era, as well as "everyone can delegate now," both from this year alone. I'm very thankful to Lars for writing this. It's an awesome read, and I hope you learned something from it.

Don't let the AI take away your thinking. Let it advance your thinking, and you can build better, smarter things as you learn more and solve more problems. If AI isn't making you smarter, you are using it wrong, and I highly recommend you reflect before you lose those valuable skills. It's a really important time to be on top of these things, now more than ever; letting those skills atrophy will hurt your career long term. That's all I have to say on this for now.
