37,000 Lines of Slop
Challenges the narrative that AI-generated lines of code equal real quality and productivity, and warns about the dangers of treating every AI-generated line as valuable work.
AI can boost productivity, but 37,000 lines a day is a warning sign: code quality and accessibility must not be sacrificed for speed.
Summary
Syntax host Scott uses a blunt example to critique the hype around AI-generated code. He highlights Garry Tan's claim of writing 37,000 lines of code per day across five projects to illustrate how raw quantity can mislead about real value. The segment examines a production blog built with GStack, pointing out issues like test files shipped in the production bundle, uncompressed multi-megabyte images, missing alt attributes, and the entire page content rendered twice in the DOM. Scott argues that this isn't merely a niche frustration but a real risk to accessibility and long-term maintainability. Mario Zechner's essay on slowing down AI output serves as a recommended read, advocating limits on daily code generation and a more deliberate review process. The host stresses practical steps: use deterministic tools to locate dead code and hotspots, read every line of AI-generated output, and resist the urge to treat lines of code as equivalent to real productivity. In short, AI should augment human capability, not replace rigorous coding practices or accessibility concerns. Scott closes by reaffirming his own approach: slow down, review, and guide AI output rather than outsourcing thinking entirely to machines.
Key Takeaways
- Claiming 37,000 lines of code a day is not a badge of productivity; it obscures quality and maintainability (as evidenced by the Garry Tan thread and the scrutiny of his production blog).
- The production blog cited by Gregorian demonstrates concrete slop: 300 kilobytes of test code sent to every user, uncompressed large images, and a DOM rendered twice for mobile and desktop.
- Mario Zechner's argument for slowing down AI output, including self-imposed daily code-generation limits, provides a pragmatic blueprint for safer AI-assisted coding.
- Deterministic tools such as which (dead-code, duplication, and hotspot analysis) help reveal the hidden inefficiencies in AI-generated code and should be part of a developer's workflow.
- The recommended practice is to actively read and refine AI-generated code, combining AI with human judgment to avoid “slop” and improve reliability, accessibility, and maintainability.
Who Is This For?
Software developers and engineering leaders who are evaluating AI-assisted coding practices and want concrete strategies to avoid producing low-quality, inaccessible, or bloated codebases.
Notable Quotes
"37,000 lines of code a day isn't a flex, it's a warning."
—Core argument: volume does not equal value and signals potential quality issues.
"There is simply no one awake behind the wheel of some of these cars."
—Metaphor illustrating dangerous automation risk without proper oversight.
"I'm slowing down. I'm reading the code and I'm understanding it. But I'm also guiding it and shaping it and controlling its output."
—Presenter's recommended practice for responsibly using AI-generated code.
"This should make you very aware that make it good, no bugs, here's some skills, is not enough in April of 2026."
—Emphasizes that current AI outputs require more than basic quality claims.
"AI should enable us to do things that we couldn't do before by doing them better."
—Balanced view: AI as enabler, not a blanket solution for sloppy code.
Questions This Video Answers
- How can I evaluate AI-generated code for production readiness?
- What are practical steps to slow down AI code generation without losing productivity?
- What tools help detect dead code and bloat in AI-assisted projects?
- How can accessibility be preserved when using AI to generate web content?
AI in coding · Code quality · Software accessibility · Garry Tan · GStack · PI.dev · Mario Zechner · which (tool) · Lighthouse scoring · DOM performance
Full Transcript
We're witnessing a rise in AI psychosis in coding, where a narrative is being pushed that lines of code equal actual productivity, or even worse, quality, where we're expected to believe that every single line of slop that the AI turns out is a valuable piece of any program. AI has empowered people who don't know how to code to create things, and that's wonderful. But it's also empowered those same people to do unspeakable horrors on the web. Garry Tan, the CEO of Y Combinator and the creator of GStack, the suite of markdown files that is so powerful that it built a blog unlike anything this world has ever seen.
Now, much has been made on Twitter about how productive Garry is, bragging that he is able to write 37,000 lines of code a day. And he's bragging about that taking place over five projects. Now, 37,000 lines of code is obviously way more than anyone could reasonably read, if they were capable of reading in the first place. Now, this is the exact type of thing that somebody who has no idea what they're talking about would brag about. Because with 37,000 lines of code, you might think, and we'll certainly get comments saying this, that, well, the AI can validate it, or you can have tools in place to fix it.
But if you don't know what's going in there, who knows what exactly is going into your codebase? You sure as hell don't. And so that's why I found this thread from Gregorian to be so interesting. This user audited Garry's website after he bragged about 37,000 lines of code a day and a 72-day shipping streak. Here's what 78,000 lines of AI slop actually looks like in production. And this blog is, well, it's a blog, right? Like, it's a blog. What can I say? This site has some very funny stuff going on. For instance, the homepage ships 28 test files to every single visitor.
You can verify this just by filtering by test, clicking on any of these, looking at the file, and seeing, wow, this is a real test file. They've just pushed all of their test files into the production bundle somehow. I don't know how they did it, but they did it. Now, you might be thinking, Scott, 89 kilobytes, 19 kilobytes, whatever. This is 300 kilobytes of test code sent to every single user. That's crazy. There are countless other odd files, such as the literal Rails hello-world scaffolding controller being loaded. Other fascinating things here are classics; I'm talking issues that people had back in the early 2000s.
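A leak like that is easy to catch mechanically before it ships. Here's a minimal sketch in Node; the file list and filename patterns are illustrative, not Gregorian's actual audit method:

```javascript
// Flag test files that leaked into a production bundle manifest.
// Patterns cover common naming conventions (*.test.*, *.spec.*, __tests__/).
const TEST_PATTERNS = [/\.test\.[jt]sx?$/, /\.spec\.[jt]sx?$/, /__tests__\//];

function findLeakedTests(bundledFiles) {
  return bundledFiles.filter((f) => TEST_PATTERNS.some((p) => p.test(f)));
}

// Example: a (hypothetical) manifest of files shipped to the browser
const shipped = [
  "assets/index-abc123.js",
  "assets/post.test.js",          // should never reach production
  "components/__tests__/nav.js",  // neither should this
  "assets/styles.css",
];

console.log(findLeakedTests(shipped));
// → ["assets/post.test.js", "components/__tests__/nav.js"]
```

A check like this belongs in CI, failing the build the moment a test file shows up in the output directory.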
You're serving an uncompressed PNG that's two megabytes and another one that is 1.99 megabytes. There's no excuse for that. Every single modern framework has a way to fix image rendering. But I've saved the best for last, because we also have a 520-kilobyte Trix rich text editor, a WYSIWYG, which, as this person speculates, is probably spillover from the back end. Again, how is that even possible? 47 images with no alt tag, and the very best: the entire page content rendered twice in the DOM, once for mobile and once for desktop.
The DOM's so nice he needed it twice. Now, I want to be clear, because this video isn't just an opportunity to dunk on this garbage. It's also a chance to answer the question that many people pose to this exact scenario: does it really matter? Hey, the site works. It even somehow gets an 80 on Lighthouse despite having numerous accessibility issues, right? So, does it matter if it works, even if it's hot garbage? Yes. Emphatically, yes. Because even though we all have fast internet connections now, it doesn't mean that we should just be pushing out slop to every single person that visits our work.
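The missing alt attributes, at least, are trivially machine-checkable. A real audit would use a DOM parser or a tool like axe-core; the regex pass below is just a sketch of the idea, with hypothetical names:

```javascript
// Count <img> tags missing an alt attribute in raw HTML.
// Crude on purpose: this illustrates the check, not a production auditor.
function imgsMissingAlt(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/i.test(tag));
}

const page = `
  <img src="hero.png" alt="Team photo">
  <img src="chart.png">
  <img src="logo.svg">
`;

console.log(imgsMissingAlt(page).length); // → 2
```

Forty-seven missing alt tags would never survive even a check this naive running on every commit.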
AI should enable us to do things that we couldn't do before by doing them better, not shoving 37,000 lines of code into something just because we can, like it's a clown car. But also, just looking at the output here, knowing what went into it, and knowing that this GStack is being used as a series of markdown skills that are begging the AI to write good code, and it still produces this. This should make you very aware that "make it good, no bugs, here's some skills" is not enough in April of 2026. Now, inevitably, there will be comments saying that in the future the models will all fix this, the better model will fix this, or it doesn't do it well yet.
Yeah, no. I'm talking about right now. And right now, tools like this are Johnny Knoxville in a slop cannon, shooting God knows what onto the internet. There is simply no one awake behind the wheel of some of these cars. And that alone should scare you. So, enough complaining. What should we actually do about it? Because there's no way that Garry didn't point his army of GStack skills at this thing, including his QA skill. Hilariously enough, there's no accessibility skill in here, which is pretty evident just by looking at the missing alt tags. Now, that brings me to a blog post that I really enjoyed from Mario Zechner.
Now, before I get into this, you should know that Mario is not some kind of AI hater. Like, many people who hear any criticisms of this stuff immediately go to "they're a hater, they don't know what they're talking about." Mario built, for my money, the best AI harness out there, which is Pi at pi.dev. Pi.dev is an incredible project if you haven't tried it out. It's an AI harness that is really extendable. It's pared down in its scope, but in addition to that, it has a really great API for extensions. That means that you can build Pi to be what you want it to be.
And I personally have really enjoyed his work here. But his post here on slowing down is well worth your time to read. I'm not going to go over the whole thing by any means, but I'll give you the gist. His point here is largely: slow down. Let the AI do the boring stuff, the stuff that won't teach you anything new, and then you can evaluate what it came up with, take what you want, and finalize the implementation either with an AI agent or yourself. He even advocates here for setting self-limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
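That self-limit idea can be sketched as a tiny budget tracker. The function names and the 500-line budget below are made up for illustration; this is not Mario's actual tooling:

```javascript
// Track AI-generated lines accepted per day; refuse past a review budget.
function makeLineBudget(maxLinesPerDay) {
  let day = null;
  let used = 0;
  return function accept(generatedCode, today = new Date().toDateString()) {
    if (today !== day) { day = today; used = 0; } // reset at the day boundary
    const lines = generatedCode.split("\n").length;
    if (used + lines > maxLinesPerDay) {
      return { accepted: false, remaining: maxLinesPerDay - used };
    }
    used += lines;
    return { accepted: true, remaining: maxLinesPerDay - used };
  };
}

// Budget tuned to how much you can genuinely read, not how much the AI can emit
const accept = makeLineBudget(500);
console.log(accept("line1\nline2\nline3", "2026-04-01"));
// → { accepted: true, remaining: 497 }
```

The point isn't the mechanism; it's that the ceiling is set by your reading speed, not the model's generation speed.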
And that's where I've largely landed too. I've also taken some time to dive into more deterministic tools for helping me find slop in existing codebases, things like which, which analyzes your codebase for dead code, duplicate lines, complexity, and hotspots. Because we all know, if you've been working with AI, that AI will solve a problem locally 100 times before it even considers solving it globally, despite you begging it to do the opposite. I'm actually planning a video right now detailing how you can use this with AI to reduce the amount of slop it produces.
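The duplicate-line half of that kind of analysis can be sketched deterministically in a few lines. This is a crude signal compared to what real analysis tools compute, and none of these names come from any actual tool:

```javascript
// Find non-trivial lines that repeat across a set of files:
// a rough proxy for the "solved locally 100 times" pattern.
function duplicatedLines(files) {
  const seen = new Map(); // trimmed line -> occurrence count
  for (const source of Object.values(files)) {
    for (const raw of source.split("\n")) {
      const line = raw.trim();
      if (line.length < 10) continue; // skip braces, blanks, short noise
      seen.set(line, (seen.get(line) || 0) + 1);
    }
  }
  return [...seen].filter(([, n]) => n > 1).map(([line]) => line);
}

// Hypothetical repo where the AI re-derived the same logic in two files
const repo = {
  "a.js": "const total = items.reduce((s, i) => s + i.price, 0);",
  "b.js": "const total = items.reduce((s, i) => s + i.price, 0);",
  "c.js": "export default 1;",
};
console.log(duplicatedLines(repo));
// → ["const total = items.reduce((s, i) => s + i.price, 0);"]
```

Real duplication detectors compare token sequences rather than exact lines, but even this crude version surfaces copy-pasted logic worth consolidating.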
But at the end of the day, the tools aren't going to save you. 37,000 lines of code a day isn't a flex, it's a warning. We need to be careful about who we're accepting advice from. There's a whole new world of grifters and AI coaches and people who do not understand what they're putting out trying to tell you what the best way to code is. For me, what I'm doing, I'm utilizing AI. Absolutely. But what I'm also doing is largely what Mario is suggesting. I'm reading every line of code that it produces. Still, I'm slowing down.
I'm reading it. I'm understanding it. But I'm also guiding it and shaping it and controlling its output. And I'm not ignoring the fundamentals of the web simply because I can slop away any bugs or issues that I might have. Will a better model fix this? Sure. But what am I doing right now? I'm slowing down. I'm reading the code and I'm understanding it. I'm not outsourcing my brain.