Maintaining a codebase with AI | The Standup
The video kicks off with a discussion of Cloudflare’s Next.js-like project (V-Next), featuring introductions to Dane (Cloudflare CTO), Steve, and Dylan, and a tease of the topics to come.
Cloudflare’s V-Next project shows how AI can power open-source maintenance and Next.js adaptation, while balancing guardrails, community signals, and performance goals.
Summary
Dane Knecht (Cloudflare CTO) and Steve Falner (Director of Engineering, Workers) join Dylan on The Standup to unpack V-Next, Cloudflare’s Next.js adaptation. The group explores why Next.js users want Cloudflare’s edge-first approach, and how AI assists with triage, code review, and open-source collaboration at scale. They describe a path from a small intern project to a production-grade effort that now uses AI to triage commits, review PRs, and surface issues across more than 50 committers. The conversation highlights V-Next’s role as a surface over the Next API with its own runtime, staying aligned with Next.js while hosting on Cloudflare’s infrastructure. They discuss the tension between forking Next.js outright and building adapters that run alongside it, and the way V-Next handles things like pre-rendering strategies, routing, and server components. A key focus is the guardrails around AI: relying on tests (end-to-end, unit, production smoke tests), de-slopification of complex templates, and human oversight for architectural decisions. The crew also makes room for broader tooling debates, including the impact of Vite as the bundler backbone and the ecosystem of tools that supports AI-assisted development. Overall, the episode paints a pragmatic picture of building with AI in a high-velocity, customer-driven open source project, while acknowledging the leadership role humans still play in steering direction and maintaining quality.
Key Takeaways
- AI-enabled maintenance allows over 50 committers to contribute by writing agent plans that implement features for V-Next.
- V-Next uses AI for triage, PR review, security checks, and issue surfacing by monitoring the Next.js repo and cross-posting relevant tasks.
- The project aims to match Next.js surface area and behavior while running on Cloudflare’s runtime, prioritizing one-for-one API compatibility at first.
- Guardrails include a strong test suite (end-to-end, unit, production smoke tests) and explicit human oversight for architectural decisions.
- Performance decisions (e.g., pre-rendering strategy) are under active discussion, with a focus on aligning with Next.js while leveraging Cloudflare-specific constraints.
- Vite is credited as the bundler backbone for broader ecosystem compatibility, with Turbopack seen as Next.js-specific and not widely adopted outside of that scope.
- The team treats AI as a fellow engineer—subject to the same reviews, linting, and code quality expectations to prevent unmanageable slop or “giant HTML strings.”
Who Is This For?
Developers and engineering managers curious about AI-assisted software maintenance, open-source governance, and edge-first frameworks. This is especially relevant for teams evaluating how to scale maintenance with AI while staying aligned with established frameworks like Next.js.
Notable Quotes
"The goal is pretty much everything we do we do it for customers."
—Dane explains customer-centric goals driving V-Next decisions.
"We maintain it with AI, right? We have AI bots that are doing triaging. We have AI bots that are reviewing all the PRs."
—Dane highlights the AI-backed maintenance workflow.
"If there's enough demand for something, we'll at least think about it."
—Dane on evaluating requests like undocumented Next internals.
"We ported a huge suite of tests... end-to-end tests, unit tests, and smoke tests."
—Dane/Steve describe the testing regime that keeps AI-driven changes safe.
"AI wants the same things as any human developer wants when they come into a project."
—Discussion about AI as a collaborator and its automation patterns.
Questions This Video Answers
- How does Cloudflare's V-Next approach Next.js integration on the edge with AI support?
- What are the guardrails used to keep AI-generated code maintainable in a large OSS project?
- What role does Vite play in modern bundling ecosystems compared to Turbopack?
- Can AI-assisted development scale in production without sacrificing quality and governance?
- What is the difference between forks, surface-area forks, and adapters in the Next.js ecosystem?
Tags: The Standup, The PrimeTime, V-Next, Next.js, Cloudflare, AI in software development, Open source governance, AI agents in code review, Next adapters, Vite bundler
Full Transcript
Welcome everybody to The Standup. We have quite the list today and we're going to be talking about quite the topic. It has been all over Twitter: the new uh Next.js clone from Cloudflare, V-Next. And with us we have Dane, uh Cloudflare CTO. Correct. Uh we have Steve. Steve, sorry, I do not know your position. And Dylan, of course; most people here know Dylan, streamer extraordinaire. Uh if you guys could take a moment, maybe introduce yourselves a little bit better, that would probably be the best. That would be better. That would be good actually.
Yeah. Uh anyways, sorry. Yeah, thanks a lot for having me on. Uh Dane Knecht, uh I'm the Cloudflare CTO. I've been at Cloudflare for a little over 14 years, uh and uh trying not to break the internet every day. Reasonable. Yeah, because when Cloudflare goes down, so does the internet. Uh I'm Steve Falner. I'm the director of engineering for Workers, uh, Containers, Agents SDK, a bunch of other stuff, about 100 people. Um, so responsible for a bunch of our, like, developer platform products. Uh, I've been on it for two years, so Dane's got a few on me.
I guess we're going in order of seniority. Dylan, go ahead. I am the, uh, the least senior by far here. Uh, I'm an engineer at, uh, Cloudflare. I've been here for four or five months now, uh, working on one of our AI and agent teams. And as chat knows, I do a lot of streaming, uh, so a lot of familiar faces. Also one of the founders of Tweet Mash and the real humans. So, I mean, Drew, Dylan... he was definitely not moonlighting, to be clear. Definitely not. There was nothing during that time period. I think there's a lot of ways we could introduce this topic, so uh I think it'd probably be best, just for everybody kind of getting up to speed: why don't you kind of give, like, a pitch of what is V-Next, and what is, or what was, like, the uh goal for you going into this, shall we say, experiment? The goal is pretty much everything we do, we do it for customers. Uh, it's, you know, for almost 5 years now
been one of the biggest requests: how do you make Next uh easier to deploy on Cloudflare. Uh, and, you know, we recognize we have a slightly different architecture. Uh, you know, we have Region: Earth, deploy once, goes everywhere. Uh, that applies different constraints to how you can build things on Cloudflare. I mean, there's always going to be trade-offs. Uh, so, uh, you know, we understand that, uh, you know, Next was built for more traditional environments, and, uh, no issue with that. Uh, and so, uh, you know, we've always been trying to figure out ways to make it easier to deploy on Cloudflare, because that's what the customers are asking for.
Uh and uh actually last summer Steve uh pitched uh the idea that uh let's have one of the interns uh just try this. Uh he kind of had the idea that, you know, a light surface area over the Next APIs actually could work. Uh he thought it through a little bit, and uh we brought on uh an intern who, uh you know, gave it a good shot. Uh you know, interns are only here for three months. He got a lot of the Pages Router uh surface area done, uh and uh you know, we kind of put that on the shelf, uh because we realized it was going to be quite an investment to actually... The lava lamp shelf or a different one?
I don't know. Probably probably more in Steve's garage. Uh okay. Got it. Yeah. Yeah. Uh and then I guess Steve, uh you know, a month, month and a half ago, two months ago, was I guess in his garage and uh said, "Hey, I have a new... I have an electric screwdriver now. Uh, let's see if I can give it a go, uh and finish this project out." Production's down? I don't concern myself with such matters. What do you mean production doesn't concern you?
I've shipped 37, nay, 38 features today. Teej always makes the mess and I always clean it up. I'm better than that. I don't have to be a janitor. Someone has to be the adult around here. Who Who is that? It's me, your West LA godfather. John Carmack? What? No, it's me, Trash. Oh, wait. Are you here to finally get management to understand software? What? I'm a West LA godfather, not a miracle worker. Oh, then why are you here? I'm here to tell you about Seer by Sentry, the world's first AI debugger that actually has access to all of your logs, stack traces, code, commit history, and more.
Code breaks, but you can get the fix faster with Seer. Who are you talking to? What? Back to the standup. Build with AI, fix with Sentry. Yeah, it was it was really, for me, it was all about AI, right? I mean, Next, and sort of applying it to Vite, was the problem that I ended up picking, but it was more about, like, how can I push AI in all these ways and these limits, and can I just take it further than I've taken any other project. And so it just sort of was the right problem at the moment.
Can can we go back to the intern for a quick second? What was the intern's reaction when you're just like, hey, make Next.js for Cloudflare, no mistakes? Like, what what was that experience like? He he was excited. It was one of our, you know, interns who was more ambitious uh than most, and he really wanted something big and juicy to try. And so I said, hey, here's a big juicy thing. Um, you know, see how far you get. And honestly, like, we scoped it initially just to Pages. I didn't think he would even get to touch App Router before, you know, 3 months.
And yeah, he made a decent amount of progress on Pages. Like, it did kind of work, um, at least the basics, and so yeah, it at least showed me that there was something there, like a thread to pull. And then it was sort of like, okay, now AI is here and can actually, you know, do the rest of the work. How do you make the decision to put something on the shelf at that point? Because you have a somewhat working version of it, you have a bunch of customers wanting to use it, but you have to maintain it afterwards. Like, at what point, what was the line that said, okay, we're willing to maintain things at this point? Because obviously you did not go forward with the intern's work. Yeah, well, first off, this thing was just Pages Router, right? Which is, you know, there's a bunch of people out there that still are on Pages Router and love Pages Router and want to keep using it. So, um, you know, I could see a world where we would have just, like, done a Pages Router-only version. Um, but again, this is where AI just kind of keeps coming back into the story. Uh, if you go look at the V-Next repo, we maintain it with AI, right?
We have AI bots that are doing triaging. We have AI bots that are reviewing all the PRs. We have AI bots that are doing security reviews. We have now AI bots that track the next.js repo and then open up issues back into our repo when we find commits that are relevant. So, it's like part of why we can keep doing this is because of AI, too. It's why we could do it, and then it's why we can keep doing it. I mean, in some ways, the fact that we're also doing this uh as open source, uh, it's kind of a bigger experiment on how you make open source work in the AI world. You know, a lot of open source maintainers are really struggling with uh all the pull requests they have to deal with, the comments, and, uh, you know, is there a more sustainable way? Uh, and so far this has been pretty sustainable. Like, I think there's over 50 committers, uh, who, actually, when we say they're committers, it means they wrote a plan for an agent to uh implement something.
Uh you know, they're still using all AI-implemented code, but, you know, they're still committers and they're helping move the project forward. Uh, but then, uh, you know, then you have their agents talking to the GitHub comments, uh, and so, you know, we don't have it all figured out, but I feel like there's a model here where, uh, you know, you're applying uh AI in a sustainable way to open source, and, um, yeah, I think that's kind of the beauty of open source. I had a, sort of like, related to that: so I did a lot of stuff uh with Neovim, like maintaining a fork that, like, still cared about, like, we were trying to still make it so that it wasn't, like, a completely different experience compared to using Vim, still applying, like, patches and a bunch of other stuff.
I think actually, you know, Neovim is experimenting with some stuff in some of the same vein of figuring out like, hey, how do we not have to spend all of our time like merging over patches from Vim? You know, like some people that's like their thing, Dylan, I think, you know, like like there's a few people who are like, how does this person even do all this? It's crazy. Um, but like how, you know, deciding like, okay, we're going to have this this fork and everything. Like how do you decide what features from next make sense to go into this?
How do you, you know, triage what things are most important, etc.? Like, because that part for me is the really difficult part about, like, having a fork of something: it's not so much, like, hey, we have this new piece of code, but you also need to, you know, decide how it continues to be a drop-in replacement for certain aspects or not. I'm interested in that part of the maintainership. Well, I mean, I think that's part of how you define the mission of it, right? Like, the mission, is it to be... it's, this is not really a fork.
This is, you know, just the Next API surface, the official API surface, put over a different runtime, really. So we actually are not accepting uh pull requests and feature requests for things that fall outside of that. Uh, we are following Next completely. Um, versus today, uh, we launched uh Mdash, which is... I would, you know, you probably need names for these different kinds of forks. You know, I call that more of a spiritual fork of WordPress, where, you know, uh, it can import WordPress, it looks a lot like WordPress in a lot of ways, uh, but, you know, it has a lot of unique things that uh go off the traditional path, and, you know, I think its roadmap over time will diverge more, like a traditional fork. Um, you know, I mean, there's a long history of how, you know, different forks evolve. Like, you know, WebKit came out of, uh, the, you know, Linux ecosystem, and then Chrome obviously uh made it, you know, Blink. Uh, sometimes they diverge, and, I mean, honestly, if we didn't have forks, we would probably still all be on IE, right? Uh, or the Node ecosystem, right? In that case, uh, Node, you know, obviously had the fork to io.js, and, uh, you know, it made some points, and uh it actually brought the community back together in a better place uh when it kind of re-emerged back together.
Um uh so I mean in general I think it's all healthy. I think uh uh it just you can do it at such a different rate uh and cost uh uh today that that's really the the biggest part of the story. What are what are you guys going to do if you have to kind of compete against some sort of surface area that goes against what you're doing? cuz I know you made like uh building was one of the big things where you made building work better because it doesn't build every single artifact if they have some sort of static thing.
Instead, it analyzes your traffic. It's like, "Hey, we're only going to build 10% of your assets because if we do that, we're going to cover 99% of traffic. It's super fast and your build time's not 45 minutes." Are there things they could add that you would have to say no to? Like, is there a world where V-Next is actually, like, a different fork? It no longer maintains the same surface area, or are you, like, hard on same surface area always? I would say right now we're hard on same surface area always, right?
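The traffic-driven build idea described above (build only the hottest routes until they cover, say, 99% of requests) reduces to a simple greedy selection. This is an illustrative sketch, not V-Next's actual implementation; the function name and data shape are invented for the example.

```typescript
interface PageTraffic {
  route: string;
  hits: number; // observed requests over some measurement window
}

// Greedily pick the most-visited routes until they cover the requested
// share of total traffic (e.g. 0.99 for "99% of requests").
function selectRoutesToPrerender(
  pages: PageTraffic[],
  coverage: number,
): string[] {
  const total = pages.reduce((sum, p) => sum + p.hits, 0);
  const sorted = [...pages].sort((a, b) => b.hits - a.hits);
  const selected: string[] = [];
  let covered = 0;
  for (const page of sorted) {
    if (covered >= coverage * total) break;
    selected.push(page.route);
    covered += page.hits;
  }
  return selected;
}
```

With a typical skewed traffic distribution (one home page dominating, a long tail of blog posts), a small fraction of routes reaches the coverage target, which is exactly why build time drops so sharply.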
I'm I'm not trying to create like drastically different things. I would I would say like never say never though, right? Like I mean we're going to see where this goes. If they introduce something that completely you know is like against what you know like we want to do or or against you know maybe like what we consider a best practice for architecture. I mean we'll consider it right. It's it's going to really depend on the thing and what what is needed. I I'll be honest, I've been surprised how many people have come in asking for us to fix bugs in Next or things that Next has said it won't do, right?
Um there's actually a pretty active discussion right now about a feature from Next 9 that apparently a lot of people really liked uh that they deprecated. That's, like, eight versions ago or something. What is the feature, like, what is the request going on? They switched from getInitialProps, which ran on the client and the server, to getServerSideProps. I think I'm hopefully getting the details right there. And a lot of people still like getInitialProps, right, for various reasons. So there's a vocal community that wants us to add getInitialProps.
I don't know if we'll do that one, since it's deprecated and we're still trying to keep true to Next, at least Next 16, like where Next is at today. Um, but it's the kind of example of things that are coming in that people are asking for. So there's, like, smaller ones, little behavioral tweaks, where somebody maybe says, "Oh, Next should have always done it this way. Will you just do it that way instead?" Um, but so far we've been really holding the line of, like, "Let's just do it the way Next does," because we wanted to just, you know, match one for one.
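For readers unfamiliar with the deprecated API under discussion: in Next.js's Pages Router, `getInitialProps` runs on the server for the initial request and in the browser on client-side navigations, while its replacement `getServerSideProps` always runs on the server. A toy sketch of the two contracts follows; there is no real Next.js code here, the `resolveProps` harness is invented, and real Next.js hooks are async (kept synchronous for brevity).

```typescript
type Props = Record<string, unknown>;

// Legacy contract: getInitialProps runs on the server for the first
// request AND in the browser on client-side navigations, so it cannot
// safely touch server-only resources (fs, db handles, secrets).
interface LegacyPage {
  getInitialProps(ctx: { pathname: string }): Props;
}

// Modern contract: getServerSideProps always executes on the server;
// the browser only ever receives the serialized props payload.
interface ModernPage {
  getServerSideProps(ctx: { pathname: string }): { props: Props };
}

// Toy harness showing where each data hook executes.
function resolveProps(
  page: LegacyPage | ModernPage,
  env: "server" | "client",
  pathname: string,
): Props {
  if ("getInitialProps" in page) {
    // Legacy hook runs wherever the navigation happens.
    return page.getInitialProps({ pathname });
  }
  if (env === "client") {
    // The client never runs the modern hook; in real Next.js it fetches
    // the props payload from the server (simulated by a marker here).
    return { fetchedFromServer: true };
  }
  return page.getServerSideProps({ pathname }).props;
}
```

The split execution environment is a large part of why `getInitialProps` was deprecated: the same function had to be safe in two very different runtimes.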
So there's there's a uh kind of a funny law in programming. We have a lot of laws in programming, right? One of them is Hyrum's Law, which is effectively that there will eventually be behaviors that aren't documented that people rely on. Have you run into any Hyrum's Law cases, like a non-documented feature, just the way that it was programmed, where people are like, "Hey, this doesn't work"? Like, I can launch this on Vercel, but I can't do it on Cloudflare because of some funny internal if statement that you just don't have captured.
I'd say some of that's coming from the community packages, right? So things that people have plugged into the Next internals, right? Like other vendors, um, other people supplying, like, you know, things that plug into your Next.js app automatically and hook things in. That's where we've probably hit the most friction: they either were knowingly plugging into undocumented Next internals, or they didn't know they were and now they're finding out, right, when they try to come use V-Next. Like when someone imports, like, do-not-import-this-or-you-will-be-fired or something like that, and they're like, well bro, but you guys aren't supporting do-not-import-this-or-you-will-be-fired.
That's literally the key to my workflow. You'll see it if somebody imports from next/dist, right? Like, that's, like, Next's, you know, internal distribution, right? That's where they usually end up in trouble. So do you guys support importing from V-Next's dist, or is that just something that you're, like, no, we will not do internals? Right now, no, we have not done it yet. But, again, never say never. Um, as Dane said at the beginning, like, we're doing what customers want. This whole thing is about, like, how can we give people a better experience uh running Next.js, like, everywhere and on Cloudflare.
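The `next/dist` friction described here is a textbook instance of Hyrum's Law: any observable behavior, documented or not, will eventually be depended on. A minimal illustration (the library, its `formatUser` API, and its `_internal` namespace are all invented for the example):

```typescript
function internalTitleCase(s: string): string {
  return s.replace(/\b\w/g, (c) => c.toUpperCase());
}

// v1: only `formatUser` is documented, but the internal helper is still
// reachable from outside — the moral equivalent of importing `next/dist`.
const libV1 = {
  formatUser(name: string): string {
    return internalTitleCase(name);
  },
  _internal: { titleCase: internalTitleCase },
};

// v2: the documented API is byte-for-byte compatible, but the internals
// were reorganized. Anyone who reached into `_internal` is now broken,
// even though no documented contract changed.
const libV2 = {
  formatUser(name: string): string {
    return internalTitleCase(name);
  },
  _internal: {},
};
```

This is the trap community packages fall into: the undocumented surface works right up until a different implementation (a new major version, or a reimplementation like V-Next) ships the same documented API over different internals.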
And so if there's enough demand for something, we'll at least think about it. The the spike the spike on new new users that day was, you know, one of the biggest uh one day spikes ever. like uh um I mean you can see that there's there's pent-up demand uh and you know that that's why we why we do things here. What's the like you know obviously right now it can't do literally everything? What's the path or like what would make you guys feel like it moves out of like an experiment into something that's like not experimental or you would say is like oh we're we feel good about this?
We're working through that now. So, probably the big one you already mentioned is proper um pre-rendering for everything. I mean, some people still want that, right? They don't want the percentage pre-rendering. And so uh we've got some stuff going on right now for that. Um, there are also a lot of little things that Next does a little differently, and we're trying to figure out, like, do we want to do it a little differently? You know, some of these things don't map cleanly between Vite and Turbopack, right? So there's stuff where we're trying to still work out some details.
Um, it's all small stuff, where, like, a hard navigation might happen in our case but it wouldn't happen in Next, because, you know, they do some sort of, like, soft navigation, you know, hijack into what the browser is doing. So there's a few examples of that, but it really mostly works, right? Like, if you look at the bulk of what people are, you know, doing with Next, that's sort of the long tail of new API features, but, you know, routing and hydration and server-side rendering all work. I'm interested a little bit in, sort of, just, like, you know, you said a lot of it is, you know, people with their AI agents. Like, I'm interested in knowing, like, how do you guys manage that? From, like, a hey, it's a fast-moving thing, we're experimenting, people are running this in production: how do we make sure we don't just, like, slopify our entire, like, app or release? Like, yeah, I mean, I know that it's all the rage these days to add 37k lines of code a day, and, like, if you're not doing that, you're getting left behind.
But I'm wondering how you guys, you know, prevent, uh, like, that situation from happening, from some of the stuff you said at the beginning. Yeah. So, I'll give a couple examples. Um, I mean, number one, we rely on the tests, right? You know, we ported a bunch of the next.js tests, not all of them, because they didn't all make sense to port, but we ported a huge suite of tests and we're still porting more. So it's just having that confidence that these tests are doing what we think they do, so we're not breaking users.
Uh, we have end-to-end tests, unit tests, and then we kind of have a whole suite now of, uh, like, smoke tests that we run against production deployments. Uh, the other thing, too, is there's been a couple times where I've had to sort of go in and un-slopify things with AI. So, um, probably the best example is that there was a part that was about a 2,000-line uh template string in there, where, like, a lot of logic got, like, you know, clobbered into this thing. Mhm. I'm not going to lie, it was pretty bad.
And I just sat down one day and I was like, we cannot have this in here. This is unmaintainable for humans and AI. So I spent the weekend kicking off a bunch of PRs, and just bit by bit got stuff out of there, split it out into its own modules. Um, and so, like, just going through and finding where the slop happens, and then sort of saying, no, no, I'm going to spend time de-slopifying that part. I've actually seen that exact thing many many times, even using Cloudflare, like, Hono, just saying, like, hey, AI agent, let's go build out this thing.
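The de-slopifying refactor Steve describes, splitting one giant template string into small modules, can be sketched like this. Both versions below are invented examples that produce identical output; the point is that the second is lintable, reviewable, and unit-testable in pieces.

```typescript
// Before: logic "clobbered" into one template string. At 2,000 lines
// this becomes unmaintainable for humans and AI alike.
function renderPageSloppy(title: string, items: string[]): string {
  return `<html><head><title>${title}</title></head><body><ul>${items
    .map((i) => `<li>${i}</li>`)
    .join("")}</ul></body></html>`;
}

// After: same output, split into small functions that can each be
// tested, type-checked, and reviewed independently.
const renderItem = (item: string): string => `<li>${item}</li>`;
const renderList = (items: string[]): string =>
  `<ul>${items.map(renderItem).join("")}</ul>`;
const renderHead = (title: string): string =>
  `<head><title>${title}</title></head>`;

function renderPage(title: string, items: string[]): string {
  return `<html>${renderHead(title)}<body>${renderList(items)}</body></html>`;
}
```

Because the refactor is behavior-preserving, the existing test suite (the guardrail discussed earlier in the episode) can confirm the two produce identical markup before the sloppy version is deleted.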
It's just like, gotcha, dog. It loves giant HTML strings, or JavaScript strings, and then it becomes impossible to start debugging, because then it's just this crazy cycle of how do you actually know? It's tough. It's a tough world. No linting, no type checking. It's just, like, the wild west. Yeah. So, yeah, just interpolating strings. Yeah. I'm like, that's my favorite way to do this. This is why I write full stack JavaScript: so I can throw away JSX and I can use template strings. Steve, I actually have a good question for you, because, like, you probably... Hey, Dylan, you're a guest here.
You're not an interviewer. What the heck? No, I'm just kidding. I'm just kidding. Go ahead, Dylan. I'll allow it. We'll allow it. We talked about this a little bit early, when you started the project, but I wanted to follow up and ask the question again now that you've, like... you probably have more experience than most people in driving a large codebase with exclusively AI, and, like, finding slop like this template string literal that you were just talking about. Like, have you found good strategies for, like, building tooling, for either the repo, CI/CD, or, like, harnesses, to, like, prevent the same mistakes over and over again? Or, like, what are your tips? Like, there's a whole conversation to be had about how to properly do, like, agentic development. Like, I'm very much in the camp of, like, I want to keep my work small, I want to keep it isolated, and I want to review every single line of code that goes in. But, like, V-Next being, like, a good experimental repo, like, how can we, like, really maximize this? Like, what have you found to work on putting good guardrails on the AI and getting good results? I mean, I mentioned already tests, lint, uh, stuff around... well, formatting matters a little less, but I think it still helps, right? So some of the diffs aren't, you know, ridiculous. So we've put a lot of effort into that. Um, honestly, like, I would say once a week I just go ask AI, like, hey, what's the sloppiest part of this codebase and how can we fix it, right? Um, I think that's an important part. We also... I do have a scheduled process that updates agents.md every few days, so it goes and looks through PRs, looks through the comments on those PRs where we had AI, uh, you know, finding stuff, and saying, okay, how can we make sure we don't, you know, make these same mistakes again?
Um, this stuff is pretty wild. Like, every time I think I'm like, oh, I'm kind of hitting a limit, this is like it can't figure itself out, the answer just kind of seems to be throw more AI at it, and it sort of, like, rescues itself, right, from the brink. Um, I don't know. So this has sort of become, like, a lab for us, for, you know, uh, how we do things internally. I mean, internally, you know, we are still very much in the, you know, engineers-write-the-code mode: uh, with AI, they commit it, and, you know, a much more traditional, still, you know, pull request uh process via human, um, uh, but with AI assistance, of course. Um, and, you know, we have strict rules, like, no vibe coding: you have to read every line of your code, and you basically attest to it when you do it. Um, uh, but, you know, we obviously see things in this project that we want to apply internally, and try to think of how you do the guardrails, not on, you know, a project that has one main maintainer, but, uh, versus, like, you know, a 2,000-person engineering org.
Uh, and the things that we found work really well there: you know, there's linting, but there's also, uh, you know, how you build. And so, like, we've been working internally a lot on what we call, like, the engineering codex, which is a set of RFCs that get kind of rolled up into, kind of, like, a ten commandments (well, a lot more than 10) of how you build everything. And then, uh, you know, an AI reviewer internally uh doesn't just do a security review, doesn't just do a, uh, you know, performance review. Um, it actually goes over the codex and says, you know, is this how you build things at Cloudflare, and kind of reviews those. And some of those things are things like, you know, kind of Steve's, you know, no long HTML strings. Uh, you know, obviously those are things it would catch, and, uh, hopefully kind of put the PR back, you know, get the AI PR reviewer back at it to review it.
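A guardrail like the "no long HTML strings" codex rule could be enforced mechanically in CI. The sketch below is a deliberately naive checker, invented for illustration: a real reviewer or lint rule would parse the AST, while this one just counts newlines between backticks and ignores escaped backticks and nested `${}` expressions.

```typescript
// Toy "codex" check: report the starting line of every template literal
// that spans more than `maxLines` lines of source.
function findLongTemplateLiterals(
  source: string,
  maxLines: number,
): number[] {
  const offenders: number[] = [];
  let inLiteral = false;
  let startLine = 0;
  let line = 1;
  for (const ch of source) {
    if (ch === "\n") line++;
    if (ch === "`") {
      if (!inLiteral) {
        // Opening backtick: remember where the literal begins.
        inLiteral = true;
        startLine = line;
      } else {
        // Closing backtick: flag the literal if it exceeded the budget.
        inLiteral = false;
        if (line - startLine + 1 > maxLines) offenders.push(startLine);
      }
    }
  }
  return offenders;
}
```

Wired into CI, a check like this fails the build (or posts a review comment) before a 2,000-line template string ever lands, which is cheaper than de-slopifying it afterwards.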
Um, but you can see that those kinds of guardrails can kind of get applied to a more pure agentic workflow in the future as well, too. But, I mean, you have to treat the AI as if it was another engineer. I mean, AI is kind of just making us all clean up a lot of the tech debt that we had, where our READMEs weren't up to date, our comments weren't good, our code wasn't structured great. Uh, um, you know, AI wants the same things as, you know, any human developer wants when they come into a project, really.
Uh, um, so realistically, you're obviously approaching V-Next much differently than you're approaching everything you do internally, right? Uh, I forget the name of the big edge thing... you had a name for it. It was in Lua for version one. It's now in Rust for version two. I forget; I've read so many of your blogs at this point. I know there's a name for it. But, uh, could you ever... would you be okay with people trying to take the same approach that you see in V-Next into that area? I mean, today, probably not quite, just because, I mean, uh, I think you'd have to do a whole different set of uh test suites. Uh, just the level of detail of writing something that is so uh specific to our hardware, so that we get, you know, every CPU cycle, you know, perfectly timed.
So, I mean, it's the only way we can run at our scale and offer so many free things: everything has to be kind of perfectly aligned there. Uh, that takes uh a lot of innovation, uh, you know... I think the AI is still good at how you copy existing patterns and replace them. I still haven't seen AI, you know, invent a new algorithm, uh, or invent a new way, where, like, you know, building FL, we also have patches that go all the way into the kernel that changed how IPv6, uh, IPv4 hash lookups happen, right?
Like, AI wouldn't know that in order to build this, we also need to do a kernel patch, uh, to change how we, uh, you know, do these socket lookups, right? Um, so, I mean, I think the level of complexity is a little different in those cases. Do you think any of it also is different? Like, for this one, a lot of it is there's already a well-defined, like, shape and stuff that you're trying to match against, versus, like, hey, we're starting a brand new greenfield project. Like, I've seen it go off the rails a lot more often when I'm like, "Hey, like, let's just write this." And I say, like, "Okay, go." And it spins for a little bit and it comes back.
I'm like, "This is so bad. I can't believe that this generated this as opposed." So I'm wondering like your thoughts on I is this like a particular kind of repo where it's special that this kind of strategy would work compared to like a regular I'm driving my own product. I'm driving my own thing. we don't have this set of constraints like an API. I I think I think today that it's still very much that that is a different process, right? Like I mean in some ways there already is a spec for this out there that AI knows how to kind of follow and uh go through.
Um, I mean, uh, you know, I can think of a few internal projects that uh have existing shapes that we largely can do that with, but, you know, so many things that we do are just, uh, you know, they don't have comparables out there, right? I'll, like, middle-ground this one, which is: I do think having the Next test suite was a really good starting point, but now we're into the territory of having to really make decisions about sort of more architectural things, where maybe the surface area is the same, but, like, underneath the hood, what are we doing?
Are we doing things optimally? Are we doing things that make sense for VIT and not things that make sense for Turboac? Like it's it's almost like we've had to bring the human back in a little bit more in the loop, right? Where now we're actually having more discussions about like, okay, is this the right way to do it or not? And not just letting AI go wild at the test suite. So, um I definitely if there's anything I've learned from this, it's that humans, in my opinion, aren't going anywhere in software development. Um but our role is going to change.
I still think that over the long term there will be more engineers, not fewer. We see people wanting to add engineers in our legal department, in our finance department; there are so many places where you need engineers to do the thinking, the architecture. I met with a bunch of interns this morning, and we talked about how what they'll do is going to be different from what someone who came in five years ago did, right? I encouraged them to work on their communication skills, their writing skills, their problem-solving skills, as opposed to just expecting to sit in front of a computer, take a Jira ticket, and solve it. You have to be ready to do higher-level things. But I think there's definitely going to be a role and a need for that.

Yeah, very little of my career has felt like I had a clearly scoped piece of work where I could just pull a Jira ticket and do it anyway.
I guess there were people doing that, but I always felt that was weird. I've never had the experience at work where it was just, "Here's the well-defined task. It's obvious what it does and how long it'll take. Go spend your time on that and come back to us later." That wasn't really part of any of my jobs after about year one of working.

I've had that happen exactly once, and looking back, that's also the same project where I did TDD, because the work was so well defined, we'd had to sit down for so long to come up with it. It's like, oh, TDD also works in this situation, which probably aligns closely with this type of approach: we could already build out all the constraints and the interface, and therefore everything just worked. If I'd had AI back then, I could have just said "go."

All right, I want to switch gears to the more interesting side of this discussion, which is the reception of this, you know, library slash... what do we call it, framework? I'm not really sure.
I never know what to call it. I don't want to use the F-word; people get really offended about it.

Or meta-framework. It's an MF.

It's an MFer.

We do not say that on this podcast. So, with that in mind, how was the reception? It's hard to suss out what people actually feel when you just go on Twitter, so I'm curious what that was like for you guys.

I can go. Honestly, I saw a lot of the Twitter stuff, but I tried not to pay too much attention to it.
I think when we launched this, I actually turned off my phone and went on a walk with my kids or something; I was like, I'm not going to pay attention to that for the rest of the day. I want to build something that people like. That's it. At the end of the day, I want to build software that's useful for people, so I've been very focused on the people actually getting value out of this. I mean, Dane said 50-plus contributors; at this point we've merged 400-plus PRs in a month. This is an active project. So the reception in my mind has been really good, because I see people using it and getting value out of it. And people outside of Cloudflare, to be clear. From day one this has worked anywhere; it's just a Vite plugin, so you can spin it up and run it on any server in the world and it just works. So I've been very focused on that: if I can make software that people enjoy using, I'm going to keep doing that.
As you said, it does come back to the customers for us. I mentioned that the number of new users and new customers that came in that day was a huge spike, which is gratifying; people are using it. Of all the different metrics for what success could look like, the number of active developers is really important to us, so it was a success on that front. All the Twitter stuff, it is what it is. It can be a little fun, but it's short-lived, and you move on and get back to work.

Really, the interesting thing about the reception for me has been around Vite. Basically all the frameworks except Next.js have standardized on Vite; that's just how it works now. Next has been telling everybody for years, "We'll use Turbopack, because we have to, because it's better or faster or all these things," and I think there's been this pent-up, latent sense of "What if you had just used Vite?" This answers that question.
When I built this, I deliberately did not spend any time on performance or on making build times fast; that wasn't what I was trying to do. I was trying to get breadth of coverage over the API. So it's a testament to Vite that literally the first version of this was six times faster. That's how powerful Vite is and how good it is as a bundler.

Yeah, what Evan built, and how many people have been able to build great things off of it, including Astro. This is really more a story about how great Vite is. None of this would have been possible without it; maybe it would have just taken a lot more tokens. The fact that a design like that has reached this kind of ubiquity and still remains performant is pretty amazing.

We've got, I'd say, a wide range of devs that watch the pod.
Can you explain, for people who are not in the webdev world, why there is a thing called Vite, and also why some people use Vite and others use other things?

Oh, I thought for sure you were going to say, "and why isn't it pronounced vite?" I threw that one up for you, Prime, because I knew it was your thing, and you didn't even go for it.

I'm trying to be polite. It's clearly "vit." Okay, keep going.

All right. It's funny you ask this on behalf of people who don't understand webdev, because do we have an hour for me to talk through the history of bundling on the web? Grunt, and then...

Okay, go back to Grunt, Browserify, Gulp.

I would love to start with Grunt. Gulp was one of the greatest build tools of all time, and I will die on that hill.

I agree with Dylan. Dylan is correct about this. It was a revolution. We can't just use make for all of this, to be quite honest.

I love that there's a group of people who agree with that. Now we're cooking. I do have a hot take about make: I think make is over-utilized in developer ecosystems outside of webdev, and severely under-utilized in webdev. That's my make hot take.

All right. So for those that have no idea what... you didn't want to follow up on the hot take?

Oh, we are so far off.
We need... okay, three-quarters of our audience are going to be like, "So what is Vite, and why do they keep talking about Gulp when I asked about Vite?"

That's what I'm saying. People actually don't know.

I'll do my best. Okay, so when you build a web application, you have things that all need to get bundled together and deployed: your HTML, your CSS, and your JavaScript. I'm papering over about ten other things, but let's say most of it is that. Now you need something that understands all of those files and the relationships between them, and that can then emit a deployable site you can put anywhere on the internet, and it'll just work.
Over the years there have been many takes and many iterations on this, with each thing replacing the thing before it, and the last big one was Webpack. Webpack was the king for a long time: you used Webpack to take your raw source files and turn them into something optimized to be an actual website. If we deployed raw source files to the web, we would have all kinds of problems. There are people who think you should just do that, but we're going to ignore those people for a second.
"Let's go, DHH." All right, keep going. Sorry.

So then Webpack, and again I'm offending people left and right here, there are people who think Webpack got slow, was hard to configure, and had all these problems. So there was a next generation of tools that tried to replace Webpack: things like Parcel, Rspack, various others. Vite is one of those, and Vite, to make a blanket statement, kind of just won.
So Vit is the underpinning bundler for most frameworks today. I would actually say any popular framework besides Nex.js uses Vit. Now uh what Nex did was build their own thing in Rust called Turopac that they saw as like a spiritual successor to Webpack. I think one of the original Webpack maintainers is very involved in Turopac and so they sort of had their own take on what this has looked like but nobody else ended up using it besides Next. Um and so that is where VIT comes into the picture. And so there's been this like sort of very simple fork in the ecosystem of whether you use next and turbopac or whether you use some other framework and you use vit or you just use vit by itself.
Vite is actually very capable on its own. You don't need a framework; it has very good primitives and APIs, and you can just deploy a React app on Vite and it works really well.

Does that make it a meta-framework?

That MF talk, Dane. We only have one rule, Dane, and you broke it. We don't say that word here.

Okay. To maybe help color some people's perspective: if a language was invented after about 2008, the language itself typically also provides all the tooling.
Go has its own tooling, Rust has its own tooling, Zig has its own tooling, and so on. With some of the older languages, you don't get the tooling thrown in, so it gets really complicated. Everyone knows about Java; Java was very, very difficult back in the day. C is very difficult too, with huge build systems devoted to it, and JavaScript is the exact same story. There's no single committee that owns the web: HTML, CSS, and JavaScript are each developed separately, falling under the W3C, TC39, and other bodies, with multiple groups all working at once, and so they get a lot of stuff done fast.

Yeah. So there's no "department of tools." Everyone kind of makes up their own tool, and some of them got popular. There was even Snowpack at one time, which I think got renamed to Parcel, potentially; I can't remember. There's been a lot of bundling in the web world.

If we just make a new one, do you think everyone will use ours?

Yeah, there's a small chance. TJ, I fully support it. Just one more framework, bro. Just one more framework.
If you do that, I will fork V-Next and then rewrite V-Next in the new thing.

V-even-next. Nice.

Well, wait, we'd have to change the prefix, right? Because the "V" is for Vite, I'm assuming?

Yes. X-Next, WeX-Next... there you go.

All right. So, how about the sentiment? Did you have any uphill battles there, or enough social pressure that you felt it? Not just "we put our heads down, we moved on, we had a great customer day," but were there any fires you had to put out, or things you had to spend a little more time explaining about your positioning?

So, we have a ton of respect for the Next.js team. We're working with them closely; a blog post recently came out about Next.js adapters that we helped work on with the team. They built a great product.
The testament is just the number of developers that use it. Obviously my tweet was a little tongue-in-cheek; it was not meant to offend anyone, because we do have that huge respect. So we spent a lot of time with the team, making sure they knew we were committed to the Next.js adapters, and that we would stay committed, because we've been working with them on that. And we're still very excited for that to come. Steve and his team, and Fred from the Astro team, all spent a lot of time with them on it, and we think they're doing great work with the adapters. A lot of this is how open source works. When the adapters get there, maybe we'll just support V-Next and pure Next.js, as opposed to, well, there's a whole other thing we haven't talked about, OpenNext. But I think this is the pressure: forking is how you push on innovation when there isn't complete community support for the direction you want to take something in.
All right. I also wanted to talk a bit more about something TJ alluded to earlier: this whole approach of using AI almost purely on the project. Have there been any downsides, besides some sloppification? Anything you're finding frustrating about a more purely AI approach to building something that's largely customer-facing?

Yeah, I'd say sometimes I feel like I'm babysitting everything a little too much. I've got maybe ten different workspaces going, all chugging along in the background, and I lose track of what's doing what. Then I go check in on one and I'm like, "Oh, what did you do here? You just went down some weird path." So it definitely feels like right now I'm sort of a glorified AI babysitter. I want to believe that as these agents get better they'll do more of the babysitting for themselves, but right now it feels very much like I still have to sit there minding them every day.

Sounds like managing your own team, Steve.

It's sort of interesting, because I gave a talk at an event in SF about this, contrasting the difference between managing humans and managing AI.
AI is better at taking feedback than humans are, right? You can just tell an AI it did a bad job and it won't get mad at you. Somebody asked me whether my management skills translated, and I said not really. Humans are squishier; we need more thoughtful feedback. With AI you can just say, "This sucks. Do it again," and it'll just do it again. So it's been interesting; there's less overlap there than I thought there would be.

We were just laughing about this recently. Prime and I were working on something, and he told it, "Hey, move this thing to the left." It did. Then, "No, put it back on top." I was just imagining a person on the other end, like yelling at some junior dev: "Move the couch over an inch to the left. No, your other left." But the AI doesn't care.

It doesn't care at all. It'll just say, "Okay, I'll put it back on top. No problem."

It's interesting to see the range of outcomes too. If somebody writes a document for me and I say, "This isn't very good, you need to make it better," maybe it'll get a little better, but they're not going to rewire their brain overnight. AI really can, though: you can literally say, "No, this is entirely wrong. Start over," and it can do something entirely different, which is a level of adjustment in response to feedback that's hard for humans. It's a bit rambly, but I don't have a takeaway other than that I'm still trying to figure it out too.
There's this idea that how you write is how you express your opinions, how you think through things; one of the big signs of intelligence is your ability to write out your thoughts, because you have to organize them. But without the active practice of programming, and with more and more of the feedback about, say, the interfaces you work with also being vibed out, people can't really know the interfaces intimately. Maybe you're not as familiar with them because a lot of it is vibed out. So how do you come up with better ways to work when these layers are getting fuzzier, because you don't look at them every day? We can all agree we look at them less than we did, say, five years ago. How do you know you're even building in the right direction without that constant, everyday, in-the-trenches thinking about it, and not just a high-level view?
That's where experience probably comes in, right? I'm almost at a bit of an advantage there, because I've been in this space, working on these problems, for so long that I have a lot of strong opinions about what should be done and how, even if I'm not writing code every day. It hasn't been my job for a while, but I still have opinions about the JavaScript ecosystem and how things should work. And there have been times when the AI will happily go off and build something that is wrong if you just let it. That has happened on this project, and I've had to step in and say, "No, no, I'm 100% sure we should do it this way, because I have fifteen years of experience that tells me so." And honestly, that's where some of the other people involved come in. We have some fantastic contributors, but there have been a couple of cases where someone just went and built a thing because somebody asked for it, or because AI said "just do it this way," and they didn't have the depth of experience to say, "No, this is wrong."
"We're not going to do it this way."

How do you think people are going to get that experience in the future?

I think about this a lot, and I don't know the answer. I have thoughts on it. So I have a more mid-level junior dev...

Oh, sorry, we're out of time. Hey guys, if you liked this episode, you can watch the rest of it on Spotify. And don't forget to like and subscribe. See you later.

[Outro song] Booting up the day, fighting errors on my screen, terminal coffee and living the dream.