Rebuilding Next.js with AI in One Week: 4x Faster Builds (vinext Explained)
An overview of Cloudflare content this week, highlighting posts on post-quantum encryption, ASPA routing, and a call for a modern Streams API, plus related blog entries on security tools and UI redesigns.
Cloudflare’s Steve Falner demonstrates an AI-assisted one-week rebuild of Next.js on Vite, delivering faster builds and smaller bundles, with open-ended potential for AI-first development across ecosystems.
Summary
Steve Falner of Cloudflare takes us through an ambitious week-long experiment: rebuilding Next.js on top of Vite with heavy AI assistance using OpenCode and related tooling. The goal wasn’t to herald a finished product, but to explore what AI-first development could look like, including how dependencies could be rewritten or migrated with AI help. Falner candidly shares how the effort started as a gut check on toolchains, then evolved into a functioning prototype that achieved 4x faster local builds and a 57% reduction in bundle size in early benchmarks. He emphasizes Vite as the enabling foundation and notes that the bulk of the work revolves around integrating AI-driven workflows rather than reinventing the wheel. The project remains experimental, with ongoing PRs, community contributions, and plans to broaden deployment targets beyond Cloudflare Workers via a pluggable provider layer. Falner also discusses costs (roughly $1,100 in tokens) and the behavioral shifts AI can drive in both coding and code reviews, illustrating how AI can take over many repetitive tasks while developers focus on higher-level design and strategy. Finally, he reflects on broader implications for AI-assisted software development across ecosystems and outlines next steps, including static generation approaches, traffic-aware pre-generation, and broader ecosystem support. This video is part of Cloudflare’s broader exploration of security, performance, and AI-enabled tooling.
Key Takeaways
- Vite provides a strong foundation for AI-assisted framework experimentation, enabling Next.js-like work to run atop a different core toolchain with practical gains.
- The prototype achieved 4x faster local builds and a 57% smaller bundle size in initial benchmarks, highlighting potential AI-driven optimizations in tooling.
- OpenCode plus Claude Opus 4.6 and a multi-worktree setup enabled rapid, parallel iteration (30–40 worktrees, several agents).
- Cost efficiency was surprisingly favorable in practice, with an approximate token spend of $1,100, underscoring the economic viability of AI-assisted porting work.
- The experiment remains experimental and iterative, with ongoing bug fixes, community PRs, and a plan to build a pluggable deployment layer for multiple providers.
- AI-driven code review and generation are central to the workflow, with Falner noting that asking AI to review code multiple times can improve context and quality.
Who Is This For?
Essential viewing for frontend engineers and platform teams curious about AI-assisted migration and building (or porting) frameworks. Great for teams evaluating AI-first workflows, Vite/Next.js users exploring alternative toolchains, and developers interested in rapid prototyping with AI guidance.
Notable Quotes
""I just sort of started throwing open code at the test suite for Nex.js and said, 'Let's just do it on beat. Let's make it work.'""
—Falner describes the spontaneous, AI-driven approach that kicked off the experiment.
""All the code is reviewed by AI in sometimes multiple times... if you just tell AI to review code like three times in a row, it actually just does a better job.""
—Highlights the heavy reliance on AI for code review within the workflow.
""The biggest benefit right now is probably around the developer experience... better local dev experience, build speed and things like that.""
—Points to practical gains in day-to-day development, beyond raw benchmarks.
""We’ll keep investing in it... ensure we have people working on this and make sure we fix bugs as they come in.""
—Falner outlines long-term commitment and maintenance plans for Vinext.
""If you want an AI-first approach, you can spend tokens and make it your own... there’s no magic in the prompts.""
—Emphasizes accessibility and realism of AI-assisted porting and customization.
Questions This Video Answers
- How does Cloudflare’s Vinext approach compare to traditional Next.js migrations?
- Can AI actually rewrite a framework like Next.js on Vite in a week?
- What are the cost implications of AI-assisted development for large projects?
- What is traffic-aware pre-generation and how could it work in practice for SSG?
- Is OpenNext still the recommended path for production readiness and stability?
Cloudflare, Steve Falner, vinext, AI-assisted development, Next.js, Vite, OpenCode, Cloudflare Workers, AI code review, open source tooling
Full Transcript
Hello everyone and welcome to This Week in NET. It's the second episode of the week, and it's all about those thinking of building with AI. We have this amazing use case of how we rebuilt Next.js with AI in one week. In this case it was one engineer doing that, using OpenCode and other tools, and for that we have Steve Falner, the engineer, in this case engineering director, who actually built this in only one week. We talk about how AI was important, and how this is actually an experiment, part proof of concept and part glimpse into what AI-first development looks like. There are improvements taking place, and we also answer a few questions that came in on social media. So stay tuned for that as well.
On the Cloudflare blog this week, also worth mentioning: not only did we have these two blog posts, but Cloudflare One became the first SASE offering modern post-quantum encryption across the full platform. Post-quantum encryption is really important because quantum computers are coming, and those will bring new challenges in terms of encryption, so having a SASE offering with post-quantum encryption in play is quite important. Also this week, on Friday, we started a set of blog posts about security, and why not start with Cloudflare Radar's newly added tools? One of them is for monitoring post-quantum adoption, quite important in this day and age: be quantum-ready.
So there are more details on that. There are also key transparency logs for messaging, and ASPA routing records to track the internet's migration towards more secure encryption and routing standards. What is ASPA, you may ask? That's a good question. It's Autonomous System Provider Authorization, and it's the industry adopting a new cryptography standard designed to validate the entire path of network traffic and prevent route leaks. Quite important, and now in addition to the security insights on Cloudflare Radar. Also on the blog this Friday, we have a very cool post by James Snell called "We deserve a better Streams API for JavaScript."
The Web Streams API has become ubiquitous in JavaScript runtimes, but it was designed for a different era. This blog post shows what a modern streaming API could and should look like. There's also a very interesting one about the most-seen UI on the internet: redesigning Turnstile and challenge pages. We serve 7.6 billion challenges daily, and here's how we used research, AAA accessibility standards, and a unified architecture to redesign the internet's most-seen user interface, the dreaded CAPTCHA that no one wants to deal with. And last but not least, a blog post called "Toxic combinations: when small signals add up to a security incident."
Minor misconfigurations or request anomalies often seem harmless in isolation, but when these small signals converge, they can trigger a security incident, known as a toxic combination. This blog post explains how to spot the signs. Without further ado, here's my conversation with Steve Falner. Hello, Steve. How are you? I'm doing great. How are you? I'm good. I actually recorded a segment with Matt Carrie about Code Mode this week. He's in Lisbon, so closer to me. Yeah. Cool. Matt's great. Matt's on my team. Exactly. For those who don't know, can you give us a quick run-through of your experience at Cloudflare so far, when you joined, and what's your role, really?
So I've been here for almost two years now, just about to hit two years. I am the director in charge of Workers. So that's the whole Workers platform and a bunch of bits and bobs that are Workers-adjacent, like containers and agents and things like that. I've lost track at this point of how many teams, but it's about 80 people. I love working here. It's been a blast. Workers is a very cool product and that's why I joined, so I'm excited to be working on it. This week you had a very viral blog post, published on Tuesday, about work that was done over the period of a week by you specifically, with definitely some help from AI.
A few weeks ago we had a guest here also talking about a project that took a week, in that case Markdown for agents; this was also a week. What can you tell us about how this idea of rebuilding Next.js came to be, and why is it relevant? Yeah, definitely, I can talk about that. This idea has been floating around in the back of my head for quite some time. Next.js is basically the most popular React framework out there, almost synonymous with React at this point.
It has its own bundling tool system, Turbopack, kind of its own bespoke toolchain, and Vercel has invested very heavily in this in the past three or four years, even going back further. In that time we've seen a different take on that toolchain evolve in Vite, and most other frameworks use Vite. So I think at some point people start asking the question: well, what if Next.js was just on top of Vite? Vite has all these plugins, kind of a whole ecosystem around it, versus Turbopack, which is sort of only used by Next. And Next is a complex framework; there's a lot going on there, a big API surface area. So the idea of this just kind of seemed impossible, right? We actually discussed it internally maybe about a year ago. We were trying to figure out how to best support Next users on Cloudflare, how to make them happy, and this idea came up. We batted it around and said, that's just not going to work, right? We're not going to be able to invest that level of resources in it. Fast forward to now: AI just got really good, right?
That's kind of what happened. I've been using AI for a ton of things at work lately, as I'm sure everybody has, and I've been pushing the limits of what it can do. I think it was literally Friday afternoon, Friday evening, about a week and a half ago. I was sitting there and said, you know, let's just see what happens, right? I'm always interested in these new ways of using AI, like vibe coding or Ralph Wiggum loops, and I've been trying all these different new techniques.
New skills and things like that, right? And I just sort of started throwing OpenCode at the test suite for Next.js and said, "Let's just do it on Vite. Let's make it work." About 24 hours later, I was like, "Oh, wow, this might actually just kind of work." I mean, there are edge cases and it's a week-old project, but I had the App Router working within a couple of days, and I was like, "Okay, this might actually be something that has legs." One of the interesting things is that over the past few months we've seen this upgrade in what the tools, especially for coding, can do. Can you share with us the setup that you typically use, in terms of the models, why OpenCode specifically as a UI, and what difference those really make?
Yep. So OpenCode is the primary way I do this. I've pretty much used all the tools out there. I actually don't have a lot of preference right now; it's just what we use at Cloudflare, because we kind of have our own OpenCode setup. I mean, I've used Codex and Claude Code, and those have some great things too. OpenCode combined with Claude Opus 4.6. Actually, I think 4.6 came out while I was working on this; I think I started with 4.5 and then moved to 4.6 when that came out.
And that's pretty much the bulk of it. I've dropped in Codex here and there to double-check some things, or to get a different take on a problem, but largely that's the setup I use. OpenCode desktop especially has a really good worktree setup, where you can set up worktrees really easily, and sometimes I've got 30 or 40 worktrees going, maybe five, six, seven agents running in parallel, all working on different problems. So it makes a really nice interface for working in that style. When was the period where you saw, okay, these things are really making a difference, doing real work, making things work properly?
When was that? Because a year ago, I think, it was not as it is right now. I agree. Even a few months ago. I would say, really similarly to a lot of other people in the industry and at Cloudflare, it was kind of mid-December into January. I came back from the holiday time, I got to play with it a little bit over the break, and I was like, "Wow, these things got really good." Before that I had used a lot of this stuff, but I just never was that impressed.
I would say I was kind of in the mild AI-skeptic camp, and then once I started using it, like I said, mid-January, I was like, whoa, these tools are really capable of doing a lot. I made a lot of mistakes and did a lot of stuff it didn't do well, but I started finding out what it was good at and started doubling down on that. And I would actually say that my first month was probably using it for a lot of non-coding things, right? So, I'm a manager, right?
It is actually a little bit odd that this project came from me; it just sort of happened to be my idea and I ran with it. But I use OpenCode a lot for tracking meeting notes and keeping track of various writing I'm doing, and projects. I have essentially a folder full of markdown files that is all organized to keep my brain intact. That was really probably my first few weeks with this: non-coding use cases. And then once I started doing that, I was like, ah, I have some extra time, let me try and see if I can do some other stuff.
One of the things that is quite clear from this project is how quickly it was implemented, and of course we have a warning specifically in the blog that this is experimental and under heavy development. So it's definitely a proof of concept where we show what we were able to do with AI specifically, and the team is actually adding improvements. So it's continuing work, right? Yes, correct. We've merged a ton of PRs already, fixed a ton of stuff, and all still via OpenCode.
I think a lot of people are focusing on this because of Vite and Next, and I get why. It's a very interesting story, and it's sort of a simmering question that I think has been out there for a long time in the front-end space, but it's frankly not the part that's interesting to me. The part that's interesting is: what does AI-first development look like? What does it mean to live in a world where your dependencies are easily rewritten or replaced or migrated, or any of those kinds of things, right?
What abstractions are going to fall away? What abstractions are we going to keep? I don't know the answer to that; I don't think anybody does right now. The line is going to get really blurry. So that's the interesting part to me. And then also, we've really gone AI-first for this entire experience. This is for us to see, not just for me but as a team, how far we can push AI. Like I said, all the code is pretty much written by AI. I think that's just true, with very few exceptions.
All the code is reviewed by AI, in some cases multiple times. This is maybe something that is a little unobvious to people, but if you just tell AI to review code like three times in a row, it actually just does a better job, right? It picks up more context every time and then sort of lands on something good. So yeah, AI code review. Even when I publish a new package, I use an AI thing to do that; I don't go click a button, right? So it's really this exercise in just: how can we use AI for everything, and what does that mean for building and maintaining software?
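The "review it three times" idea can be sketched as a small loop where each pass re-reads the diff plus the notes from earlier passes, so later passes have more context. This is a rough illustration, not Falner's actual tooling; `ReviewFn` and `multiPassReview` are hypothetical names, and the real workflow calls a model rather than a stub.

```typescript
// A stand-in for a real model call: given a diff and prior review notes,
// produce a new review note. In practice this would invoke an LLM.
type ReviewFn = (diff: string, priorNotes: string[]) => Promise<string>;

// Run the reviewer `passes` times; each pass sees everything earlier
// passes found, which is where the extra context comes from.
async function multiPassReview(
  diff: string,
  reviewOnce: ReviewFn,
  passes = 3
): Promise<string[]> {
  const notes: string[] = [];
  for (let i = 0; i < passes; i++) {
    notes.push(await reviewOnce(diff, notes));
  }
  return notes;
}

// Toy reviewer that just reports how much context it was given.
const fakeReviewer: ReviewFn = async (diff, prior) =>
  `pass ${prior.length + 1}: reviewed ${diff.length} chars with ${prior.length} prior notes`;

multiPassReview("function add(a, b) { return a + b; }", fakeReviewer).then(
  (notes) => notes.forEach((n) => console.log(n))
);
```

The accumulation step is the point: a single stateless pass re-derives the same observations each time, while feeding prior notes back in lets later passes go deeper or catch what earlier ones missed.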
Makes sense. Next.js is the most popular React framework, as you said, and millions of developers use it, so there are good reasons to make it better and have this vinext perspective. Can you run us through the numbers in terms of the improvements we were able to achieve, like four times faster builds and 57% smaller bundles? Those savings: first, who do they serve and why are they relevant, and then, how did we do it? So, they're relevant because... actually, let me start at the beginning.
I don't think any of this was quite the goal of the project, right? I was literally just trying to change toolchains and see what happens. I didn't know it was going to be faster in any way. And I'm going to caveat these benchmarks: we spent a lot of time trying to make sure they were fair and accurate, but benchmarks are always going to be tricky to get right. So I'm sure we're going to see these change over time; I'm sure we're going to see both projects improve.
But the big thing I took away once I started working on this was that this is really about Vite, right? Vite is an excellent foundation to build on. Probably four or five days into this project is when I actually started doing benchmarks and said, "Let's see how this performs," and I was like, "Oh, wow." I didn't know it was going to be better, and I think that says a lot about Vite the project and all the work they've put into making it a good foundation for all these frameworks.
Makes sense. So those gains were not intended, but they definitely are helpful for those that are using it, right? Exactly. As for the actual numbers, the blog post covers them, and like I said, I assume we'll see the numbers change as we tweak things and make improvements on both sides. The biggest benefit right now is probably around the developer experience, right? It's just got a better local dev experience, in terms of build speed and things like that.
And I think that's a good starting point for why you'd use this: try it locally and see if you have a better time. I'm encouraged by some of the production stuff, but again, that's not the primary reason I'm doing this. At the end of the day, Next.js is still React under the hood; this is still React under the hood, still RSCs. I assume both projects will learn from each other and we'll get to a point where they'll probably be pretty similar, right? Because it's not like they're doing anything drastically different.
At the end of the day, they're doing something very similar. The other thing is the cost, right? $1,100 in tokens specifically; that is also relevant in these types of projects, right? Yeah, 100%. So that number comes from me using OpenCode to go back over all my OpenCode sessions and figure out what we spent. It's not perfect, but it's a pretty decent estimate, give or take maybe $100. I was definitely kind of shocked myself at how low it was.
We have pretty lax policies here in terms of what you can spend; we encourage people to use OpenCode for a lot of things, so I wasn't really worried about the cost. It wasn't something I was concerned about, and I kind of expected it to be like $10k or something like that. I was running a lot of sessions, and I was kind of expecting to have to send a message to our CTO and be like, "Hey, I spent a lot of money over the weekend."
"Sorry about that, but it was on a cool thing." So when the cost ended up being so low, I thought it was cool. I think the cost speaks more to the literal dollar cost of doing these kinds of projects, right? I said this in the blog post: if I can spend $1,000 and build an entirely new implementation of an API surface, you could spend $10 or $100 and build your own framework, or build something that's unique to you, or migrate to a framework that you think might fit you better, right?
That's how I would encourage people to think about this: don't necessarily think "I have to use vinext," right? Take a broader view and say, well, all this stuff is cheap now, so what's the right decision in terms of what technology we're using? And also the perspective of deploying to Workers with a single command; that easy part is also interesting. Yeah, we can talk a little about that and the plan there. I actually started this as a generic project to just run in Node.
That's actually how I started. I got down a little bit of a rabbit hole with that. It was working out, but it was proving to be a little bit difficult to make it work with both Node and Workers, and to keep parity between the two. Workers has great Node compatibility now, but there are certain things, like native modules, that don't work yet, things that are on our roadmap. It was one of those things where I said, okay, let's take a step back. I really wanted the demo to be cool, so I said, let's just focus entirely on Workers.
And so that's where I took the project, and that's what it does now: vinext deploy just deploys straight to Workers. We're going to change that. We've already got an open issue, and I'm soliciting feedback about what that's going to look like, because 97% of the work here is actually about making it work with Vite, not about making it work on Workers. So we're going to add some sort of pluggable layer where any provider can come in. Netlify has already PR'd a proof of concept.
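As a rough sketch of what a pluggable deployment layer of this kind could look like: the framework produces a build output, and each host registers a provider that knows how to ship it. All names here (`DeployProvider`, `registerProvider`, `deployWith`, the `workers` toy provider) are illustrative assumptions, not the actual vinext API, which is still being designed in the open issue mentioned above.

```typescript
// What a build hands to a provider (illustrative shape).
interface BuildOutput {
  staticDir: string;      // pre-built static assets
  serverEntry?: string;   // optional server bundle (e.g. a Worker or Node entry)
}

// Each host (Workers, Netlify, a Nitro-backed host, ...) implements this.
interface DeployProvider {
  name: string;
  deploy(output: BuildOutput): Promise<string>; // resolves to a deployed URL
}

const providers = new Map<string, DeployProvider>();

function registerProvider(p: DeployProvider): void {
  providers.set(p.name, p);
}

async function deployWith(name: string, output: BuildOutput): Promise<string> {
  const provider = providers.get(name);
  if (!provider) throw new Error(`unknown provider: ${name}`);
  return provider.deploy(output);
}

// A toy provider standing in for a real Workers deploy.
registerProvider({
  name: "workers",
  async deploy(output) {
    return `https://example.workers.dev (from ${output.staticDir})`;
  },
});
```

The design point is the split Falner describes: the Vite integration stays provider-agnostic, and only the small `deploy` step varies per host.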
Pooya, who's on the Nitro team, PR'd a Nitro plugin. Nitro is another, sort of lower-level framework that is a Vite plugin, and it basically allows us to deploy to almost any host. So we're going to work through this; we're going to figure out how to make it work everywhere. And to be clear, it does now: we have several people that have privately DM'd and said, "Oh yeah, it's running on my Node server, no problem." One of the things we could see was the first implementations, people trying it out. With that feedback, what surprised you the most, in terms of the feedback itself, but also people actually using this now that it's out?
Well, obviously there was a lot of response to the post. I knew it was going to be controversial when we did it, so it's no surprise that it's caused a little bit of controversy. I think people want to read into this as if there's some big, grand strategy here, and it really is just a week-old experiment where I said, let's see what happens. But the results ended up being really good, and the feedback we've gotten about people actually using it is pretty good.
We talked about this in the post: we have one large customer that's already using it on one of their beta sites. We've gotten tons of interesting DMs from people that are already trying it on their sites, and some other customer things I can't talk about, but I'm pretty encouraged that people are really trying this out, using it, and having some success. I will say this again: there are definitely bugs. There are definitely things that don't match Next right now, and we know that and we're going to go try to fix them.
But right now, if you have a relatively uncomplicated, pretty straightforward Next.js app, it does kind of work. And we have examples: we've been working with the National Design Studio, as you mentioned, which is trying to modernize every government interface. So there are already things in terms of implementation. But it's a very Cloudflare thing: iterative, making it better, improving, sharing with the world so people can even collaborate with us and actually help us make it better, right? And we've already seen that too.
We've seen a bunch of PRs from the community, people filing issues. I've lost track, but we've probably merged like 20 or 30 PRs over the last couple of days, so lots of people are finding bugs and stuff we're going to fix. In terms of the future and the ecosystem, do you see this pattern, framework cores and AI-assisted rebuilds, becoming more common across other ecosystems? That is a great question, and I think probably yes. I think we're going to see this for more projects.
We're going to see more things rebuilt using AI, or built differently. One of the things we've gotten is people coming in and saying, "Oh, can you tweak this Next behavior? We want it to work differently." So far I've been holding off on anything like that. I've said, hey, look, the goal here is parity, one-to-one; we don't want to break people's apps when they move over, we want things to work as they worked before. But my response to that is: if you want a framework to work differently, you can again just spend some tokens and make it yourself. So I think this will happen to other open source projects.
And even non-open-source projects too, right? Anything that has a good, well-specified surface area or test suite. I think this project coming from me and from Cloudflare is frankly just a coincidence, maybe because I had this problem in the back of my mind for the past couple of years and knew it was something I wanted to try to tackle. But if I can do this as an engineering manager in my spare time over a week, anybody could have done this, right? And there's no magic in the prompts, right?
I can't share them all, but they are not fancy. I do a lot of voice-to-text; they are me just yelling at the computer to make it work better, right? That's what it is. And so people are going to keep doing this kind of stuff. How do you do that voice-to-computer part? Super Whisper, I'll give them a shout-out there; Super Whisper is the app I use. I do a bunch of stuff with them. I use their local models; Parakeet is the model, a good voice-to-text local model by Nvidia, so it plugs into a bunch of things.
There are other apps out there, but Super Whisper is the one that I use; I'm a fan of it. To be honest, I also love to talk with it, explaining what I want, like the structure and things like that. It's crazy: speaking for many minutes, explaining what you want in detail. It's better, I think, than writing. Well, LLMs in general are very good at taking unstructured raw input and then structuring it. So I can sit there and talk for 10 or 15 minutes about a problem and just brain-dump my thoughts, and if you look at what's written down, it's never something you would write by hand, right?
It's almost unintelligible. But the LLM can take that and say, "Oh, okay, I understand what you mean, and I'll go do that." Yeah, it's quite amazing. Apparently the founder of OpenClaw also uses dictation a lot, speaking, which is interesting. I didn't know that. That's cool. Apparently, he uses his phone for that. What's next for vinext? Is there something we can share about what's coming? Although a week is so short, is there something we can share?
So, we're definitely going to keep investing in it. I said this in one of the GitHub issues, because we got a lot of questions about it, but I actually can speak with authority here. I'm not just an IC engineer who's going to have to beg for time to work on this; I am in charge, which is kind of a nice position to be in, where I can say, "Okay, we're going to carve out a plan to actually make sure we have people working on this." It's provided enough value for actual customers that I think we're going to keep working on it.
We're going to keep making sure that we fix bugs. We're going to keep working with some of the other providers to make sure we've got this nice interface where you can deploy to other places. But yeah, we want to keep going. I already asked a few questions that we got from social media; one was about the plan to keep this maintained long term, so apparently that's in the cards. In terms of implementing this in a production environment, do we have any advice for folks on that? I would say: be cautious.
I mean, I've said it's a week old, right? So we are already finding vulnerabilities; we've already had some reported and we've already fixed some. Like with any new project, you should exercise caution. All software has bugs, especially brand-new software that's a week old. So I am encouraging people to be cautious about what you're doing, and in what environment, especially what things your app is doing. If your app is entirely static, you have a different vulnerability surface area than if you're doing a lot of server-side actions or things like that.
But security is something we take very seriously, and we're going to continue fixing bugs as they get reported. Another one: how can we take this to the next level by having SSG by default, like Astro? Yes, I addressed that in the post. This is probably the biggest gap with Next, and I acknowledged this in the post: SSG doesn't work yet. Part of the reason is that Vite does not have SSG, static site generation, out of the box. It doesn't really have an opinion on that.
It's not part of what Vite is designed to do, and there are a couple of paths we have there. Number one, I think we will just implement static generation at some point. We've got to think through a little bit how to do that, talking with the Vite team, because there are other frameworks that do this, so we'll look at best practices and make sure we're lining up with what everybody else is doing. But we also talk in the post about this other idea we've come up with, under an experimental flag, called TPR.
That's traffic-aware pre-generation. It's basically an alternative take on static generation: using your live site traffic to inform which pages need to get built. To recap what I said in the post: if there are 100,000 pages, instead of building all of them, which may take a long time, let's build the 500 pages that we know get 95% of the traffic, right? Everything else gets generated on first hit and then cached for future requests. That was part of the release.
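The traffic-aware idea he describes can be sketched roughly like this. This is a minimal illustration, not the actual vinext implementation; the `PageTraffic` shape and the `selectPagesToPrebuild` and `serve` names are assumptions made for the example:

```typescript
// Hypothetical sketch of traffic-aware pre-generation (TPR):
// pre-build only the pages that cover most of the observed traffic,
// and generate everything else on first hit, then cache it.

interface PageTraffic {
  path: string;
  hits: number; // observed hits from an analytics source
}

// Pick the smallest set of pages that covers `coverage` of total traffic.
function selectPagesToPrebuild(traffic: PageTraffic[], coverage = 0.95): string[] {
  const total = traffic.reduce((sum, p) => sum + p.hits, 0);
  if (total === 0) return []; // no traffic data: nothing to pre-build
  const sorted = [...traffic].sort((a, b) => b.hits - a.hits);
  const selected: string[] = [];
  let covered = 0;
  for (const page of sorted) {
    if (covered / total >= coverage) break;
    selected.push(page.path);
    covered += page.hits;
  }
  return selected;
}

// Pages not pre-built are rendered on first request and cached.
const cache = new Map<string, string>();

async function serve(
  path: string,
  render: (p: string) => Promise<string>
): Promise<string> {
  const hit = cache.get(path);
  if (hit !== undefined) return hit; // pre-built or previously generated
  const html = await render(path);   // generate on first hit
  cache.set(path, html);             // cache for subsequent requests
  return html;
}
```

With traffic concentrated on a few pages, `selectPagesToPrebuild` returns just those paths, so the build stays small while the long tail is handled lazily by `serve`.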
So we have, like I said, a very early, very experimental version working with our zone analytics API, but I want to flesh that out and make it work with other providers too. It doesn't need to be a Cloudflare-specific thing; it can be generic, so that anywhere you deploy that has those analytics can make them available. Makes sense. The other questions were about Wasm support, and whether this will eventually replace OpenNext. Wasm support is something we'll have to figure out. Workers supports Wasm, so it's in our interest to do that as well.
We just have to figure out how to make that work with Vite, and make it generic so it works across all the different runtimes this thing could run on. As for OpenNext: we're still investing in OpenNext. Honestly, if you're worried about production and want battle-tested, hardened software, you should just go use OpenNext. We've invested a lot of resources in it over the past couple of years, and we did a huge release last year that really got it up to par with where it needed to be. So if you're looking to deploy a production site and you're worried about this, go with OpenNext; it's great, and we're going to continue to invest there too. Last but not least: are we actively trying to use vinext in our consumer-facing products?
That's a great question. Not at the moment. We just don't actually do a lot of Next.js; our stuff is mostly built on React itself, with some Astro sites and some TanStack sites as well. We don't have a lot of Next in our stack, so there's not really anything to replace yet. Before you go, one last question: where should people start if they want to try this, and what do you think the key takeaway of this blog post and project is?
For getting started, I'll tell you the best way is to use AI. If you go and run `vinext init` manually, well, the JavaScript ecosystem is so complicated, with so many packages, config options, and runtimes, that I know there are bugs and cases that don't work. If you start with AI, it will navigate those for you. And the best part is you can just tell the AI, "please file a bug on the vinext repo," and we'll go figure it out; usually the AI will provide a good reproduction. So I'd encourage people to start with AI: open up your project, open up your Next app, point it at the repo, and say "migrate to this." It will work through the issues, which helps us, and I think it's the best getting-started experience.
The big takeaway, which I covered earlier, is really around AI: AI is going to change how we build software and how we maintain software. I don't claim to know exactly how, but I think this is just another data point in the journey we're all going on this year. Many things to unpack this year. So, thank you. This was great. And that's a wrap.