Cloudflare’s Next.js Slop Fork

Syntax | 00:47:12 | Mar 18, 2026
Steve Faulkner, a Cloudflare director of engineering, discusses the motivations and process behind porting Next.js to Vite, open-sourcing the result, and his approach to leading large engineering efforts rather than writing code daily.

Cloudflare’s Steve Faulkner explains how he used AI to port Next.js to Vite as a “slop fork,” enabling Next.js apps to run on Cloudflare alongside the Open Next and V-Next tooling.

Summary

Syntax host Scott and guest Steve Faulkner discuss Cloudflare’s bold project to port Next.js to the Vite runtime, effectively creating a “slop fork.” Faulkner, a Cloudflare director of engineering, details the motivations, planning, and execution behind V-Next, including leveraging Open Next for battle-tested production code and using AI to accelerate planning, testing, and code generation. He walks through the weekend plan that evolved from initial test-suite analysis to porting tests and adapting the Next.js test suite for a Vite/V-Next environment. The conversation covers the hands-on use of AI coding agents (Claude Opus 4.5/4.6, Codex, and Agent Browser) to draft plans, port tests, run end-to-end validation, and manage context with tools like Context7 and Exa Search. Faulkner also candidly addresses code quality, guardrails, security triage, and the broader implications of AI-first frameworks for the industry, including potential migrations to alternative frameworks and the future of AI-assisted programming languages. He emphasizes the importance of DX within Cloudflare and teases forthcoming blog content about AI-driven security triage and vulnerability discovery. The exchange demonstrates both optimism and caution about AI’s potential to reshape how software is built and deployed. Finally, Faulkner shares personal “sick picks” and hints at the broader impact on industries like healthcare as AI accelerates progress across domains.

Key Takeaways

  • Faulkner ported Next.js to Vite by executing a test-driven plan: porting tests from Next’s 8,000+ test suite into a Vitest setup to validate compatibility.
  • Open Next remains the recommended production-ready path; Cloudflare’s V-Next is presented as a testbed and option for those who want Next.js on Cloudflare today.
  • AI-assisted workflows can dramatically accelerate software porting and testing, with plans, test suites, and reviews managed via markdown plan files and agent-driven iterations.
  • Agent capabilities include browser-based debugging, context-rich reviews, and overnight task execution, with Open Code storing session data for insights and reproducibility.
  • Security is a key focus: Vercel reportedly contributed vulnerability reports; Cloudflare plans to publish a blog on AI-enabled triage, fixes, and validation of security issues.
  • Code quality can be variable in AI-generated output; Faulkner notes the trade-off between rapid progress and human maintainability, with a plan to extract and lint generated code later.
  • DX is a priority at Cloudflare, as teams aim to design tools and surfaces that are agent-friendly, not just human-friendly, to support rapid AI-driven workflows.

Who Is This For?

Software engineers and web developers curious about AI-assisted porting and framework interoperability, especially those considering Next.js on alternative runtimes or evaluating AI-driven development workflows.

Notable Quotes

“Next.js has this great test suite. What if I just throw it at that problem?”
Faulkner describes the initial spark for porting Next.js by leveraging the existing test suite.
“If you like Next and you want to use Next, then this is a good option.”
Faulkner explains the practical choice for users considering migrating to V-Next.
“AI is an amplification factor... the human still needs to set the direction.”
Faulkner discusses responsible use of AI and the need for human direction.
“We port the tests, right? You can just literally go test by test and figure out which ones.”
On the strategy of porting Next.js tests to Vitest for compatibility.
“AI triaging security vulnerabilities... finding its own vulnerabilities.”
Faulkner teases the security blog and AI-driven vulnerability work at Cloudflare.

Questions This Video Answers

  • How can I port a full framework like Next.js to a new runtime using AI tooling?
  • What is Open Next and when should you use it instead of Cloudflare's V-Next?
  • Can AI-driven workflows realistically replace traditional porting efforts in production projects?
  • What security benefits come from AI-assisted vulnerability triage in open-source projects?
  • What does DX mean in an AI-first developer platform, and how is Cloudflare addressing it?
Tags: Cloudflare, Next.js, Vite, V-Next, Open Next, AI-assisted coding, Opus, Agent Browser, Context7 (Upstash), Exa Search, TypeScript LSP, MCP servers, DX in cloud platforms
Full Transcript
Welcome to Syntax. Today we have Steve Faulkner on, and he is the creator of V-Next. If you didn't hear about it, basically Cloudflare took Next.js and created what is lovingly called a slop fork, which is they ported the entire Next.js framework to Vite and then they posted it, and it was this big thing. We're not here so much to talk about the drama of all of this, and whether Next.js is hard to run on Cloudflare and whatnot. I think we're more interested in, how did you do this? What's your process for tackling something that is not like, oh cool, you made a little app where you can drag and drop stuff around, or you made a photo booth app? You literally took an existing, I'm not going to say spec, but an existing test suite, and replicated most of the software behind it. So, super interesting one. I'm super happy to have him on. Welcome, Steve. Thanks so much for coming on. Yeah, happy to be here. Like I said, I'm excited to tell the world more about this. I think it's a cool story. It's a cool story about AI and just the world we're living in right now and how rapidly it's changing. I think this is going to be a wild year. Maybe a wild five or ten years. Yeah, buckle up. So, give us a quick rundown of who you are and what you do. My name is Steve Faulkner. I am the director of engineering here for Workers. So I pretty much own the whole Workers org at Cloudflare. It includes other things like our agents product, containers, some of the stuff around the Wrangler CLI. I didn't list all the teams, apologies to whoever I missed, but it's roughly 80 people. I've been here for a couple years. And yeah, that's my role. I'm not writing code every day. That's not what I do. I've seen a lot of people say a lot of stuff about what I did and about the blog.
And I think the only real correction I want to make is a lot of people have been calling me a 100x engineer or something like that. I would use the title 100x engineering manager; I think that would be the correct term. Yeah. Given the state of AI, isn't that kind of where we're landing, though? The people with the superpowers are the 100x engineering managers. Yeah. And we'll jump right into that. I think AI is an amplification factor, right? If you know what you need to do, you can use AI to do it faster and better. But it works in both directions. I mean, I've seen it negatively amplify things when you don't know what you need to do, or you have the wrong direction. It just helps you go, but the human still needs to set the direction. I was curious about the term slop fork that has been thrown around, given the nature of AI writing it. How does that term hit you? How do you feel about the term slop fork being used here? I think it's a funny term. I've kind of embraced it. I mean, I've talked about slop forking other things now. Sort of after this came out, somebody jokingly said, "Oh, we should slop fork Kubernetes and rewrite it in Rust." Oh my god, that's an amazing idea. So, all the terms that are coming out, like vibe coding, like clanker is the latest one, things like that, I find all these things funny. I don't take any offense at it at all. When I saw slop fork, I almost dropped my phone. I was like, "This is the greatest term ever coined." And I was like, I'm going to start slop forking stuff just so I can say it. So, set it up for us before we get into how you did it. Let's just talk real quick about why you did it.
Like, why did you essentially fork Next.js to get it to run on Vite? Let me rewind time a little bit. Maybe about a year ago, we were trying to figure out how to better support Next.js on Cloudflare, right? That was just this hot topic internally. I think there's been a lot written about this, hopefully not too contentious, but Next.js has problems hosting, especially on other runtimes, other serverless providers. Some of the things are very bespoke to Node and to Vercel, and so generally you can host it a lot of places, but when you get to the edges of the problem, when certain features or certain things don't work quite as well, that's where you get into trouble. And so we were looking at paths forward here, and one of the paths we were talking about at the time was, well, what if we just wrote our own compiler for Next, right? What if we just took the Next API as an API surface and did it ourselves? The idea was almost a year old, probably even a little longer than that. And we had somebody actually go try to do this for a bit, just do a POC, and the answer was: this is going to be six months of work, five engineers, replicating so much. It's just going to take so much time. It just wasn't feasible, right? And so at that time we really doubled down on Open Next, which we're still very involved in as a project, and we're still moving Open Next forward. I keep telling people, if you want battle-tested production code that's not three weeks old, please use Open Next. It's great, right? Yeah.
So that's kind of where the idea probably started. And then, funny enough, we tried very briefly again. We had an intern who tried this because he thought it was a cool idea, and I said, oh, just do Pages Router and let's just see if we can get it working. Very talented intern, let's throw this at him. He couldn't get it done either. So we tried twice, and then I think everything really changed when, like everybody else, December, January, suddenly these models just hit this next level. I was doing a lot of manager stuff with these models, right? My job here is not to write code. So I'm using these models to summarize meeting notes and to track Jira tickets and to pull in summaries from channels, and at Cloudflare we have a lot of internal MCPs now that we're using. So I was kind of using AI for manager brain, and I was also using it for a little bit of code here and there, and I was like, you know what, these are really good now. I wondered if we could just do this with AI. Ralph Wiggum was kind of a thing; I did a couple Ralph Wiggum-style projects. I was like, "Next.js has this great test suite. What if I just throw it at that problem, right?" And that's what I did. I think I started on a Friday. It was a Friday afternoon, Friday evening. Did a plan. Spent a couple hours on that. That was the start of just going back and forth with Opus and saying, "Hey, here's how I think this should work. Here's the things I think we should cover."
And I think I woke up the next morning and was clicking around in the App Router demo, the App Router playground, and I was like, whoa, this kind of works, right? It wasn't working perfectly, but enough that you're like, there's something here. Wow. So let's talk about that planning stage as well. If someone were to tell me, all right, take Next.js and implement it in Vite, what is your process for tackling that? Do you say, oh, take the tests? Or do you ask it how it would approach it? And how much time do you spend building up this plan, and how much of that is that you actually understand how software engineering works? I think I was uniquely suited to this because of my background with Next.js, so I understood the problem. And we now use Vite elsewhere; by the way, we have our own Cloudflare plugin for Vite, and we use Vite in other places for other frameworks. So I understood the shape of what needed to happen. It was a couple-hour-long process for me to come up with that plan, and it was iterative. I'm working through it with Open Code; Open Code plus Opus is the default stack for me. I definitely spent a lot of time going back and forth. I'm also a big voice-to-text guy; I use SuperWhisper. I don't think there's some magic, really good prompting I did here. This is me just brain-dumping at the computer using my voice for ten minutes. It comes back and interprets what I said, kind of fills in the blanks, and then me saying, "Ah, don't do that, right? Don't do that. Don't do that." Those kinds of things. At some point it even suggested sort of ripping out React or something like that. And I was like, no, I don't want to do that. It's out of scope for this project.
So when you're doing this type of planning work, are you creating just a bunch of markdown files? Are there any processes or skills that you're finding work best for you in this? All markdown files. I think this is the best tool we have at the moment. I'll be honest, I think it's kind of a local maximum. My guess is that in two, three years, you're not going to be writing a bunch of markdown files to a repo. LLMs seem to be very good at it today, but as they learn and we learn new techniques, I think we'll figure out something that's a little more LLM-native, right? It seems strange to me that this is the best we've got, but right now it actually works pretty well. We had a plan markdown file, and then I had one specifically around testing too. Probably one of the parts I spent the most time on was guiding it on which tests to pull in from Next. The Next test suite is huge. I mean, it's 8,000-something tests, and a lot of it is testing either what Next itself emits, or testing things that maybe were just not day-one features I wanted. And so I did have to spend some time guiding it. One of the maybe unlocks I had was, instead of trying to get the Next test suite itself running, I just said, port the tests, right? You can literally go test by test and figure out which ones. And then it used a tracking document to track each one of those tests along the way. And by porting the tests, you mean moving them to Vitest, or actually implementing the code behind each of these tests? Yeah, like moving them to Vitest, right? Well, both, right? Moving them to my own Vitest setup. So the test setup from day one was Vitest and Playwright.
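The test-by-test tracking document described above can be sketched in code. This is a hypothetical illustration only (the file format, names, and statuses are assumptions, not the actual V-Next tracker): a markdown checklist where each ported test is marked ported, failing, or skipped, plus a summarizer for progress.

```typescript
// Hypothetical sketch: parse a markdown tracking document like the one the
// agent maintained while porting tests, and summarize progress.
// The line format below is an assumption, not the real tracker's format.

type TestStatus = "ported" | "failing" | "skipped";

interface TrackedTest {
  name: string;
  status: TestStatus;
}

// Lines look like: "- [x] app-dir/navigation (ported)"
function parseTracker(markdown: string): TrackedTest[] {
  const tests: TrackedTest[] = [];
  for (const line of markdown.split("\n")) {
    const m = line.match(/^- \[(x| )\] (\S+) \((ported|failing|skipped)\)/);
    if (m) tests.push({ name: m[2], status: m[3] as TestStatus });
  }
  return tests;
}

function summarize(tests: TrackedTest[]): Record<TestStatus, number> {
  const counts: Record<TestStatus, number> = { ported: 0, failing: 0, skipped: 0 };
  for (const t of tests) counts[t.status]++;
  return counts;
}
```

The point of a plain-markdown tracker is that both the human and the agent can read and update it between sessions.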
Using those together, and then you just let it rip overnight? Or how much of it was, it came back after 20 minutes and you had to type more into it? I actually asked Open Code about this. I was curious because I had the same question. And the people, or the app? No, the app, right? The app. Okay. Sorry, not the people. I did a little internal session, similar to this, for Cloudflare folks about how I did this. And one of the things I did to prepare for that is I told Open Code to go look at all its session data for the last week, analyze it, and figure out what I did when. And it had some really interesting findings. It said my peak token usage was at 3:00 a.m., and I am not awake at 3:00 a.m., right? So I definitely did a lot of setting it up with tasks to do overnight. I wouldn't say I was full Ralph Wiggum with a bash loop and stuff like that. This was more just giving it a document that said, okay, hammer out these ten things, and then just keep going. In my experience it's pretty good at that. I mean, sometimes it gets stuck, but pretty good. And then it said my habits were very barbell-shaped. There were either really short sessions that were like two, three, four minutes long, or really long sessions that were one to two hours. And if I think back to that weekend, that kind of matches my recollection of what I was doing. Either these little short "just go fix this thing, this thing doesn't work" sessions, or going off into the deep end to explore something super deep. I've got two young kids at home, so I wasn't spending all weekend on this. I literally remember going to the playground, getting home, running to my computer like, let me just kick the LLM for a second, and then I'll go back and do kids stuff.
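The session analysis described here (peak token usage at 3 a.m., barbell-shaped session lengths) can be sketched as a small program. This is illustrative only: the session record shape is an assumption, and Open Code's actual stored schema may differ.

```typescript
// Hypothetical sketch of the kind of analysis the agent ran over its own
// session data: find the peak token-usage hour, and bucket session lengths
// to reveal a "barbell" distribution (mostly very short or very long).
// The record shape below is an assumption.

interface Session {
  startHour: number; // 0-23, local time
  minutes: number;   // session length in minutes
  tokens: number;    // tokens used in the session
}

function peakUsageHour(sessions: Session[]): number {
  const byHour: number[] = new Array(24).fill(0);
  for (const s of sessions) byHour[s.startHour] += s.tokens;
  return byHour.indexOf(Math.max(...byHour));
}

// "Barbell": sessions cluster at very short (<5 min) or very long (>60 min).
function durationBuckets(sessions: Session[]) {
  let short = 0, middle = 0, long = 0;
  for (const s of sessions) {
    if (s.minutes < 5) short++;
    else if (s.minutes > 60) long++;
    else middle++;
  }
  return { short, middle, long };
}
```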
Right. How are you tracking that usage? This is all the Open Code data. Open Code has all its own session data stored. I think they use SQLite, and they store all this information, so I just told it to go look in the SQLite database and figure it out. So there was no formal process. Like you said, for the loop you were just pointing the agent at it. Which agent, or which model, were you using for that? Opus 4.5, and 4.6 actually came out when I was working on the project. So about halfway through, I switched it all to 4.6. 99% of the code was written by Opus 4.6 or 4.5. Near the end stage, I started doing more reviews, where there were times I was trying to get the code reviewed a little bit better, and then I would sometimes throw Codex at it just to get another opinion. Yeah. How do you find the difference there? Or have you noticed much? That's a good question. I know people online will say, oh, Opus writes the code and Codex reviews the code. I did that for a while, and I've actually kind of backed off it. I just haven't noticed that switching the model provides that much difference. It's almost just as good to have the model review itself. And I also find that reviewing the review is helpful, right? Sometimes I'll just put it in a review loop: review the code, fix the problems, then review yourself again, fix the problems. And it'll do two, three iterations of that before it doesn't find anything. And while you're doing all this, what's your actual Open Code setup? Do you have plugins, skills, agents, files, MCP servers hooked up? I often think about these guys that just spend all day tuning the knobs on their setup. That's me. Knob. Yeah. I just started getting into Pi, and that's constant knob turning. So yeah. Yes. So, very minimal.
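The review loop described above ("review the code, fix the problems, then review yourself again") can be sketched as a simple bounded loop. This is a hypothetical illustration: `review` and `fix` stand in for real agent calls, which would be asynchronous in practice.

```typescript
// Hypothetical sketch of the self-review loop: ask a reviewer for issues,
// apply fixes, and repeat until the review comes back clean or an
// iteration cap is hit. The callbacks stand in for real agent calls.

interface ReviewResult {
  issues: string[];
}

function reviewLoop(
  review: (code: string) => ReviewResult,
  fix: (code: string, issues: string[]) => string,
  code: string,
  maxIterations = 3,
): { code: string; iterations: number } {
  let iterations = 0;
  while (iterations < maxIterations) {
    const result = review(code);
    if (result.issues.length === 0) break; // clean review: stop looping
    code = fix(code, result.issues);
    iterations++;
  }
  return { code, iterations };
}
```

The iteration cap matters: as noted in the conversation, a reviewer asked repeatedly will eventually invent things to change, so the loop should stop after a clean pass or a fixed budget.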
I use the desktop app mostly. I've kind of gone through the full iteration: I am a desktop app kind of guy, not a terminal UI kind of guy. That's just my background, and I use VS Code. I got deep into the terminal UI for a while because it was nice and good, and then the desktop app got better, so I switched back. So almost all of this was the desktop app. As far as MCP servers and special skills and stuff like that? Nope, don't really use any of that. We have, as I said, our internal MCPs, but I didn't really use even those for this; I had them turned off. No special agents. We do now have a specific V-Next agent that we use for some of our reviews on the repo. We have found that getting an agent with a bunch of context is helpful. The AGENTS.md file for this repo was generated by the agent itself when we started, and along the way I would tell it, hey, please go update AGENTS.md, make sure it's got everything it needs. I will add, there are two MCP servers that I have had a little bit of luck with, better results than not using them. One is Context7; I think it's by the Upstash folks. It has a bunch of stuff indexed around open-source libraries and things like that. And then also Exa, so Exa Search. I have found that provides a generally better search experience for the LLM. So I pretty much have both of those on when I'm working on this project. I have those on now by default. It's not a massive bump, but my total vibe take is it's 20% better just being able to have those two. And while the LLM is testing this stuff, is that all just Vitest stuff, or at any point did it start opening up the browser and clicking links and doing things like that? Definitely. So I can talk about that.
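For reference, wiring Context7 and Exa Search in as MCP servers might look roughly like the config fragment below. This is an illustrative sketch only: the exact config schema, server names, and endpoint URLs are assumptions and should be checked against the Open Code documentation and each MCP provider's docs.

```json
{
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp"
    },
    "exa": {
      "type": "remote",
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}
```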
I talked about this in the blog post, but coincidentally, Agent Browser by Vercel, which is sort of a wrapper around Playwright, is a nice CLI tool that lets you do everything. Agent Browser: very good. I used that a lot. At one point I think I installed that skill; I should mention that. I don't really have a lot of skills either, but the Agent Browser skill is very good. Agent Browser itself, very good. And I would say, here's the production App Router playground, here's something I'm trying to do; here is the V-Next App Router playground, the thing I'm trying to go do. And I would just give it instructions to go replicate and figure out what's going on. Definitely did a lot of debugging with that. And I was honestly a little blown away by some of these things. I remember one time I said, you know, the scroll is janky, right? Which is like, yeah, is the LLM going to figure that out? And it did. It was like, "Oh, I see it now, right?" I was a little blown away. I was like, "Oh, okay." Along the lines of Agent Browser and Opus: I have an issue with Opus specifically, always totally failing with screenshots from Agent Browser. It comes back with a "this screenshot is too large for Opus," and then I have to start a completely new session because it pollutes and kills that session. This has happened to me a couple of times. Okay. I was going to say, is this something that is only happening to me? But seeing as you use these same tools, I figured it was worth asking. If I have a long-running process and I have Agent Browser, it can kill that process and ruin a long-running thing for me. Yeah, just a huge bummer. I've definitely hit this myself, and it really corrupts, at least in Open Code for me, the whole session, so I have to start over. And I will say that these sessions sometimes get really valuable.
There's been times where I will stop and say, "Wow, what I'm doing here is actually like I really want to save this." And I will say to it like save like a compaction to a markdown file of this session so that I can kind of come back to it later because certainly like I've built up in the context like enough interesting stuff that I'm like, "Wow, there's there's like good good things going on here." Yeah. And how closely are you monitoring that context? Are you sending things to sub agents or anything like that or is it just Not really. I I I definitely had I'm not going to say it's perfect. I'm going to had some days where, you know, you hit context compaction and then you're like, ah no, it's gonna going to just like, you know, it's going to start off on a weird foot and and that happens. I would say this is where I've noticed open code seems to have improved a lot in the last few weeks even where I used to have more problems around this and then now even just like in the last week, I don't really have much problems around context comparing where it just sort of does it right and and whatever internal prompts they're using for this seem to be pretty good. Another thing I'll I'll forget that I I forgot to mention is I I started this file with the LLM called discoveries.mmd at some point and and even it may have even suggested this to me. I don't even know if it's like my idea. It was all just the list of things that were like oh like this version of React Webpack, you know, DOM renderer doesn't work with the CJS module that doesn't load with the V stuff, like all that kind of stuff. And it just kept a log in there. So when it hit these bugs or hit or hit these like ecosystem issues, it's to sort of like figure out what to do and move on um rather than like just hitting it over and over again. I love that that move as well. So I'm I'm working on a tenstack start app right now with the Cloudflare V plugin, all that stuff. 
And it keeps hitting this loop where it realizes that it imported some server-side code into a client module, and that triggers a warning because it's trying to tree-shake it out, and then it tries to make it into dynamic imports, which is a nightmare, and it's just this constant spinning. I hit it three or four times and I was like, no, we solved this already, and you obviously don't know what to do here, because you're just going in this crazy loop, where it goes, "But wait, they already said this. Oh, but looking at it again, I did X, Y, and Z." That drives me nuts when it just spins forever like that. And at that point, I was like, all right, I'm starting to throw this into my AGENTS.md: this is how you solve this specific problem. Or even just posting it in a gist or something like that and linking out to it. Kind of piggybacking off that, one of the insights I've had is that agents are really good at taking feedback, right? Humans aren't, right? If somebody writes a document and I say, "That's a bad document, you need to write it again," the next iteration is not going to be light-years better. But agents, when you tell them to do something differently and provide them new context, they really get better. And this is where I think a lot of people are still coming along on this journey, right? I have developer friends who don't really even want to touch this stuff. And what happens is, their first interaction is they look at it and say, "Oh, well, it didn't do the thing right, so therefore it cannot do the thing right." And I'm like, "Oh, you just got to hold on a little bit longer, because if you just correct it, then on the fourth or fifth loop, it'll actually start doing it right.
It'll stop making that mistake." That's where I think it really trips people up is like wrapping their head around like how good these agents are at interpreting course correction. Yeah, I do find that too. Like people people just send a a prompt, it outputs something they didn't expect and say this tool is garbage. Uh rather than Yeah. Well, as as programmers, that's how our brains are trained, right? Like you got you got humans on the one end which are can take feedback but are sort of very squishy and bad at dealing with it, right? And then on the other end you have programs which are you know like if you write a program and it does something and it fails you expect you run it again it will fail right and then LLM sit in this weird squishy in between non-determinism thing and this is where the non-determinism is a feature not a benefit right like it it's like yeah it output garbage terraform that took down your production database but like you can tell it not to do that again and it probably won't probably probably maybe Yeah, probably maybe. Well, I'll say like I I'll add on like I I am not like this AI maximalist. I think I have had my skeptical moments. I have also been coming on this journey the last few months that everybody else has. I I'm really excited about where we're going, but like I'm also simultaneously terrified, right? Like I I see the mistakes people are making. I see the mistakes that I'm making with it. And I I know there's real gaps here. And so, you know, there's people saying all these terrible things are going to happen because of AI. And I'm like, you're Yeah, you're probably right. I don't know. like I know that's but it's also amazing. It's both at the same time. Well, let's talk about like code quality a little bit on that as well. Like the code that it was kicking out, was it of good quality? And did you ever hit these like areas that it would just go down and and start doing awful stuff, right? 
Like, the other day I spent an hour writing up the most beautiful doc. It was a Friday night, like 5:00. Hit the button. I put it on the Cursor long-running agents, which can go for ten hours, you know, and I came back on Monday and it had written direct SQL queries. It just sidestepped all of my ORM, and I was just looking at it like, man, I spent three hours just undoing a lot of the bad stuff that it did. And I was like, I thought I did a good enough job planning for this. Did you ever hit that, where the code quality or the direction it went was just not it? For sure, I have definitely hit that. And I would say every time I tried to look at the code, I definitely was not super thrilled. That's not code I necessarily would have written directly. It's verbose. And part of me on this project was having to let go of that a little bit, right? And say, okay, what's the goal here? The goal is to get compatibility, and the goal is to get the tests passing and have confidence that it's going to work, right? I framed this as an experiment. It's still an experiment. This is an experiment in how far you can push AI. And it's uncomfortable for me, but I've had to let go a little bit of, yeah, the code is maybe not great, but does that really matter? And if it does matter later, we'll fix it later. I'll give you a very specific example. One of the rabbit holes it went down was a lot of code generation. This is true today: if you look at the V-Next codebase, what is actually contained in the client bundle is generated from template strings, right? Like interpolated template strings, and it really rubs me the wrong way. It's not type-checked. It's not linted. There are tests for the end-to-end behavior, but there are not unit tests for little bits and bobs.
And so this is something I've actually been working with all the other contributors on is that we're trying to extract this out. So we're saying, hey, no, LLM, you kind of did a bad thing here. You went way too down this path of like doing something that's I think hard to maintain for humans and for LLMs, right? cuz now it's like, you know, massive interpolated strings of code is also hard for LLMs. And so we're trying to like extract that out and get those into like proper like, you know, linted type checked code that is then like, you know, pulled in in the right places. And if you want to see all of the errors in your application, you'll want to check out Sentry at centry.io/sax. You don't want a production application out there that well, you have no visibility into in case something is blowing up and you might not even know it. So head on over to centry.iosax. Again, we've been using this tool for a long time and it totally rules. All right, so I've been working with Pi lately and writing extensions and orchestration loops and all that kind of stuff and like the the thought process is like every single feature it takes different passes at it. Okay, here's a, you know, linting pass. Here's a whatever pass. Here's a a styl pass. Here's a UI pass, a UX, all that accessibility stuff. And each of those is like it's its own separate thing. It feels right now like a lot of extra work when like I could probably just say, "Oh, hey, this wasn't good. Try try it again with these." Uh, so I'm still trying to figure out like what that optimal workflow is for like preventing those types of things because it ultimately you do continually go back and forth and back and forth with it until it gives you something that's decent. This is where like the I think guardrails are really important for AI. I mean the test suites are really important and you know linting formatting those kinds of things but then they sort of can also box it in a little bit, right? 
So you almost want to have these small, nice tasks that are very easily contained and have those guardrails, but every once in a while you want to give the LLM a free pass and say, "Okay, what would you do if you could just redo this entirely, or do something differently?" Yeah, I've found that to be helpful, too. I regularly run audits: audit this, tell me what you would change about it. And sometimes it's just always looking for things to change, even if there aren't things it should change. But a lot of times there are valuable insights in there, things you didn't think about. What about security with this type of stuff? So I don't know if this is true, you can maybe tell me, but the Vercel folks filed against the Cloudflare bug bounty program. I'm sure they took their whole backlog of security vulnerabilities and ran it against this. Can you talk about security a little bit and tell us, was that actually true? Did they pay out Vercel? Some of this is still in process, right? These things take a minute for us to go fix everything, triage reports, and do all that, but we got a lot of reports from outside of Vercel, too. So yes, they did send us reports. I'm very appreciative of that, honestly. Some people were framing it as sort of a gotcha, and I'm like, it's a week-old project. Of course there are security vulnerabilities, right? Like, yeah, this is great. Please send us more. I want to know what's a problem so I can get them into the LLM. Yeah, shovel it right in. Again, this whole story is AI and AI and more AI. So we have AI triaging security vulnerabilities. We have AI finding its own security vulnerabilities.
I can't share too much about this yet, but we're going to have a blog post about it. We actually built our own AI agent and harness to find security vulnerabilities in this project, because we saw what was coming in and said, well, we could probably just find these ourselves — these are clearly AI-written, so we should do this ourselves. And then we did, and then we took that thing and started pointing it at other projects, and it started finding other vulnerabilities, and we were like, "Oh, this is really good." So we're going to have more coming out about that in the future. Oh, cool. We're using this as a learning opportunity, right? How do we use AI? And what we've learned so far around security is that it's actually pretty good at some of those things, too. So AI is triaging security vulnerabilities. AI is fixing them. AI is validating them. AI is responding to reports and going back and forth with people on what's going on. We're trying to use all these tools everywhere. To be very specific: yes, we're very aware there are some security problems. You can go look at the repo and see the ones we fixed. I think we're on maybe our 26th or 27th release since we launched this thing about two weeks ago. So we're in there fixing stuff. We're maintaining it. We're keeping going. I'm trying to map out right now how we get to the point where we'll remove the experimental label and call it stable, or call it beta. Yeah, just to give people a little more confidence that this thing is ready for production. The end game is that we will hopefully release this thing and it will be usable for people that want to run it, and we have customers doing that now, right?
I mean, we've told customers where it has gaps and where the risks are. And there are a lot of people out there that use Next in a very limited way, right? They sort of make a static site with Next, and maybe they've got a few random API endpoints, or maybe one page is dynamically generated but the rest of their site is all static. So we have people using that kind of narrow feature set of Next, and they're having a pretty good time with this so far. Do you think it makes sense to literally port the whole — obviously it does, this is a dumb question to ask you — but does it make sense to port the framework, versus just telling it to move the website to another framework? That is a great question, and I think I said something about this in the blog post. I've said these exact words to customers: if you like Next and you want to use Next, then this is a good option. If you don't like Next, if that's your problem, then you can spend $10 in tokens and be on a different framework, right? And we've seen customers do that. I mean, there are so many options out there: Astro, TanStack, Solid, right? If you want to be on another framework, the cost of switching is so low thanks to AI, especially if you've got any kind of good end-to-end test suite. So I have said this to people. And like I said, I didn't do this because I love Next.js — that's not why I did it. I did this because I was interested in AI, and I've definitely said to people: if you don't want to use Next.js, spend the tokens, go try another framework. Totally. Man, I've been trying to move a fairly large project off of Express for a long time, and Express is one of those things where it just doesn't bug me enough to want to move off of it, you know?
And then finally I just let this thing rip for like an hour, and it was totally moved over to Hono, and I was like, man, that is amazing. The portability — I think, Scott, we should probably do an entire show on that, because the barrier to entry of migrating is so low right now. Totally. Yeah. I do wonder — I talked about this in the post too — how it's going to change the incentives around what we do and how we write software, and where abstractions matter and where they don't. I don't claim to know the answer. I just know that the line is probably going to change. Yeah, it's all very fascinating, and who knows, with the way that frameworks are changing. CJ and I over here have been talking quite a bit about frameworks. Right now, frameworks are designed to be authored by us, but what types of changes would we make in a next generation of frameworks that were designed specifically to be authored by AI? What types of optimizations would be needed for the AI to be able to write framework code better? It's an interesting thought experiment. We don't have anything to show for it, but we're both hacking on some stuff, just seeing if we can make anything interesting. I think we'll see some AI-first frameworks. I think we'll probably see an AI-first programming language, right? Somebody will come up with something they think AI can just be better at. Now, all of these will suffer from the bootstrapping problem of, well, it's not in the training data, so it won't get recommended, or the model won't understand how to use it. But I think that's just going to get solved. That's actually my personal take: we cannot live in a universe where we're always two years behind on what the LLMs are telling us to do.
And so I think there will be some part of the technology, or the training, or the post-training, or something, that will get solved here. Somebody will come up with some technique to say: we're going to inject all the relevant data for this new language, because it's super critical that AI can use it. What would an AI programming language look like? What do you think? I talked about guardrails, right? So probably something that's typed. Honestly, if I look at the programming languages now, Rust is verbose but has all of these guardrails — famously, Rust people will say, well, if it compiles, it probably works. So Rust kind of seems close to what you want, but you almost want something that's verbose and simple, which is more like Go, right? Go is sort of designed so there's only one way, or a couple of ways, to do things. So, I don't know, something that has more the guardrails of Rust but the simplicity of Go. That's what I would say. That's my guess. And do you think we would have a rigid syntax, similar to those languages, or do you think it would be more free-flowing English? I'm guessing rigid, because I think it needs those guardrails, right? Like a limited syntax. I'll say all this, too: I'm a TypeScript guy. That's my background. I was a frontend dev historically, before I kept getting put in charge of infrastructure teams. So I like writing TypeScript, and if TypeScript doesn't make it in the AI world, I'll be a little sad about it. That's good. Actually, that was one more question I had about your opencode setup. Did you have the TypeScript LSP hooked up in opencode? I think it's on by default — or was it just grep and strings? No, it's on by default now. So I think it was just working in the background.
I don't know if it made much of a difference at all. I will say that it gets caught up on the LSP sometimes, where the LSP is out of date. I don't know if you ever see this in opencode, where it'll say, "Oh, there's an LSP error," and then it will catch itself: "Well, I know the type check passed, so the LSP is just not caught up or something," right? In the background. I notice it hiccups on that sometimes. I don't have any data that says the LSP stuff is helpful, but I don't have any data, for me, that says it's not helpful, right? It was running and I got the project done. So maybe. Yeah, I'm curious to see, when the TypeScript Go stuff becomes a bit more mainstream and they can type check your entire project in like 20 milliseconds, whether that will help it at all. We do use that here. So we use TypeScript Go, Oxlint, and oxfmt — all three of those — and then Vitest. I prioritized all the latest and greatest speedy tools because I knew that would be important, right? You want that fast feedback loop. Yeah, I agree. If it's a three-second tsc compile every single time it does something to check that there are no errors, it really slows you down. Yeah, I do feel like Oxlint and oxfmt, those types of tools — once they hit more people, I think people are going to be really impressed with how great they are. I mean, that's all from VoidZero, from the Vite team. They are cooking over there, right? They are. Vite 8, which uses Rolldown instead of Rollup, is not the default yet. It's still Vite 7 — so V-Next uses Vite 7 by default, and you can opt into Vite 8. I believe they're going to have something kind of beta soonish.
I don't want to commit to any timelines, because they haven't told me, but as soon as that's ready to even be in a kind of public beta, we're going to swap the default, because it improves build times by like another 2x or something. It's pretty drastic. Smokes. Yeah. Crazy. Yeah, the Vite folks are cooking lately. They definitely are. It feels like the Cloudflare folks are cooking, too. And I've noticed a lot of Vercel folks are even being scooped up by Cloudflare. I know Vercel often has the reputation for having really good DX, and Cloudflare has always had the reputation of having great tools. Is Cloudflare taking that DX part of it more seriously now? Is that the message? 100%. So when I joined Cloudflare, that was a big part of the goal, right? How do we get the right people in who have this good taste for DX, and how can we just let them go? And, you know, it's a big company. It takes time. People often ask, what's your job at Cloudflare? And I use the analogy: I'm not responsible for fighting fires. I decide where fire stations get built, right? At my level, that's the kind of time horizon I'm on. So for me to understand whether I'm being successful, I have to look back two years and ask: did I make the right decision there that resulted in a team being formed that created the right conditions for this to get better? I do think that is where things are headed at Cloudflare. We've invested a ton in hiring the right people, and we have a whole new design engineering team that's responsible for our dashboard, and they are just burning through pages trying to make them better. So we've got work to do, but I think it's way better than it was a couple of years ago. I hope other people think that's true.
Every time I load up Cloudflare, something looks a little bit better here and there. So I think it's noticeable already, and that effort, I think, will continue to pay dividends for y'all, because a rub with Cloudflare before was even navigating it. And we have a bunch of projects that I can't necessarily share in the works, but we've got a bunch of stuff in the pipe that I think is going to make this even better. I think we're tackling it from multiple layers. We have people whose job is: go fix the dashboard today, make it good, make it faster, make it better. And then we have some people now focused on, well, what does the entire future of the platform look like? How do we reinvent the surface area of our platform so that it's not only good for humans but good for agents, right? How do you build tooling that is agent-first? Because it's different, right? An agent needs good DX, but agentic DX is different from human DX. It doesn't need to look nice, but it still needs to be intelligible to an agent, so it understands, well, I need to click on this thing, or I need to follow this path to figure out what to do. Cool. Anything else we didn't touch on before we wrap this up? I'll give you just the super high level. Like I said before, I'm simultaneously thrilled and terrified about all this stuff and where it's going to go. It feels like we get to be alive at what feels like a big technological revolution, right? This is like the printing press or the steam engine or something. The closest thing we have in our lifetimes is mobile, which I think is still an order of magnitude smaller.
Maybe the internet, but even the internet took a while — we had to get the pipes to everybody's houses. It takes a while to roll out, whereas you get the latest and greatest of this on your phone within like 24 hours, the same as anybody else, when these things get released. So it's not only the magnitude of the revolution that's happening, but also the compression of how fast it's happening. Some days I feel so ahead of the curve with this stuff, and then other days I'll look at what other people are doing and I'm like, "Wow, I'm barely scratching the surface." Yeah. Do you have any ideas of what industries are next for this? Obviously it's doing very well in coding, and it's doing very well in white-collar office jobs — accounting, Excel, email — and it's being abused in marketing and whatnot. What's next? What do you see? Probably the next big thing, I imagine — I have to think that medicine is going to get pretty upended by this. I think it'll be very similar to what's happening in coding, where there are a lot of things that can be done by an LLM, but then you need an actual doctor with experience to steer it along the way, right? It can't just do everything on its own. I have family members in medicine, and I see what they do and the direction they're headed. It's already there. Some of these hospitals are already using LLMs, and have LLM-based voice transcription and things like that. Some of the tech is there and in use. I think there's a lot of regulation in that industry, so it'll take a while to permeate fully, but I think medicine is something that could get very, very upended, just in terms of how fast you can understand what's going on with a patient, right? Mhm.
So I'm kind of excited. Obviously there are horrifying things that could happen, with regard to the wrong people having that data, or not being able to cover your payments, things like that. But also, I have an Apple Watch on that has all this data about how I slept, my heart rate, all of this stuff, and then you take a blood test and compare that against 20,000 other people, 100,000 other people, all these different cases. There's got to be something there, right? And I'm sure people are working on that. I think this is where we as technologists, the people inventing this stuff, have to try our best to steer things in the right direction, because the technology gets used for good and bad. I brought up the example of the printing press before. The printing press changed the world, and the printing press started wars and revolutions, right? A lot of bad stuff happened, too. And I think both will be true in this case, and we've just got to try to keep our thumb a little bit on the scale of: use this for the right things, to help people and make things better. It's going to be hard, but that's what I'm trying to do. So now's the part of the show where we get into sick picks, which is just things that you're really enjoying in life, because all this stuff is scary and crazy, but we've got to have things that we like. So, do you have something in life right now that you're just enjoying? Could be a product, a show, a podcast, anything. Sick picks. Here's the thing: I'm exploring becoming a watch guy. Oh, this is a relatively new thing for me, in the last few months. So, yeah, I actually inherited a watch from my grandfather that he got for retirement. So it's got some sentimental value for me.
Now, it is gold and flashy, so it's not really my style. And that sort of begged the question: well, what is your style, if you were going to be a watch guy? And this is where I landed. This is the IWC Mark XX, made by — well, IWC. It's just one of their standard pilot watches. It's not the flashiest, most out-there watch, but I've been very happy with it so far. Yeah. How do you find out what your watch-guy style is? How do you develop that? It probably took me like three months to decide to buy this particular watch. So I went and tried on a bunch, just kind of felt what felt right. Honestly, I didn't know either until I went and tried some on. And then there are a few watches where you leave the store and you go, I feel like I forgot something there, right? When I left the store I was like, what am I missing? And then I'm like, oh wait, no, I just really liked how that watch felt. And this is sort of one of them. But you can tell I've got a couple of others in mind that maybe will come through the pipe. This is my first proper watch purchase, so yeah, that's been a fun thing. I also just — so I'm actually a mechanical engineer. That's what I went to school for, before I bumbled into software engineering sort of by accident, which, you know, we didn't cover my background, but there's a whole story there. I worked on the Ford Mustang for a little while at Ford. I did some stuff around mechanical engineering. So I really like the mechanical side of watches. The fact that it's on your wrist and there are little spinny things in there that just run all day — it's just very, very cool to me. Oh, that's cool. That's actually something I was building the other day. I use this 3D library called Manifold.
It's like a C library, but there are TypeScript bindings for it. You can just write JavaScript, and essentially it will create 3D shapes that are watertight, right? And I do a lot of 3D printing and whatnot. Probably about a year ago, I made this website called bracket.engineer, which is for making brackets that hold up power supplies and whatnot. And then I came back to it a couple of months ago, once the models got much better, and I was like, I wonder if it can do things like planetary gears and do all the math behind that. And I thought that was kind of interesting. It made all the actual mechanical engineers mad, because they're like, we have calculations for this already. But it worked pretty well. It took a little bit to get there, but I was very surprised. I've seen a little bit of that — the LLMs kind of intruding into the traditional engineering categories, like mechanical engineering — and it's been fun to see, because there's a lot going on there, right? I love being a software engineer. I love writing software. But when you look at what it's like to engineer for the real world, it's entirely different, right? It's very different in terms of what you have to think about and all the trade-offs you have to make, around materials, say. You might do these things to make something 0.5% better, and that's a huge success, right? You have so many more constraints, whereas software just feels boundless: do whatever you want on the web page, put more RAM in it and we'll be fine. It's part of why I got into software engineering, but I definitely miss some of the mechanical stuff. Totally.
I haven't jumped into 3D printing yet, because I know when I do it will be bad for me and I will get way too deep into it. Oh man. Oh yeah. Oh, you will. Yeah, that was my life. I didn't get one for probably five years because I was like, I'm going to be obsessed. And I've had it for a year and I am obsessed. We're going to be moving next year. We have this lofty condo thing now; we're going to move into a big proper house, and I will have space for something like that — my dedicated 3D printer off in the corner workshop. And I'm going to do it then. Oh yeah. And then it won't ever stop printing. Yeah, it'll just be printing 24/7. Yeah. So, is there anything you would like to plug before we get out of here? Obviously V-Next. Please go try it. See if it works. The best way to try it is to just point your LLM at it. Just say, "Migrate me to V-Next," and point it at the repo. There's a skill, and it will just figure it out. I would love more feedback about what works and what doesn't. I know it's not perfect, so we're trying to fix things as fast as they can come in. I think we merged like 150 PRs in the first few weeks. We're in there. We've got people working on fixing stuff. So that's shameless plug one. Shameless plug two is Cloudflare. I work at Cloudflare — I don't know, maybe I haven't mentioned that enough. You should go use Cloudflare and Cloudflare Workers. It's great. And there are so many other things beyond just "host your website" that you can do on Cloudflare. We're really building out an entire developer platform. It's the cloud of the future, it's going to be amazing, and you should come play with all the stuff. Sick. Big Cloudflare fan here. Hold on, I've got a Cloudflare blanket behind me. Nice. Big Cloudflare fan. Cool. I appreciate all your time.
Thanks so much for coming on, and we'll have to catch up another time. Yeah, of course. I really enjoyed it, so I hope it turns out really good. All right. Peace.
