Next.js Vendor Lock-in No More

Syntax | 01:04:18 | Apr 13, 2026
22 chapters
Tim and Jimmy discuss Next.js adapters, hosting Next.js on different providers and runtimes, caching, and the broader architecture behind the adapters platform, with insights into performance and internal decisions.

Next.js vendor lock-in is breaking apart: Syntax sits down with Tim and Jimmy to unpack adapters, multi-runtime support, and Turbopack's role in run-anywhere Next.js.

Summary

Tim and Jimmy from the Next.js team sit down with Wes and Scott on Syntax to unveil the adapters platform and what it means for hosting Next.js on Cloudflare, Netlify, or other runtimes like Bun or Node. They explain how adapters create a stable API layer to simplify cross-provider hosting, and how the ecosystem working group with partners like Google, Netlify, and Cloudflare shapes adoption. The conversation digs into caching, partial pre-rendering, and the role of Turbopack as an incremental, parallelizable bundler designed to scale with ever-larger apps. They address performance questions, such as why Next.js hasn't moved to Vite, and outline how runtime choices will evolve, including dev-time parity with runtimes like workerd. Tim and Jimmy also discuss the testing approach for adapters, minimum viable features, and how to extend Next.js's reach beyond cloud hosts to Kubernetes and edge runtimes. The chat finishes with practical guidance on getting started with adapters, a refresher on the multi-layer caching story, and hints about future runtime parity. Overall, the episode blends roadmap clarity with hands-on details on how adapters, caching, and Turbopack power Next.js on any host while keeping development fast and predictable.

Key Takeaways

  • The adapters API provides a stable, testable contract that makes it easier to host Next.js on providers like Cloudflare and Netlify, and on runtimes such as Bun or Node.
  • The adapters ecosystem is governed by a working group with partners (Google, Netlify, Cloudflare) that shares ideas and feedback and ensures cross-platform support.
  • Turbopack is a standalone, incremental compiler designed for large Next.js apps, delivering fast subsequent builds and a strong dev experience; it lives in the Next.js repo and is being prepared for a public API.
  • Caching in Next.js is multi-layered (client, runtime, build/CDN), with use cache and cache components enabling selective, progressive caching across pages, components, and data fetching, plus potential offline capabilities.
  • Next.js dev/production parity goals include running runtimes like workerd or Bun in dev, with a plan to unify APIs and maintain consistent debugging and performance experiences across environments.
  • There's no hard minimum feature set for an adapter to be blessed; instead, adapters run a test suite and publish results indicating which features are supported, allowing teams to opt in based on their needs.
  • The team acknowledges Vite as a potential future path, but emphasizes that Turbopack's architecture and incremental compilation were built to handle Next.js's scale and edge cases today, with ongoing exploration of cross-runtime parity.
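In the episode, the team describes the adapters API as "a couple of functions that have a typed contract." As a rough mental model only (all names and shapes below are hypothetical, not the actual Next.js adapters API), a contract like that might look something like this:

```typescript
// Hypothetical sketch of an adapter-style contract. Illustrative only;
// these are NOT the real Next.js adapters API names or types.

// What a framework might hand an adapter after a build.
interface BuildOutput {
  routes: { path: string; kind: "static" | "dynamic" }[];
  staticAssets: string[];
}

// What a hosting provider would implement.
interface Adapter {
  name: string;
  // Decide where each route should be served from.
  placeRoute(route: BuildOutput["routes"][number]): "cdn" | "server";
  // Report which framework features this adapter supports.
  supportedFeatures(): string[];
}

// A toy "single server" adapter: everything is served by one origin server.
const singleServerAdapter: Adapter = {
  name: "single-server",
  placeRoute: () => "server",
  supportedFeatures: () => ["ssr", "api-routes", "isr"],
};

// A toy "edge + CDN" adapter: static routes go to the CDN, the rest to a server.
const edgeAdapter: Adapter = {
  name: "edge",
  placeRoute: (route) => (route.kind === "static" ? "cdn" : "server"),
  supportedFeatures: () => ["ssr", "api-routes", "isr", "ppr"],
};

// The framework side: map every route through the adapter.
function plan(output: BuildOutput, adapter: Adapter): Record<string, string> {
  const placement: Record<string, string> = {};
  for (const route of output.routes) {
    placement[route.path] = adapter.placeRoute(route);
  }
  return placement;
}
```

The point of such a contract is the one made in the episode: the platform implements a small typed surface instead of reverse engineering build output.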

Who Is This For?

Frontend teams evaluating Next.js at scale, platform engineers integrating with Cloudflare/Netlify, and developers curious about running Next.js on alternative runtimes or Kubernetes. This episode is essential for those wanting clarity on adapters, TurboPack, and multi-runtime strategy.

Notable Quotes

"We launched the adapters API as stable, which has been a long time coming."
Tim and Jimmy describe the stability milestone for the adapters API.
"Next.js by itself runs super well ... but when it comes to scaling it, it's always been slightly more difficult, because Next.js features require synchronization across multiple nodes."
Explains why adapters help with scaling Next.js features like caching.
"We created that API ... so you can basically tap into this and have a stable contract wherever you want to host Next.js."
Adapters provide a portable hosting contract across providers.
"Maybe one day we’ll run Next.js on Vite, but there are architectural and infrastructure complexities to solve first."
Addressing the Vite question with a future-oriented stance.
"Turbopack is part of the Next.js repository and is a fully standalone tool you can run today."
Status update on TurboPack’s availability and standalone nature.

Questions This Video Answers

  • How do Next.js adapters simplify hosting on Cloudflare or Netlify?
  • What is Turbopack, and how does incremental compilation help large Next.js apps?
  • Can Next.js runtimes (Node, Bun, workerd) behave the same way in dev as in production?
  • What are use cache and cache components in Next.js, and how do they affect multi-node caching?
  • Will Next.js ever move to Vite, and what are the trade-offs involved?
Next.js, Adapters API, Turbopack, Caching, Use Cache / Cache Components, Edge runtimes, Cloudflare, Netlify, Bun, Vite vs Next.js
Full Transcript
Welcome to Syntax. Today we have Tim and Jimmy on. Tim is of course the lead dev of Next.js and Jimmy is the head of Next.js, and they're here to talk to us today about some cool stuff they're rolling out around their adapters platform, which is going to allow you to host Next.js on different providers like Cloudflare and Netlify, as well as different runtimes. Meaning if you want to use Node or Bun or whatever, you can do that. The conversation takes a little bit to get warmed up, but trust me, it's going to be worth it, because these are some super smart guys and I really enjoyed this conversation. We talked about caching, Next.js performance, Turbopack and all the Turbopack internals and the whole process of building that. We of course asked them "why don't you just use Vite?" like everybody else is asking, and they had a really interesting answer to that as well, which is a "maybe, maybe one day." And infrastructure in general: Next.js of course is code, but if you want it to run like it does on Vercel, it requires a whole bunch of infrastructure, so we go through that as well. Let's get into it. So the other day you guys launched the adapters platform. Tell us about what that is. Yeah, so we launched the adapters API as stable, which has been a long time coming, as you said. We've been working on it for close to a year at this point. The idea is that Next.js by itself runs super well with next start, you know, on a simple VPS you can tap into all the capacities of Next.js. But when it comes to scaling it, it's always been slightly more difficult, just because Next.js features like caching and ISR require synchronization across multiple nodes if you need it.
Or with your CDN, if you require it. So we introduced a layer that you can tap into and have a somewhat stable contract, so that you can easily adopt it wherever you want to host. The adapters API is useful for yourself if you're just hosting across five simple servers, where you might not even want a CDN or anything, but it's also useful if you're someone like Cloudflare or Netlify and you're hosting Next.js in a specific way with serverless lambdas, and you potentially want to host your middleware in a different way. So yeah, we created that API to answer the needs of the community. We also created an ecosystem working group, which is basically composed of us and partners across Google, Netlify, and Cloudflare. We used to have AWS in the group, but they dropped out at some point. The idea is that, since a lot of the Next.js features require a certain level of infrastructure work to make them scale, we created the group so we can share ideas in advance, collect their feedback, and basically make sure they don't dislike supporting Next.js on their own platforms. Yeah. So you mentioned that you've been working on this for quite a while now, so I take it this wasn't a knee-jerk reaction to the slop fork, right? Yeah, not really. No.
If you look back, we published the RFC a year ago, and the timeline is fuzzy to me at this point too because it's been so long, but we had started engaging with the Netlify and Cloudflare and OpenNext guys even a few months before that. So not really, no. I think one of the main points there is that the main reason Cloudflare did this is that they wanted to avoid some sort of vendor lock-in, and they wanted to make it somewhat easier to support some version of Next.js on Cloudflare. And so we did want to show the community that we also had been cooking on this. It's a great timeline, though, because internally this matched up with our own testing. The reason it wasn't stable yet is just that we wanted all of our own websites to be working on this adapters API first before we published it. Yeah. So we've been dogfooding it for quite some time. Outside of this release, right, this release is marking it stable, but we introduced the beta at Next.js Conf last year in October, and the reason we hadn't marked it stable yet is that we had been dogfooding it on our own applications.
So we built the adapter for Vercel, and we had been rolling it out to all of Vercel's own applications, similar to how we do it for Next.js features in general, and that took some time because we have some pretty large applications and we wanted to iron out all the issues we had. For reference, the adapter we had before had already been battle-tested across eight years or so, since we introduced Now v2, when we introduced serverless functions into Vercel; it was still the same adapter from back then. So for all the possible edge cases you could find in Vercel applications of Next.js, this new adapter was built from the ground up on the adapters API. We're really dogfooding this in the same way that every other platform is, and because of that we had to do some extra checks to make sure we could roll it out to everyone. So yeah, that's why the timeline was spread out over a year instead of just "hey, we have this new thing," because in practice, if you think about it, the adapters API inside of Next.js itself is not that complicated. It's a couple of functions that have a typed contract. I think the real point is to have an integration layer that you don't have to reverse engineer on any side of things, not on Vercel, not on any other platform. Because, to be clear, you could already host Next.js on Netlify or Cloudflare, or through OpenNext on AWS, things like that. But this new API makes it a lot easier for those platform teams to integrate with Next.js directly, instead of having to find which files need to be included in the serverless function, or things like that. Hack around it.
I was trying OpenNext on Cloudflare for quite a while and I was digging into it. It was during the early days of the OpenNext Cloudflare adapter, so I was doing bug reports and whatnot, and a lot of it was just regexes, taking the bundled output and basically decompiling it, and I was like, "Oh, this really is just a bunch of hacks." Yeah. Finding the routes, everything that Next.js knows, basically all this metadata around what is a route, how does it match, that kind of thing. That's now all part of the adapters API. Yeah. And let's talk about the different pieces to this, because I think one of the reasons it was historically hard to put Next.js on other platforms is that it kind of blurs the line: this is a Node app, but there's also all of this infrastructure that needs to come along with it. Like you said, the special sauce of why it was so easy on Vercel was that you guys have had your own adapter platform and have made it line up with all the different pieces, right? You've got CDN and databases and caching and all that type of stuff. If somebody's just looking at this right now, what are the different pieces of infrastructure that are needed for a Next.js app that uses every single feature under the sun? The thing is, something I want to try to dispel a little bit is that a lot of what we're talking about, the sort of complexity that comes with some of the Next.js features, is mostly additive compared to the other frameworks on their own.
If you're just looking for a framework that server-side renders, or just serves API routes and serves some cached data, that stuff just works out of the box, right? You don't need crazy infra or anything; you just need a server that can return a response. Where we go a bit further is when it comes to basically everything caching related. And we made updates to our docs to clarify this. But those things are mostly, I'd say, performance optimizations. Like partial pre-rendering, which is one of the features that allows you to serve a static page and then compose it with some dynamic content. I love those. I want those features. Those are great features. Yeah. And it works with next start, right? You're going to hit your server, and without rendering it's going to return you the static HTML, and then it's going to do the dynamic render. And when it comes to Vercel, the way we optimize for this is that the HTML shell can be served from the CDN instead, closer to the user, and then we'll invoke the server in the background. This is purely taking the primitives that we've put in the framework and then optimizing them. So I think that answers your question a little bit. To get the Vercel-ish experience, what I would recommend to anyone adopting our adapters, to get those features, is: you want your server close to your database, right?
If you're doing any data fetching, that's the thing that makes sense in general. Then you want your static content as close to the user as possible, and you need something that handles the connection between the two. On Cloudflare, it could be a worker, right? That worker would execute at the edge, return the CDN content, the static shell, and then invoke another worker that is closer to the origin, basically. Yeah. And even for this case, if you're talking about PPR for example, the static shell, even if you serve that from a server that does dynamic rendering, if you serve it from next start, it's still a win, because you still have this immediate response. The only difference is that the response is not immediately coming from a CDN per se; it's instead served from the server that you have hosted. And it's the same if you go to any other platform; it works the same way. But what you can do, additively to that, is make it so that your CDN supports this new primitive, PPR for example, and then it can serve that shell from the CDN itself while stitching in the dynamic render as well. So maybe the PPR part is an optimization. I will say, something that I would consider kind of important, and that we still want to make improvements on, is the cache synchronization story. Let's say you have five servers running Next.js; you might start them with next start on all your instances. The pretty powerful thing about Next.js caching is that we allow you to invalidate it whenever you want, and we do it in the background.
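The "static shell first, dynamic render stitched in later" flow described above can be sketched as a toy model (this is an illustration of the idea only, not how Next.js implements PPR internally):

```typescript
// Toy model of the partial pre-rendering idea: a pre-built static shell is
// returned immediately, and the dynamic part is streamed in afterwards.
// Illustrative only; not Next.js internals.

type Chunk = string;

// Pretend this was produced at build time and lives on a CDN.
const staticShell: Chunk = "<html><body><h1>Store</h1><!--hole-->";

// Pretend this hits a database close to the origin server.
async function renderDynamicPart(): Promise<Chunk> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated fetch
  return "<p>cart: 3 items</p></body></html>";
}

// Serve: yield the shell first (fast, cacheable), then the slower dynamic part.
async function* serve(): AsyncGenerator<Chunk> {
  yield staticShell;
  yield await renderDynamicPart();
}

// Collect the streamed chunks in order, as a client would receive them.
async function collect(): Promise<Chunk[]> {
  const chunks: Chunk[] = [];
  for await (const chunk of serve()) chunks.push(chunk);
  return chunks;
}
```

The win the guests describe holds in both setups: even when the shell comes from your own next start server rather than a CDN, the first chunk arrives before any data fetching completes.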
And of course, whenever you call these revalidation methods, one thing that is important for your provider, or yourself, is that the invalidation is shared across all nodes. That's one of the things we're making better with the adapters API, but we still have some work to do, I'd say. Okay. And when a page or component is cached, where is that typically stored? Is that thrown into a key-value store, into a file, does it matter? So we have multiple layers of caching, as you might see from the memes about our docs, right? I don't know how familiar you are with use cache, or cache components. Oh yeah, and I've said many times I want that in every single thing. One of the reasons I'm such a huge React Server Components fan is the ability to fetch and cache and just do everything in the component, rather than everybody else's route layer, except for Svelte; Svelte is along with the asynchronous stuff now. Yeah. And the idea behind use cache, and why we chose an almost generic kind of name for cache components, is that we want you to think about caching at every possible layer. So it starts purely at the client. If you use use cache, you can add a property that says cache it on the browser, so that across my session the page is stored there, and I can configure how long it should be kept around. Then it can also happen at runtime, when you server-side render your page. Now this cache can also live on your server across requests, and you can add a time that you want to keep it around.
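The "invalidation must be shared across all nodes" problem Jimmy raises can be made concrete with a toy sketch: each node has its own cache, so a revalidation call has to be broadcast, or stale entries survive on the nodes it missed. (The class and method names here are made up for illustration; they are not Next.js APIs.)

```typescript
// Toy sketch of cross-node cache invalidation. Illustrative only.

class NodeCache {
  private store = new Map<string, { value: string; tags: string[] }>();

  set(key: string, value: string, tags: string[] = []): void {
    this.store.set(key, { value, tags });
  }

  get(key: string): string | undefined {
    return this.store.get(key)?.value;
  }

  // Drop every entry carrying the given tag (the revalidate-by-tag idea).
  invalidateTag(tag: string): void {
    for (const [key, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(key);
    }
  }
}

// A tiny "cluster": revalidation is fanned out so no node keeps stale data.
class Cluster {
  constructor(public nodes: NodeCache[]) {}

  revalidateTag(tag: string): void {
    for (const node of this.nodes) node.invalidateTag(tag);
  }
}
```

If the broadcast step is missing (call invalidateTag on only one node), the other nodes keep serving the old entry, which is exactly the synchronization gap the adapters API gives providers a hook to solve.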
We can also have this at build time, and for static pages the cached page can live in your CDN too. So there are multiple layers to it, but we think it basically allows you to build the most composable kind of app. Some pages, for example, it's fine if they live on the CDN and are never really revalidated, and if you do revalidate, you're doing it occasionally, from a CMS or something. But those things don't matter if you have a page in an app like ChatGPT, where you would rather want a cache on the client instead. One thing I'm really excited about, for example, is that we want to extend the use cache layer so that we can also have an offline kind of layer. Right now we can cache it on clients across sessions, but the idea is: what if we also kept it across reloads, seamlessly? Then you would be able to reload your page while offline and still have the data, out of the box, without having to build a synchronization layer. Oh, that'd be cool. I like that a lot. So a lot of the APIs that we're adding, especially with cache components, there are many things that we haven't really talked about, like this offline thing that Jimmy is talking about. Right now they're building the foundation of what this caching layer will look like. The other thing is, while Jimmy said that we want you to think about caching in every layer, the default is actually to be as dynamic as possible. So if you use dynamic APIs, it will basically feel like how you were using getServerSideProps before, things like that.
If you use Date or things like that, things become dynamic still. Basically, the biggest change from Next.js 13 and the first iteration of all the caching APIs is that we want you to build the app first and then start optimizing, whereas previously you were already in this optimized state by default, and that was kind of confusing. So we really want you to see these APIs as additive: we want you to add use cache, instead of it being cached by default, or you having to reason about that kind of thing. And when you want to add use cache, like Jimmy said, you can do it everywhere, in the sense that you can do it at the page level, at the individual function level, or even at components. That's something that was previously not even possible; in the first version it was either fully static or fully dynamic. It started with the static thing, and then everything was cached, but now you have to opt into it, and it feels nicer that way when you're building the app. And say somebody builds out a Next.js app, a bunch of pages, a bunch of components, does a whole bunch of stuff, and they say, "Okay, I feel like the app is in a really good spot now. I either want to move to using fewer resources, or make the app a little bit faster. Now I'm going to start looking into caching." What would be your first couple of steps for attacking the lowest-hanging fruit? The nice thing about our model is that we still try to nudge you toward making the choice as you develop. I think different frameworks have this other, maybe less opinionated, way of developing.
You can start with just client-side fetching, and at some point you do need to invest in optimizing your app, pulling it potentially into the server with a loader or something. Well, we don't cache for you by default anymore. A big part of the programming model now, with cache components, is that you'll write your code as intended, but then Next.js will mention, with a nice little warning, "hey, either you need to add a suspense boundary here so that you keep this thing dynamic, or you cache it now with use cache." And then it's a simple drop-in directive, and you're sort of set up from the get-go, and it allows us to do it progressively. So yeah, I think that's the nice part, and that's why I really like Next.js by itself today. If you have a slow Next.js app, the first thing I'd suggest is: you're probably not on cache components yet, just because it's fairly recent; we haven't put as much effort as we could have into the documentation and experience yet. But that's the first thing, right? That's what opts you into this "hey, I'm going to warn you about this little thing," and you take it from there. Read the use cases; no one likes having to learn about a new directive here and there, and for most people, performance is also somewhat fine. But our idea is just: whenever you're ready, we'll be there. Waiting for us. That's great, Jimmy. I like that. I'm curious about your whole adapter pattern.
Is there a minimum subset of features that adapters need to support before being considered a blessed adapter by the Next.js team? Yeah, it's a great question. Not really. The idea is that I don't want to gatekeep on whatever someone's platform is supporting or not. The thing with PPR is that it does require a little bit of wiring if you want to do it at scale, and there's no right and wrong here; the users should make their own choice. But one thing we do require, as part of the adapters working group, if they want to be blessed, is that they should run our tests. We've done some work to make this available to everyone. In theory, you just plug in your API keys, wire up the pipeline for how to deploy it with a few scripts here and there, and then you can use the test suite in the same way Vercel has used it, and basically assess how good your platform is at supporting Next.js. And even then, it's not very opinionated about whether or not you're serving PPR from a CDN. We don't really check for performance expectations here, just that you load the page and it's working as intended. It could be taking 20 seconds; I don't really care, as long as the tests pass. And the other thing is, we're also not specifically gatekeeping on an amount of tests passing. So what we did is we give you the test suite.
The only thing we ask, so there are two sets of adapters: there are adapters that anyone could build, and there are the ones that are part of the docs, that we explicitly mention, and for those we also publish the results from the test suite. So this doesn't mean that an adapter for some platform has to pass every single test, but the test suite is split up by feature: we keep track of which test relates to which feature, and then we publish "this adapter supports all features except for PPR, for example," or "it doesn't have some other feature," and then you can choose for yourself whether you want to use the adapter or not. Okay, yeah. So that's basically how we set it up. Everyone in the working group is already working toward this: they either published an adapter already, or they're working on one because they want to roll it out gradually as well. And that set of adapters will be part of the set that we call out in the docs. And if there are other platforms that want to do the same, we have a clear set of steps: this is what you have to do to set up the test suite, and how you can get your adapter onto this list as well. And this is not just for different hosting providers; it's also for different runtimes, right?
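The feature-by-feature reporting described above, where each test maps to a feature and a feature only counts as supported if all of its tests pass, can be sketched in a few lines (the feature names and shapes here are made up for illustration, not the actual suite's format):

```typescript
// Toy sketch of aggregating per-test results into a per-feature
// support matrix, like the published adapter results described in
// the episode. Illustrative only.

interface TestResult {
  name: string;
  feature: string; // e.g. "ppr", "isr", "middleware" (hypothetical tags)
  passed: boolean;
}

// A feature counts as supported only if every test tagged with it passes.
function supportMatrix(results: TestResult[]): Record<string, boolean> {
  const matrix: Record<string, boolean> = {};
  for (const result of results) {
    matrix[result.feature] = (matrix[result.feature] ?? true) && result.passed;
  }
  return matrix;
}
```

This mirrors the stated policy: an adapter with one failing PPR test would be published as "supports everything except PPR" rather than being rejected outright.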
Like, there's already a Bun adapter. I think it uses Bun's internal server, and it has a Bun SQLite database for caching, things like that. So this is not only to run on Netlify or whatever; if you literally have a Bun server sitting in your closet, you can use this. Yeah, and that's one case. The one that is really interesting to me as well is the Kubernetes one that Google is working on: basically, that would allow you to scale the Next.js server as-is using Kubernetes. Yeah, we built this Bun adapter together with Jarred from Bun, also to prove out that you can use the adapters API for something different than a specific platform per se. Yeah. I'm curious to see what people will build using this, because you can do more: I saw someone try to use the adapters API, right after it came out, to create single-file executables that boot up the server, things like that. Part of why I was pretty excited about pushing for the adapters API is that we want to give you almost full control over everything Next.js can do and everything it allows you to do. At first that is targeted toward platforms, but one of the things, for example, that we're planning on working on next with the Cloudflare folks is going deeper into the runtime story. Right now it works mostly for build. But when you're working with something like workerd, for example, there's a bunch of APIs that aren't necessarily available compared to Node, and so we want to address that. I think Vite has done something similar with their Environment API.
We want to do something similar: allow you to have, in dev, as close an experience as possible to what you're going to get when you deploy. So that's planned. Because that was my next question: one of the hugest pains with Cloudflare is that if you're running Node locally and then you deploy the thing to Cloudflare, there are always these painful little things that come up, and debugging them sucks, because you have to sit there, commit, deploy, wait for the thing to build, you know. And they've solved a lot of that by allowing you to run workerd locally, which means you can hit those issues right away before you deploy. But you're saying that eventually this will come here too, where you can use different runtimes in dev as well. Yeah, that's the idea, right? Because as a user, I also hate this. What we're driving really hard with the Next.js experience is that next dev and what you push to Vercel behave exactly the same; the debugging capacities are the same. I just want to push that forward.
And in that, too, what we're selling with Next.js itself is not Node. I'm not married to the idea of Node, I'm not married to workerd or Bun as a runtime itself. What we're selling is APIs, just nicely done APIs. A lot of the stuff we do, obviously, is very targeted toward Node, just because that's the main default, but if it came to a point where Bun becomes the natural platform on which to build JavaScript apps, then I would want Next.js to support that as well. Does that mean you have to have all of Turbopack running on these other runtimes as well? I think not. It depends; there might be some limitations there, in the sense that if you run on some exotic OS that just doesn't support our build of Turbopack, then we do still need some things. But Turbopack is mostly written in Rust. Oh yeah. So it doesn't matter, right? Yeah. So Turbopack itself, you can see it as a separate binary that we're running, and we're compiling it for basically every mainstream OS. So if you're using a convoluted OS, that might be a problem, but it works for every majorly used operating system. So we're only talking operating system, right?
On the runtime side — for example, you can use Bun for next dev today, because you replace Node with Bun, and Bun knows how to run this: it has the Node.js APIs to interface with N-API, so we can just run everything as is. If that needed something slightly different, we could still run it — that's not an issue. But yeah, Next.js itself is mostly TypeScript, some JavaScript, and the bundler is basically 100% Rust at this point. To be clear, though, we do rely on some pretty load-bearing APIs, like AsyncLocalStorage, for example. And those things are either supported pretty well or they might not be supported fully sometimes. In those cases we need to do some work to abstract away some of our usage, so that a platform could then plug in their equivalent. Okay, I see. Yeah. And as we work with Cloudflare on supporting cache components better, I think we'll obviously run into those kinds of issues. Yeah. Oh yeah — there was one weird thing with Cloudflare: their implementation of AsyncLocalStorage — I'm just looking it up — you can't use enterWith, or something weird where you can't bootstrap something with an existing object, which is kind of a bummer. But that's also not really a Next.js problem — that's more of a Cloudflare problem if they don't support those APIs. Yeah. The idea is that we make as much as possible work on our end.
But in the end, it's still up to the platform to choose to invest in making it work — they control the runtime, right? So they can also do some work in advance here. And it's similar across frameworks: if other frameworks use AsyncLocalStorage, you run into the same problem. One question everybody has — and I'm sure you're sick of hearing this right now, so feel free to punch me — everybody's like, why doesn't Next.js just run on Vite? Do you have any response to that? Because I know making Turbopack was a huge feat, and I know it's edge cases all the way down, and it's very frustrating — I'm sure people think you can just boop boop boop and swap it in. What are your opinions on that, or your responses to that? If you're not watching: I just sat up straighter. Yeah, so we started building it — I can't remember which year; it's been some time. The reason we started building it — for context, we were using webpack before, like most other frameworks at the time, and webpack has some limitations. The main limitation is that it's single-threaded, and it's mostly JavaScript, not TypeScript. It has a very extensive plugin API, but really the biggest problem we ran into is that applications kept scaling. Think about it this way: if Wes or Scott starts building a new application today, they're probably pulling in 10x more code than they were 10 years ago when we started Next.js. So over time you start hitting this problem of: there's so much code to compile now, everything gets slower and slower. There were some takes on how to solve that problem. And then Next.js itself also made this problem 2x worse by having two compilers.
So you're not running one webpack, you're running two webpacks — one webpack for your browser code and one webpack for your server code, for server-side rendering and all of that. We had to do that because otherwise you can't run code that's bundled for the browser on the server — your `typeof window` would be replaced, you couldn't do as many optimizations, all of that. So what Next.js was doing was orchestrating multiple instances of webpack and trying to make sure they compile at the same time, finish at the same time, run in parallel as much as possible. But in practice you can't really do that, because one might depend on the other. And then when you have React Server Components, it becomes a scheduling problem: you have 'use client', and a client component could import a 'use server' file — a Server Action. The first file that you find is a server component, so that's server code, running in the server compiler. But the moment it finishes compiling, it has collected all the 'use client' files, and it injects them into the second webpack instance: hey, now we're going to compile these. And when that second instance runs and finds all the 'use server' files, it now has to run the first compiler again to compile all this new code it found. So basically you're going server, client, server, client — and it can effectively recurse deeper than that, but we blocked you from doing that. In practice you should be able to do that, though: you should be able to import more client components that have more Server Actions, and so forth.
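The server/client ping-pong described above can be sketched as a tiny scheduler over a toy module graph. Everything here — the module table, the directive handling, the function names — is invented for illustration; it just shows why each directive boundary forces another pass in the *other* compiler:

```javascript
// Toy module graph: a server page imports a 'use client' component,
// which imports a 'use server' action.
const modules = {
  'page.js':   { directive: null,         imports: ['button.js'] },
  'button.js': { directive: 'use client', imports: ['action.js'] },
  'action.js': { directive: 'use server', imports: [] },
};

function compile(entry) {
  const passes = [];
  let queue = [[entry, 'server']];
  while (queue.length) {
    const next = [];
    for (const [name, env] of queue) {
      passes.push(`${env}:${name}`); // record which compiler handled the file
      for (const dep of modules[name].imports) {
        const d = modules[dep].directive;
        // Crossing a directive boundary hands the module to the other compiler,
        // which is the orchestration/recursion problem in the transcript.
        if (d === 'use client' && env === 'server') next.push([dep, 'client']);
        else if (d === 'use server' && env === 'client') next.push([dep, 'server']);
        else next.push([dep, env]);
      }
    }
    queue = next;
  }
  return passes;
}
```

With a real bundler-per-environment setup, each of those alternating passes is a whole compiler invocation waiting on the previous one — which is why two (or four) instances can't simply run in parallel.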
The other thing we want is to have server components that you can import and render — using 'use server', for example — in the future. And this would basically make that a compilation problem, right? So we had this kind of slowness problem, and it only got worse as we added more compilation work to it, and this orchestration, basically. So — really quick — that was the problem we had, besides applications getting a lot larger and that also being a problem. You said "really quick," and you gave a whole explanation of the server component architecture, in a way. Yeah. So that's the problem, that's where we came from, right? I just want to clear that up, because it might sound very simple to say, oh yeah, just drop the bundler and do something else. So at that point we were like, okay, let's have a look at where things are in the overall ecosystem — what is everyone else doing? And what we found is that effectively every other framework was doing the same thing we were doing. Regardless of what bundler you were using — even more modern ones like Vite or Rollup — if you're building a framework that has multiple layers, like server and client, they were all doing this orchestration thing: you had two Vite instances instead of two webpack instances, or things like that. I have my personal Waku website, which is React Server Components — I think it has four Vite instances. Yeah, it works the same way. Everyone did the same thing, because that makes sense, right?
If you're not building the bundler yourself, you hit this thing where you're like, oh yeah, I need to run multiple of these, because I have multiple output targets. When I explained this I was only talking about server and browser, but you could also think about the edge runtime, or all these other output targets — Bun, maybe — so it basically becomes a problem where you're running many instances. So yeah, we looked at it at the time, which is multiple years ago now. I will give credit to Parcel: Parcel had this API where they could do server, client, server, client — that was a thing they had already added. The other issue is that your existing Next.js application is completely geared toward webpack: you have all the webpack-specific APIs. This means __dirname works a certain way, new URL works a certain way, import.meta works a certain way — all of these things, all the resolvers for all the files, work the same way. You've probably hit this in the past: Vite, for example, introduced "we're going to use ES modules for everything," so when you're migrating an app that was built using webpack over to Vite, you'd have the same problem where you had to rewrite some of your code, or things like that. Yeah. There are more reasons for this, but the problem we had was: there are multiple compilers, and there's compatibility with the previous Next.js apps — and we had millions at the time already, right?
So this isn't a small-scale problem. If we shipped a lot of breaking changes, millions of people would have to do work to actually get to the latest version, to use these improvements. There were no solutions out there at the time. So effectively the question was: do we take existing bundlers and try to cram these new features into them to make it work this way, and work with those other bundlers — or do we take everything we've learned from all those bundlers? Because a lot of Vite, say, is built on knowledge of mistakes that were made in webpack, right? So can we take all of these learnings from building all these different platforms — and a lot of people came into Vercel from Svelte and all of that — take all of this knowledge and build something new that takes learnings from everything that came before, similar to how most new open-source projects are built? So that's where we started, and that's how we started building Turbopack. And pretty quickly we had a fully functional bundler: you take inputs, you create outputs, it runs transforms, all of this. That was actually not the hard part — building a bundler. We know how to do this, and we hired all the right people to work on it. We had that pretty quickly. A create-react-app-type app — we got to that point pretty quickly.
The problem — and why it took quite some time to get to 100% of the Next.js tests passing — is that Next.js has a lot of tests. I think like 11,000 to 13,000 tests. This test suite is super large, but it tests all the edge cases, and that includes things that bundlers would usually be testing: HMR, Fast Refresh, all of these edge cases, every bundler feature. Because when I was building Next.js very early on, for every single edge case we ran into — instead of just upstreaming a test for it, we also added a test in general to make sure it keeps working in Next.js, because we were doing this orchestration thing I explained. Yeah. And you would hit issues with the orchestration itself, which means it's not a bundler issue, it's actually a Next.js issue. So we already had this very extensive test suite for how a bundler should behave. We basically had to go through and fix every single edge case so that it behaves the same way, or super close to what it was doing before, including a lot of the behaviors that webpack has. An example: there's a comment called webpackIgnore: true that you can add to an import — I'm not sure if you've ever seen it — that makes the bundler ignore the import when bundling. A lot of npm packages have that, for example. I had never seen it before myself, but as we started building this, we obviously ran into it, because Turbopack would start bundling those imports. I found out other bundlers also start bundling them. But in webpack it would just ignore it, and there would be no compilation whatsoever.
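The webpackIgnore magic comment mentioned above looks like this in practice. The comment is a directive to the bundler — it tells webpack (and, per the transcript, now Turbopack) to leave the dynamic import untouched instead of trying to bundle its target. At runtime the comment is inert; Node just performs a normal dynamic import (the `node:path` target here is only a stand-in to make the sketch runnable):

```javascript
// The /* webpackIgnore: true */ comment is read by the bundler at build
// time; plain Node ignores it and resolves the import normally.
async function loadPath() {
  const mod = await import(/* webpackIgnore: true */ 'node:path');
  return mod.join('a', 'b');
}
```

This is why a bundler that *doesn't* honor the comment misbehaves: it eagerly follows an import the package author explicitly asked it to skip — e.g. a runtime-only URL or an asset that shouldn't be pulled into the graph.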
So for the most part your application just works — you don't have to rewrite a lot of things to get to this newer version. That covers the millions-of-people case. At the same time, while we were building this, other vendors kept innovating as well. There's Rspack, which tries to implement the whole webpack API interface while giving you parallelization and slightly better bundling. And at the same time you saw the Vite team build Rolldown, which is really cool as well — it takes a lot of the same learnings that we had for Turbopack. So in practice, the way I feel about it today is that all of these bundlers have started to converge on a similar type of feature set, and the way they work is actually very similar. The only difference is the trade-offs you make around caching, incremental compilation, things like that. So, something I didn't talk about for Turbopack specifically: we went all-in on this incremental compilation architecture — similar to Salsa in Rust, or things like that — where you can annotate functions that are cached automatically and parallelized automatically. This was especially important for us because, besides new apps being 10x larger, the existing apps became 10x larger as well, and we basically want everything to be incremental compilation. This means the very first build might be slow — or slower than any other bundler, for example — but after that we have every single thing it did cached, and if you make another build, it's a lot faster.
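The cached-function idea just described can be sketched in a few lines. Nothing here is Turbopack's real API — the names are invented for illustration — but it shows the Salsa-style principle: each unit of work is memoized by its inputs, so a rebuild only re-runs tasks whose inputs changed:

```javascript
const cache = new Map();
let compilations = 0; // counts how often real work actually runs

// Wrap a pure "task" so repeated calls with the same inputs hit the cache.
function cached(fn) {
  return (...args) => {
    const key = fn.name + JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

const transform = cached(function transform(file, source) {
  compilations++; // a real bundler would parse/transform here
  return `${file}:${source.length}`;
});

function build(files) {
  return Object.entries(files).map(([f, src]) => transform(f, src));
}
```

On the first build every file is compiled; on the next build, only files whose source changed trigger new work — which is why, as described below, the second build of even a 100,000-module app can take seconds instead of minutes. (A real incremental engine also persists this cache to disk and tracks dependencies between tasks, which this sketch omits.)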
So for example, if you have an application that's super large — 100,000-plus modules, things like that — and it takes a minute to build, the incremental compilation would get you to the next build in a few seconds, and every build after that is just faster than the very first one. You rarely hit the I'm-starting-from-scratch case. Yeah. And similarly, we want that to be the same for development: if you boot dev, make your changes, do hundreds of HMRs, quit the server, and boot it up again the next day, it should be as fast as getting an HMR — a Fast Refresh, for example. That always drives me nuts, when people post like, "Oh, Vite booted up in three milliseconds and the first load of my Next.js app took 20 seconds" or something like that. It always drives me nuts when people don't understand how that works. My own personal Waku website takes like 30 seconds before they even console.log the URL, and after that it's fine, you know. But that always drives me nuts. I'm sure it drives you nuts as well when people post stuff like that. If you compare frameworks by booting up and looking at when you see a URL — for example, you boot up any other framework and it says localhost:3000, ready in 12 milliseconds or 50 milliseconds or something — in Next.js it would say 500 milliseconds, or 1 second, or something like that. What I found looking into this — because I was confused, like, does no other work happen? —
— for us it was just a timing issue: we would load the Next.js config, for example — which might include all your external plugins, like Sentry or things like that — and we'd be requiring all of those before it actually said, hey, the server is ready, it took this many milliseconds. So recently we optimized that by first logging the server-is-ready line and then requiring the config and things like that in parallel, basically. There are some interesting optimizations around perceived performance there as well. And have you considered just faking the numbers? Uh, never, no. [laughs] Yeah, that's great. By the way, I just looked it up, and that webpack magic comment was me — I opened the issue on that. I was running into a weird issue where I had a dynamic import, and in webpack, when you have dynamic imports, it gets everything, right? And I had a video in the same folder, and it was loading the entire video and then crashing. That was a weird one. I'm sure you run into all kinds of people doing stupid stuff. Yeah. On top of that, the test suite had a lot of these cases already. It might be that Wes, like eight years ago, had already reported an issue and we had a test for it, right? There were a lot of cases like that as well — things we would never have caught without this test suite. People using it in weird ways.
There are pros and cons to maintaining a framework for so long. The battle-tested features we have are, in a way, the things that almost slow us down. You could start a new framework today, and it would look like it works, and you kind of don't have the burden of maintaining too much. But eventually, at scale, what we've found over the years is that there are so many cases that tend to pile up. And I feel like the big driver of why we went for Turbopack in the first place is just that what's really important for our team is that we keep shipping as fast as possible. It could be a symptom of — I don't know if you've heard the term — the "not invented here" kind of problem: if you go to Google or Meta, they have their own versions of everything you use externally, purpose-built for their own systems, because if you have the resources, it is just always the fastest solution. And kind of like what we were saying earlier: Next.js on its own is somewhat simple, right? You could take our test suite and try to recreate some parts of it, and you could get pretty far on your own. And like Tim said, we spent just an insane amount of time trying to make sure it wasn't a breaking change.
Now, maybe the juicy part I can share here: we're not married to Node or Bun or anything — what matters more is the experience we provide to our users. In the same way, for example, we used to ship a lint command that just bundled ESLint, because that was the industry standard at the time. Yeah. We reconsidered that, and now we're just not doing anything there, but we might consider a world where maybe we should run something like Biome. Yeah. And in the same vein, transparently, every year we ask ourselves: do we want to try exploring Vite? Do we want to continue working with the Rspack folks? What's the best end result for our users, right? And the big blocker was really just the amount of work it would take to make it a non-breaking change. There's also just the fact that I don't think anything pushes a bundler as much as Next.js does today — we talked about server components, Server Actions, cache components. I don't think there's anything else that has to support quite as many large websites. Yeah. So if we wanted to find out, I think we'd need to spend a lot of time exploring this ourselves. And I think the very latest here is: maybe in the age of AI this is very doable, right? Maybe we can finally have an answer to "would Next.js work better on Vite?" I think it's definitely still an open question on our end. Oh, cool. That's good. That's good. And is Turbopack part of the Next.js repo now?
Like, are other projects using Turbopack, or is this just a Next.js thing now? Right now it's part of the Next.js repository. It is a fully standalone tool that you can run; the only thing is that we haven't added a public API for it yet. The reason is that we wanted to really prove out that this bundler has a right to exist — that it can run these Next.js applications and doesn't have any bugs bundling npm packages and things like that. We're past that point now, and we do have a CLI for it. The only thing left is figuring out what the public API looks like. Especially, one of the things that's missing is — we can run webpack loaders, for example, and things like that, but we don't have a plugin API specifically right now. That's what we're currently exploring as well: what that will look like. Then there shouldn't be a reason you can't run Turbopack standalone. But right now, if you wanted to use it — there is another bundler called utoo, which is another framework/bundler built on top of the Turbopack core, and they're using it very successfully. They're contributing back to Turbopack as well, because they're using it this way, and they're building pretty large applications on it. So there's no reason you wouldn't be able to use this as a standalone bundler, in the same way you would use esbuild or Vite or anything. Really, it's on us to add a public API for it. And the only reason we haven't yet is that we want to be really intentional: if we put that thing out, we want to support it and give the right level of support. Cool.
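The "we can run webpack loaders" point above is the compatibility story most existing apps care about. A minimal sketch of wiring an existing webpack loader into Turbopack via next.config.js — the `@svgr/webpack`/`.svg` pairing is just an illustration (substitute whatever loader your app uses), and the exact option names should be checked against the Next.js docs for your version:

```javascript
/** @type {import('next').NextConfig} */
const nextConfig = {
  turbopack: {
    rules: {
      // Run matching files through the named webpack loader,
      // treating the loader's output as JavaScript.
      '*.svg': {
        loaders: ['@svgr/webpack'],
        as: '*.js',
      },
    },
  },
};

module.exports = nextConfig;
```

As noted later in the conversation, this covers simple input/output transform loaders; loaders or plugins that hook deeper into webpack's compilation lifecycle are a different story, which is what the plugin-API exploration is about.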
You should make the API just the Vite API. We've been talking about that as well, because Vite has a pretty decent API. The only thing we've been thinking about there is that there are some parallelization concerns, but it should be doable — the individual pieces are very similar. There's also the unplugin API, which might be good as well, because there are a lot of plugins already written using unplugin. You can basically think of unplugin as, like, OpenNext for webpack loaders and plugins — a similar API — because it bridges every adapter to one single unified API. Yeah, that'd be... Do you think we'll ever see that — like how the web Request and Response APIs have been fairly standardized across everything — do you think we'll ever see that for bundler tools? It kind of depends, because the bundlers themselves are quite different, but as I just said, they're a lot more aligned at this point in the way they work. For transforms, for example — a transform is really basic: source input, some metadata, then return an output — and unplugin gives you a standardized API for that, so having one standardized API across all bundlers would be possible there. The only question is getting everyone to agree that this is the one standard everyone should use. And you know how standards go. Yeah. Yeah. Exactly. Good point. So it's really — yeah, it's that. And then plugins are slightly different, because the way the bundlers work under the hood is very different. Turbopack is nothing like webpack under the hood — it works completely differently. There are different phases.
The way it works is just completely different. So for plugins — for example, you could write a plugin in webpack today that reorders chunks or something like that: it gets access to all your modules, and you can put modules in different chunks or create extra chunks, things like that. You need an intentional API for this, because otherwise it's very easy to introduce very large slowdowns — single-point-of-failure bottlenecks, basically, where things get slower. And naturally, having a plugin API allows everyone to add these plugins, so it would very quickly become "everything gets a lot slower for every app." So you want to be very intentional about what the APIs are and how they're surfaced. Similar to — we didn't talk about this, but adapters have a specific log line in the build that shows how long the adapter took, instead of that time being attributed to Next.js internals. You got me diving into this utoo — it's from Alipay, by the way. Thank you, yeah, I forgot which company built it. Yeah, that's really interesting. Chinese JavaScript is like a whole different world. Yeah, it's crazy — they have some pretty cool tools. This is kind of why we started working with the Rspack folks last year. We had the same realization, right? They introduced us to how ByteDance compiles their own apps internally, and in some ways they might even have the largest set of Next.js apps that the world will never see. I don't know how many engineers they have over there.
They showed us their other frameworks, and it turns out there was another framework there that was internally very similar to Next.js. Those companies are so big that they can have, like, four web infrastructure teams. It's always really fascinating to see — I wish we shared more there. Yeah, one of the best talks I've had — or watched — was Zack Jackson. He's one of the devs on Rspack, and he works for ByteDance. We had him on the podcast as well, and man, that was fascinating — to get a peek into how some of these big companies work and their infrastructure. And I'm sure you guys hear it as well: if you can save them 1%, that's sometimes millions of dollars a year in productivity and in compute and server time, all that stuff. For sure. There's also something we didn't touch on: this whole strategy of building Turbopack and giving it out to everyone using Next.js as the default — and very compatible — has actually paid off, because on Next.js 16, 92% of development sessions are actually using Turbopack. Awesome. The rest is webpack customization, basically — people who have customized their webpack config. And they might still be able to use it today, because if you're only using webpack loaders, you can already add them to Turbopack and they will just run. I won't say every loader runs, but most of these simple input/output transforms work, for sure. Yeah. Beautiful. Cool. Anything else we didn't cover? I know we're running up on an hour here, but I want to make sure we covered everything you wanted to touch on. Yeah, no, I think we're good. So now is the part of the show where we get into sick picks.
Tim, I know you've been on the show before, so you know what a sick pick is. Jimmy, it's really just anything you're enjoying in life right now — could be a podcast or a YouTube channel, or any type of product: a phone charger, a hula hoop, whatever you want. Yeah. I should have asked for sponsors before. I don't know if you guys are much into coffee beans or not — it feels like a lot of engineers are — but there's this shop, I think it's based in California, called Hydrangea. Their coffee beans are amazing, if you like really, really light, almost experimental kinds of coffee. Very pretty. I'd recommend it. Oh — it's Hydrangea Coffee Roasters, like the flower. This looks great. Beautiful bags, too. Tim, what do you got for us? I still remember this section from last time. Last time I called out Apple TV, mostly because of the sci-fi. I will call it out again — if you're into sci-fi, Apple TV is so good. But that's not the one I want to call out, so I'll do two. The second one is the Acquired podcast. I think you all at Sentry are also sponsoring it, because I hear the segment every time. This podcast — if you haven't heard about it, it's pretty popular, so you might have if you're into podcasts — they do four-hour, five-hour, even longer episodes where they talk through the history of companies: how they started, where they're at now, the whole thing. If you're into the history of Microsoft or Google or any other company, it's definitely a nice listen. I usually put it on during chores or whatever, and time just flies by. The one I found super interesting is the interview with Steve Ballmer.
Because Steve Ballmer has such crazy energy — I've never listened to anyone with that much energy for two or three hours straight, just talking about everything. Yeah, Acquired is really — I'm going to watch this; this looks awesome. I really liked the Hermès episode. And what was the other one? The Costco episode. And the Porsche one was really good — Wes, if you haven't listened to that, Doug DeMuro is a guest on it. Oh, cool. I'm definitely going to check that out. Sick. Yeah, really good. Awesome. Shameless plugs — anything you'd like to plug to the audience before we head our separate ways? I would kind of like the audience to read the blog post we published about the adapters, because I feel like not a lot of people read those kinds of blog posts, and we put a lot of effort into it. But I also think it's really interesting how we came together, because it establishes a little timeline of how we work and, more importantly, the engagements we've been having with the group: what are the exact roles we want to have around the adapters, and how we actually do care about making Next.js work well everywhere. Yeah, I think it's a nice blog post, definitely. We'll link it up in the show notes. Yeah. Awesome. For me, it's Next.js 16.2 — jokingly called Snow Leopard by a lot of people. It's this new release we did two weeks ago. It includes adapters, but it includes so many other things: up to 60% faster rendering, the faster dev start we talked about, and Server Actions are now logged when you call them in development, to make it easier to see what's happening there.
We added so many things into this that we had to write three blog posts to cover it. The hydration diff indicator, yes. We already had a hydration diff, but now it includes the text of what is server and what is client. It was good to call it out again, though, because apparently not everyone had seen it before. Basically, if you have a hydration error, it will show you exactly what caused it, or at least the place in your code where it happens. And then what else? Oh yeah, we're doing a lot of research into what frameworks look like with AI, as it's probably on your mind as well: you're using AI agents, and things keep improving quite rapidly. We built a first version that was a Next.js MCP, and now we're also looking at this agent browser. If you haven't seen that yet, look up agent browser; it's really interesting as well. Basically, we wrote a separate blog post, called Next.js 16.2 AI improvements, that talks about a bunch of learnings we had from building frameworks for AI agents alongside humans building on Next.js. One of the learnings is that if you add an AGENTS.md with a small snippet that says where to look for the docs, the agent will suddenly become significantly better at figuring out things like where it has to await your searchParams, or how to write a Next.js structure, and code that it was previously adding 'use client' to, it doesn't add 'use client' to anymore. We've been doing a lot of research into that, which is super interesting. We should probably talk about it another time. But that's, yeah, that was my shameless plug. Yeah. Sick. Awesome. Well, thank you guys both for coming on. Thank you for all your work on Next.js, and I'm pretty excited about this. Appreciate all your time.
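The AGENTS.md tip is easy to try at home. Here's a minimal sketch of what such a file might look like; the wording and the docs pointers are illustrative assumptions, not the Next.js team's actual snippet:

```markdown
# AGENTS.md (hypothetical example, placed at the repo root)

## Where to look for docs
- Before touching routing, caching, or data fetching, consult
  the Next.js docs at https://nextjs.org/docs.

## Framework conventions
- In recent Next.js versions, `params` and `searchParams` are
  async in the App Router: `await` them in Server Components.
- Only add `"use client"` to components that actually need
  state, effects, or event handlers; default to Server Components.
```

A small, stable file like this gives coding agents a known place to learn framework conventions before generating code, which is the effect described above.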
Thank you for having us. Peace. Yeah. Thank you so much.
