I wish this was clickbait

Theo - t3.gg | 00:27:29 | May 12, 2026
The host discusses his long-running satisfaction with Bun and his growing concerns about its future stability and direction.

A bold Rust rewrite of Bun could redefine its future, but it brings big risks around memory safety, stability, and project focus.

Summary

Theo walks through the high-stakes evolution of Bun as it pivots from Zig-based internals to a Rust rewrite, driven by pressure to fix stability and Windows issues and to better support embedding in apps like Claude Code. He weaves in perspectives from Dax of Open Code, Jared's leadership, and industry chatter about Bun's future under Anthropic. Theo highlights the tension between performance bragging rights (e.g., Bun's 3x runtime benchmarks) and the real-world fragility of a massive 960,000-line rewrite with thousands of unsafe Rust calls. He also draws lessons from the TypeScript Go port and discusses how AI tooling could both accelerate and complicate such rewrites. The sponsor moment aside, the core takeaway is a race: can Bun deliver a safer, faster, more maintainable codebase without sacrificing the ecosystem that relies on it? The answer remains uncertain, but the discussion is exactly the kind of bold engineering experiment that defines modern tooling.

Key Takeaways

  • The Rust rewrite of Bun is a roughly 960,000-line effort to replace the Zig surface area; the in-progress codebase currently holds 681,000 lines of Rust alongside 571,000 lines of remaining Zig.
  • 99.8% of Bun's pre-existing test suite passes on Linux x64 glibc for the Rust rewrite, according to Jared, signaling a strong, if not complete, milestone.
  • Unsafe Rust usage is enormous in the rewrite—13,044 calls to unsafe across the codebase—raising concerns about long-term safety and maintainability.
  • Open Code’s move off Bun due to Windows stability, Electron compatibility, and future uncertainty highlights real-world constraints that Bun must address to stay viable.
  • Anthropic’s acquisition of Bun adds incentives to keep Bun funded and stable, but also raises questions about how Anthropic’s policies will affect the tooling and its dogfooding.
  • Zig’s philosophy (and its pitfalls) is driving the rewrite, which raises the core trade-off: line-by-line porting versus rewriting from scratch for safety and lifetime guarantees.
  • Comparisons to the TypeScript Go port illustrate a pragmatic approach: rewriting existing tooling line by line in a closer-to-source language can yield speedups, but it’s not a silver bullet for correctness or safety.

Who Is This For?

Essential viewing for frontend and toolchain developers tracking Bun, Rust-backed rewrites, and AI-assisted code projects. Anyone curious about the trade-offs of large-scale language/tool rewrites and vendor-backed tooling will find this discussion highly relevant.

Notable Quotes

"Fun is a technology that's near and dear to my heart."
Theo intro expresses his affection for Bun and the ecosystem.
"99.8% of Bun's pre-existing test suites pass on the Linux x64 Gib C in the Rust rewrite."
Jared’s stated benchmark milestone for the rewrite.
"There is zero chance he would merge the Rust port without the end result being measurably better than the Zig implementation in performance, memory usage, and of course, stability."
Jared’s condition for merging the rewrite.
"13,044 calls to unsafe."
Theo discussing Rust safety concerns in the rewrite.
"If Bun breaks, Cloud Code breaks."
Linking Bun’s fate to Cloud Code under Anthropic.

Questions This Video Answers

  • How will Bun's Rust rewrite affect its memory safety and stability compared to the Zig version?
  • Why did Dax from Open Code move away from Bun, and what does that mean for Bun's future on Windows and Electron?
  • What are the risks of porting a large project line-by-line to Rust with 13k unsafe calls?
  • How does Anthropic's acquisition influence Bun's development roadmap and open-source status?
  • What can other projects learn from Bun's approach to AI-assisted reengineering and language rewrites?
Tags: Bun, Rust rewrite, Zig language, Open Code, Claude Code, Anthropic, Dax Open Code, Windows stability, memory safety, unsafe Rust
Full Transcript
Bun is a technology that's near and dear to my heart. I've been friends with Jared for as long as I've been doing all this content stuff. He was one of the first guests I had back when I used to do guests on my show, and I have loved the progression of Bun ever since it started. I've been a huge supporter of Bun and the team, even speaking at their final event before they ended up being acquired by Anthropic. And as much as I love to support them, I've found it harder to use Bun recently. And I'm not the only one that feels this way. Dax from Open Code recently made the bold decision to move off of Bun in favor of Node.js. He gave a wide set of reasons, from the stability on Windows to Electron compatibility to not having to spawn out separate processes. But the biggest and most important issue is the uncertainty about the future of Bun. And Dax is not the only one. There's a great article from William Johnson about the future of Bun. They're rewriting in Rust. According to Jared, 99.8% of Bun's pre-existing test suites pass on Linux x64 glibc in the Rust rewrite. Bun is notoriously one of the biggest Zig projects, probably the biggest relevant one. Zig is a language that is very cool, but it also has all sorts of novel issues, strange community problems, and a weird relationship between the Bun team and the Zig team. And it seems like this has all come to a pretty crazy stalemate here where Bun is being rewritten in Rust. What seemed originally to be a fun side project, seeing if agents could run in parallel and rewrite something like Bun, has turned out to be much more viable, and I would be surprised if this doesn't end up merging and shipping. Bun went from forking Zig to forking itself in Rust very quickly. There's a lot of fun details about how they decided to do this and the pipeline that they built in order to make this change happen. A lot to learn about how to migrate software using these AI tools. But there are also the legitimate concerns people have had about Bun, and the reality is that this does address some but doesn't necessarily address all of them. The future of Bun is both brighter than ever but also scarier than ever. There's a lot of details to dive into here. Before we can talk about this fork of Bun, we're going to have to take a fork off this topic for today's sponsor. You've already heard me talk about today's sponsor. It's WorkOS. They're an auth platform. You probably don't need them for the majority of your projects, and I'll tell you the same. I have a lot of different projects and we didn't use WorkOS on most of them. But there's a problem. When the project starts doing well, I end up regretting it almost immediately. But Theo, it's easy to roll my own auth. Yeah, sure it is, to add like one to two sign-in buttons, but what happens when somebody starts spamming your site? Without something like Radar, which just comes built into WorkOS, it's hard to know if that person isn't just a bot or an OpenClaw instance trying to take advantage of your service. Well, if people are doing that, you should probably just put out an MCP, right? Well, how do you authenticate the MCP? Auth is historically one of the hardest parts of getting MCP right. Not only does WorkOS have MCP auth built in as part of the service, they also host MCP Night, which is the coolest event in SF about MCP. I'm saying this as somebody who is notoriously not the biggest MCP fan: I have attended every MCP Night so far because they're a really cool event.
So if you are in San Francisco on May 21st, you should definitely consider coming. It's really fun. But Theo, all the big companies roll their own auth. Oh yeah, big companies like OpenAI, Cursor, Vercel, Baseten, Carta, Webflow, Vanta. Guess what? We live in the real world, and in the real world, auth is too important to do yourself. Stop vibe coding auth and start shipping safely at soyv.link/workos. It's time to talk about Bun in Rust. We've kind of got three layers of problems we have to get through. First, we have the problems with Zig. Then we have the problems with Bun. But then we have the problems with rewriting everything. There's a lot of layers to this one. We're going to start with Zig, and I'll do my best to keep it brief. Everything I've heard about the Zig ecosystem has been weird. While the language is incredibly powerful and is awesome for so many things, it really does feel like a successor to C in a way I haven't seen in other languages, especially the comptime stuff where you can change how the compiler generates the code with special compile-time syntax. It's magical. The ability to adjust the binary outputs at compilation time is so useful for everything from multiplatform support to integrations with other systems. It's a magical feature that they included that allows for some really cool trickery. But these magic tricks can also be seen as footguns. And when you combine those powerful capabilities with a language that isn't inherently memory safe, because it kind of just trusts you to do whatever you need to, you end up with a lot of potential for things like memory leaks and security problems, especially once you get across the multi-system stuff where you're trying to make it work on Windows when it was built originally for Linux. Good luck. And a lot of people have been experiencing that with Bun. Bun is incredible, but it has its issues. I've even experienced a surprising number of issues with Bun and memory management stuff, especially when I'm doing things like Advent of Code. I regularly bump up on the edges of memory stuff, especially in the earlier days of Bun. Once it stabilized, I noticed it was meaningfully better as long as I wasn't using it on Windows, but god damn did I have problems on Windows. Also worth noting that Bun makes it easy to bundle your JavaScript code as a single binary that you can ship, by including both Bun and your code in a compiled entity that can be executed on many different systems. This is how things like Claude Code ship. Claude Code bundles all of the things it needs to run, including Bun itself, when you install the Claude Code binary. This is also how tools like Open Code are currently built. But that means that all of these tools are suffering from the same set of problems with Bun, especially the Windows side of things. As Dax has been sharing publicly, he has been working on the tedious task of moving off of Bun-specific APIs, specifically, in this case, the Bun file APIs. Bun lets you call Bun.file APIs that are much faster and nicer and often more elegant than the Node equivalents. The Node.js file APIs are questionable at best. Thankfully, Bun also does support all of the Node APIs, so you can call them, and they've shimmed them, so they should work properly. But since the Open Code team chose to use the Bun-specific APIs, they're now kind of screwed if they want to move off of Bun.
Jared did jump in here and comment on the Windows stability issue, specifically saying that he's curious if they're still running into issues after Bun 1.3.10. As somebody who does actually use Claude Code on Windows a decent bit, I can say there are certainly still stability issues in Bun 1.3.10. As I mentioned earlier, Dax had a bunch to say about why they're making this move. He said that Node is more stable on Windows; they have a lot of pain on Windows right now. Their Electron desktop app can easily embed it. We are not certain about the future of Bun, for various different reasons which we'll talk about in a bit. It gives them the option to freely explore Deno, and lets people embed Open Code in their own apps more easily, a very underrated, powerful use case. And they benchmark things, and they're not really benefiting from Bun's slight performance advantages. I've been saying this for a bit: Bun's perf wins are not as great as people think for runtime stuff. They're incredible for package management. They're incredible for actually bundling things, for like running it on your machine as a dev. It's great as soon as you've bundled the code and shipped it. People often underestimate how powerful Node is. Bun can go faster in the environment of, like, thousands of requests per second on a web server. But what we're talking about here is at best a 3x once you're trying to do as many requests per second as possible. You go from about 20k on Node to about 60k on Bun with Express. That is huge. Obviously that is way better. Three times faster. But that's three times faster on the order of 20,000 requests per second. If that benefits your code for your local agent, you wrote shitty code. This article from William Johnson goes even further. I'm not going to read the whole thing, but I'll leave it linked in the description if you're interested in doing such. Bun is great software. I use it all the time. Fast and practical. The team ships constantly. You get the idea. I want Bun to win. I want a serious Node alternative, but I'm worried about Bun's future. Anthropic has owned Bun as of December 2025. It also said everything you wanted to hear: it stays open source and MIT licensed, the same team keeps working on it, and the roadmap keeps focusing on high performance JS tooling. It also called out that Claude Code ships as a Bun executable to millions of users. If Bun breaks, Claude Code breaks. Anthropic has a direct incentive to keep Bun excellent. In December, this sounded reassuring, but now that Claude Code is falling apart, it feels less so. Anthropic still makes great models. This is not an "Anthropic bad" post. Well, not entirely. Claude Code used to be great. It did really feel incredible a year ago. Even December of last year, I fell deeply in love with Claude Code, but it did start to seem like things were getting worse. The Bun acquisition actually felt like a thing they were trying to do to make it better. It was way better for Bun as well, since they had no real way to make it profitable. I talked with Jared extensively about their options, and none of them seemed great. But now Claude Code is bad. There are lots of good coding agents out there. I love that T3 Code is in here, even though I'd argue it doesn't fit, because we're not an agent. We're a wrapper for other agents and harnesses. So you can use T3 Code with Cursor, with Codex, with Open Code, soon hopefully with Pi, with Claude Code even, as the agent underneath.
Love to mention, though: definitely try T3 Code if you guys haven't yet. There's a reason we put so much work in. It is a really nice experience. The author had to stop using Cursor, so he went back to Claude Code after not touching it for a couple months, hadn't seen before how bad it had gotten, and was shocked. I love this section here. Anthropic published an engineering postmortem that blamed product-layer issues, including a reduced default reasoning effort, a stale session bug, and a prompt change that hurt coding quality. Appreciate the postmortem. It's better than pretending nothing happened. Honestly, it was probably the first time Anthropic mentioned anything being their own fault. Then there was the OpenClaw mess. Yeah, that one was an absolute mess. He even cites my clip where I show that you can just have certain things in your commit messages and a simple claude -p "hi" is enough for it to route your billing or error. This is textbook enshittification. It just kind of feels like nobody's carefully dogfooding the actual code-level experience before shipping changes, and it feels like they're moving in the wrong direction. And that's why Bun currently worries the author, and also, to an extent, worries me. Bun is embedded in Claude Code. Claude Code appears to be enshittifying. So now I have to worry that Bun could enshittify too. Not because Bun is bad. Bun is not bad. Bun is excellent. Not because the Bun team stopped caring. I do not believe that. The problem is, as Bun and its team get further integrated into Anthropic, so will their policies. The same policies that have led to the collapse of Claude Code. Will we see issues start popping up in Bun that make it seem like the team doesn't even dogfood their own product? I don't know, but I'm not sure I want to continue using it just in case. Yeah, a lot of people are upset, and all of the reasons are totally understandable. I personally have had enough problems that I'm considering moving us off Bun for T3 Code for the package management, because we have lots of fun edge cases we hit all of the time. That said, generally I think a move to Rust makes sense. It's really cool to see the pushing of models in this way, to do this type of bold, tedious task that would never have made sense before. But I am concerned that this type of full rearchitecture is going to introduce whole new categories of bugs that might not get addressed swiftly enough because they don't affect the bundling and distribution of Claude Code. And here's where we get into the problem with rewriting everything. Let's say we have a set of problems. We'll pretend that this is 10 issues in Bun. Of these issues, we'll say some of them over here are, like, package management or monorepo specific. So I'll say monorepo bugs. These are like the ones that we deal with in T3 Code, because we're just using it for package management. And all the rest here are in the runtime environment, and hypothetically these affect Claude Code. If right now we have 10 issues in Bun, and most of them are in this section that affects Claude Code here, and there are a few that don't, a few that exist outside, that are just affecting people with big monorepos using Bun for package management, which ones do you think the Bun team is currently focused on? I think it's pretty obvious they're going to be focused on the ones that affect Claude Code directly. I have already noticed that the bugs on the other side, the ones that aren't necessarily bugging Claude Code too much, have been deprioritized.
So what happens if we do this rewrite? If those 10 issues in Bun become 100 issues in Bun, and they shorten this so the split's a little closer to 50/50, because they keep addressing all the bugs that are affecting Bun as they do this rewrite? My biggest concern is that a new type of technical debt is going to accrue on the other side. Things that weren't broken before the rewrite, because back then they were problems that the Bun team itself faced when they were trying to target a more general audience. Now that Bun's primary reason for existing and being funded is Claude Code support, I could see these new issues never getting the prioritization that they would deserve. And that's a real concern I have. If all of the tests and all of the work and all of the effort that the Bun team puts in is going into making sure that the paths that Claude Code needs are well supported, what happens with everything else? And anybody thinking that a community fork is enough to solve this problem: you have not paid much attention to the history of Node.js. Community forks are very difficult in things that are this complex. I wish you luck. It's not going to happen. This is something the idealist in me is having to come to terms with. The people running the project tend to focus on bugs that affect the work they have at their day job, not necessarily on those that a lot of people are facing. Yep, very real. Before I go too much deeper on the rewrite, I want to talk about how Jared is thinking about this. As he said before, 99.8% of Bun's pre-existing test suite is passing in the Linux compilation environment for the Rust rewrite. It's basically the same codebase, except now we can have the compiler enforce the lifetimes of types, we get destructors when we want them, and the ugly parts look uglier now because they all have unsafe everywhere, which encourages refactoring. And then we have the why: I am so tired of worrying about and spending lots of time fixing memory leaks and crashes and stability issues. It would be so nice if the language provided more powerful tools for preventing these things. The unspoken part here that Jared isn't saying, but I can argue is implied, is that when he was writing all of the code himself, he wouldn't have to worry about those things the same way, because he was writing the code and that was the level he was at when he worked on it: paying attention to those details. Now that he's been abstracted out more, both as a manager running the team, but more importantly here as a person running a lot of agents, this level of abstraction he's at now makes it harder to think about these details, cuz he's not in the weeds. And the effort it takes is even higher than it ever has been before, which is incredibly annoying when you're trying to make real changes and fix real problems, but you're not working at the level where those things happen anymore. It's really rough. He's planning a full blog post about this change: what it means for Bun, benchmarks, memory usage, maintainability going forward, and also the literal process of doing it, because it wasn't just "Claude, rewrite Bun in Rust." Make no mistake, it's a 960,000-line-of-code rewrite. The code truly works, passing the test suite on Linux and soon on other platforms, end to end. He started working on this six days ago. This would have been a massive amount of work by hand. Yep.
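To make Jared's "lifetimes and destructors" point concrete, here's a minimal sketch (my illustration, not Bun code) of the two guarantees he's after: compiler-enforced borrow lifetimes, and deterministic cleanup via Drop.

```rust
// Illustrative sketch, not from the Bun codebase.

// Deterministic destructor: Drop runs exactly once, when `buf` goes out
// of scope, so there is no leaked allocation to chase later.
struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Real code might return memory to a pool or close a handle here.
        println!("freeing {} bytes", self.data.len());
    }
}

// Compiler-enforced lifetime: the returned slice borrows from `buf`, and
// the borrow checker guarantees it cannot outlive it. This is the class
// of use-after-free bug that Zig simply trusts you not to write.
fn first_half(buf: &Buffer) -> &[u8] {
    &buf.data[..buf.data.len() / 2]
}

fn main() {
    let buf = Buffer { data: vec![0u8; 1024] };
    let half = first_half(&buf);
    // drop(buf); // compile error if uncommented: `buf` is still borrowed
    println!("first half is {} bytes", half.len());
} // `buf` dropped here; the destructor runs exactly once
```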
They also posted on the official Bun account earlier about Bun's Zig fork, because they forked the Zig language to fix the things that Zig wasn't doing. They added parallel semantic analysis and multiple codegen units to the LLVM backend on Mac and Linux. This makes debug builds of Bun compile four times faster, improving internal development velocity. That resulted in a new release coming out, Bun 1.3.14. And Jared also said on his personal account, "If we do merge the Rust rewrite, this will be the last version in Zig," suggesting that this new version is going to happen very, very soon. I do love Charlie's pushback here: I love that you're trying this. One fear I have, if I was in your shoes: are you trading 200 known issues for an unknown number of unknown issues that users will end up discovering over time? How do you ship this with confidence? Yep, that's exactly what I was just drawing this diagram for. I am fully with you, as always. I cannot remember the last time I saw Charlie say anything that I didn't agree with 200%. Passing the test suite's amazing, but even a great test suite only covers some portion of program behaviors. Otherwise, you wouldn't have any bugs. Yeah, Jared does also say that it's passing all of their tests. It closes out the 200-plus issues they have. He's yet to see a benchmark where it's slower than the Zig implementation. It's basically the same codebase. That's an important detail. The "it is basically the same codebase" doesn't use async Rust, and like the Zig implementation it uses few third-party libraries. It's really the same thing with better tools for them to prevent crashes. Somebody asked why not Tokio, and he replied that it would make it worse. I love that the branch for this fork is a Claude branch. A couple things I want to check here. The first I want to check is a cloc run. While I count the lines of code, I wanted to see how many times they are writing unsafe in this. If you're not already familiar, one of the coolest parts of Rust is the borrow checker. It's a compile-time check that the way you're allocating and using memory across the entire graph of your application is proven to be safe: that when one thing is accessing memory, you have verified at a code level that it is the only thing using that memory. It is one of the things that makes Rust so complex. It's one of the reasons that Rust compile times are so slow, but it's one of the things that makes Rust such an incredibly safe language. You're writing machine-level code that is memory safe because the compiler yells at you when it's not. There is a way out of it, though: the unsafe keyword. People seem to think unsafe is kind of like any in TypeScript. And I guess conceptually it is. The main reason unsafe exists is so that you can opt out of these borrow checker checks and the memory-safe nature of Rust, to make it easier to write code, but also to get around things that the borrow checker might not honor properly. And generally speaking, you shouldn't have too many calls to unsafe. For reference, I grabbed UV, which, if you're not familiar, is Python's equivalent of Bun in a lot of ways. It's a package and project manager, more of a pip replacement, but it's super cool. Charlie, who I cited earlier, is the creator of UV. They're now at OpenAI, which acquired them, which made a ton of sense. But just to compare, cuz their project is entirely Rust: 98.1% of it is Rust and then a little bit of Python, cuz it is for Python after all. Let's see how much unsafe exists here.
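Before the counting, here's a minimal sketch (mine, not from UV or Bun) of what that escape hatch looks like, and the small-safe-wrapper discipline unsafe is supposed to live under:

```rust
// Illustrative only: an unsafe escape hatch and its idiomatic containment.

// Reads a byte with no bounds check. Safe Rust can't express this, so the
// `unsafe` block is a promise about an invariant the compiler cannot
// verify on its own.
fn byte_at_unchecked(data: &[u8], index: usize) -> u8 {
    // SAFETY: callers of this private helper guarantee index < data.len().
    unsafe { *data.as_ptr().add(index) }
}

// The idiomatic pattern: a tiny safe wrapper that proves the invariant,
// keeping `unsafe` contained instead of letting it leak through the
// codebase. A line-by-line port tends to skip this wrapper step, which is
// one way a codebase ends up with thousands of occurrences instead of dozens.
pub fn byte_at(data: &[u8], index: usize) -> Option<u8> {
    if index < data.len() {
        Some(byte_at_unchecked(data, index))
    } else {
        None
    }
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(byte_at(&data, 1), Some(20));
    assert_eq!(byte_at(&data, 9), None);
}
```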
We're just going to check .rs files. Okay, so across all of UV, we have 165 instances of unsafe in 36 files. I like the way that, as John from my chat put it, unsafe is abused a lot. You're supposed to use it for small abstractions that you can prove are safe, but the compiler can't. Yes, thankfully they only had to do that 165 times in all of UV. Correction: this includes comments and things, so this number is not fully accurate, but you always put a bracket after it, according to my chat. So "unsafe" followed by a space and an opening bracket occurs 73 times across all of UV. UV is 350,000 lines of Rust, and across those we have 70-ish unsafe calls. The Bun Rust rewrite is still going. It has 681,000 lines of Rust and 571,000 lines of Zig. It's also worth noting that the Rust code has over double the number of comments despite being a similar amount of code. It was actually close to triple. You can tell that AI wrote that. But how many unsafes are there in this codebase that's, like, twice as big? Because the UV numbers are great. To be clear: 13,044 calls to unsafe. Hopefully, this emphasizes the problem properly. They aren't really writing Rust. They are writing C++ with Rust syntax. It is entirely possible that this code is fine, that they're using unsafe properly, but they're not. We go to Jared's quote: it's basically the same codebase. This is important. They aren't rewriting from scratch in Rust to solve every problem properly the Rust way. They are line-by-line porting the Zig code to Rust. And since that Zig code wasn't safe according to the borrow checker, they just threw unsafe everywhere. Eventually, this can be reduced. And I do actually see the happy path for where this could go. If you notice a memory leak and you find the function where the memory leak starts from, you can tell an agent: hey, this function's unsafe. Make it not unsafe. Trace all of the things it calls that make this a problem so that you can no longer have the unsafe call here. Effectively, they are getting the tooling to enforce memory safety, in the instances that they choose, from a given point down anywhere within the tree. But as soon as you have an unsafe call somewhere, it has polluted everything above and below it to an extent. So having 13,000-plus of those is a bit scary, especially because they're about halfway done. I want to give one counterexample here that you guys might think I'm crazy for, but hear me out: TypeScript Go. The TypeScript Go port is a port of TypeScript. Historically, the TypeScript compiler has been written in TypeScript because it kind of has to be. TSC needs to encompass all of the weird behaviors of JavaScript, and doing that without JavaScript is difficult. That's why, up until now, there are a lot of tools that can transform your TypeScript into JavaScript, but none of them are checking if your types are right. The type checker has always just been in TypeScript, until the TypeScript Go project started. And their solution, and the reason that they picked Go specifically, is they wanted to do a line-by-line rewrite of the existing TypeScript compiler in Go. They went as far as using codemods initially to try and transform the TypeScript syntax to Go syntax, to rip as much code over as possible to initially get it working, because Go syntax is flexible, and to be frank, stupid enough that it was possible to do a lot of that.
They explored other languages, including Zig and Rust, and chose Go specifically because its syntax was close enough to TypeScript's to allow a line-by-line port that was intentionally, and with full awareness, suboptimal, not using any of the parallelization things that Go is so powerful for. Just by being a native language, the process of rewriting existing code line by line resulted in faster code. Since then, they've gone way further and are starting to optimize it more directly, taking advantage of the things that make Go specifically great. But until that point, they were just doing the line-by-line rewrite, and it was basically not Go code. It was TypeScript code written in Go, compiled by Go, and then running in a native binary instead. This way of thinking can work. When you try to break away from thinking of your language as a language, as the way you communicate with the world, and instead treat the language as a way to tell the compiler what to do, you end up thinking and building in a fundamentally different way. And I think AI is encouraging this style of thinking even more now, where we're treating our languages not as the thing that we learn and obsess over every detail of like we used to, but instead as the thing that is being passed to a compiler to generate the thing that's executed. Go is not the language that the TypeScript team loves. Go is a tool that enables them to make a compiled binary that performs better. In the same way, Rust is not the language that the Bun team loves. It's clear they do genuinely love Zig. Rust has the right tooling that they need to prevent certain types of bugs that they're currently frustrated with. And again, if you're letting the AI do it, it's a lot easier to tell the agent: hey, this unsafe function is causing us issues. Make it not unsafe. Keep running until you stop hitting borrow check errors. That's a powerful loop to run that could actually solve real problems. And the issues that Jared's trying to solve right now don't have good enough feedback loops for the agents to actually solve them without running the whole thing to try and reproduce the issue. Now, hypothetically speaking, it is as simple as running a Rust compilation check and seeing what does and doesn't pass. And Rust's error messages are so good they give me envy as a TypeScript dev. If those error messages are able to be handed to the agent and are readable in any way, this just made it much easier to do AI contributions and AI bug fixes to Bun. There are some other thoughts from Jared here I'd like to read. One of the tricky things about the Rust port is layering. It's currently many dozens of crates, which speed up compile time but block cyclic dependencies. A lot of Bun's Zig codebase uses tagged pointers for interfaces, for things like event loop tasks, process exit callbacks, and non-blocking file IO. He says traits are the idiomatic Rust approach for this, but traits are costly: they don't get devirtualized at compile time or through LTO. Same for function pointers in Zig. We didn't split up packages, so this wasn't a problem, but I want compile times to be faster, and ideally some parts of Bun can be a crate for others to use without pulling in the whole of Bun. Can someone with Rust experience suggest a better approach that has around zero runtime or memory overhead costs and is visible to the compiler and linker?
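For what it's worth, the answer the Rust community usually gives to this exact question is enum dispatch: model the closed set of task kinds as an enum, so the tag lives in a discriminant the compiler can see, and every call lowers to a direct, inlinable match instead of a vtable lookup. A hedged sketch with made-up task types, not Bun's actual ones:

```rust
// Hypothetical sketch of enum dispatch as a zero-overhead alternative to
// `dyn Trait` trait objects; the variant names are invented, not Bun's
// real task types.

struct ProcessExit { code: i32 }
struct FileRead { bytes: usize }

// A closed set of event-loop task kinds, playing the role of the tagged
// pointer's tag from the Zig codebase.
enum Task {
    ProcessExit(ProcessExit),
    FileRead(FileRead),
}

impl Task {
    // Dispatch is a match on the discriminant: fully visible to the
    // compiler and linker, so each arm can be inlined. No vtable.
    fn run(&self) {
        match self {
            Task::ProcessExit(t) => println!("process exited with code {}", t.code),
            Task::FileRead(t) => println!("read {} bytes", t.bytes),
        }
    }
}

fn main() {
    let tasks = [
        Task::ProcessExit(ProcessExit { code: 0 }),
        Task::FileRead(FileRead { bytes: 4096 }),
    ];
    for task in &tasks {
        task.run();
    }
}
```

The catch is that the variant set is closed, which works against splitting Bun into independent crates, and that may be exactly why the macro-plus-LTO workaround Jared describes next exists instead.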
What the current Rust code does: it's a macro that generates extern Rust declarations for each type; release builds use a single codegen unit and enable LTO; and there are no shared libraries, relying on the linker's LTO to do the inlining and direct function calls based on that tag. This seems to work, but it sounds uncommon and messy. Is there a better approach? He's learning the inner workings of Rust as he's doing this, but it is cool to see him share this type of stuff. It's also pretty crazy to get this far in a port and to be willing to upend how you're thinking about the packaging and how you break up your codebase on a fundamental level, because you can just run Mythos in a loop, because they have Mythos access. They can do that. I would not be surprised if a lot of this code was Mythos. I would actually be surprised if a lot of it wasn't. I do think that one of the issues they're going to be running into a lot throughout this project is getting around the compilation time issues. Rust compilation is not particularly fast. Breaking it up into lots of crates is a way to help, but between breaking up the cyclic stuff and not having a proper compile-time primitive like they have in Zig, they have some fun problems ahead of them. I'm curious to see how this goes. Overall, seeing that this post got so much interaction in terms of likes, but so few replies, shows just how niche this knowledge is. And your AI agents aren't smart enough to figure these things out yet. This is top-level difficulty on low-level problems. Good luck. I actually do wish them luck. This is kind of crazy. But 13,000 unsafe calls is also pretty nuts. For those who are worried, I will end with Jared's words here: in case this wasn't clear, there's zero chance he would merge the Rust port without the end result being measurably better than the Zig implementation in performance, memory usage, and of course, stability, as stability is the priority. As burned as I've been in the past by Jared's current employer, I will take his word on this, because I do still trust Jared. He's a good friend. He's working his ass off. Excited to see where this all ends up. This is a super bold rewrite. This is one of the most public-facing "what if we use AI to rewrite the entire system that we're building" type things. And if they actually manage to change the language that all of Bun is written in in under a month, that's pretty groundbreaking. And I hope we take the time to appreciate it, especially now that the TypeScript port to Go is about a year in, give or take, and it was in a lot of development before it went public, too. This is a pretty crazy project that Jared and team have taken on, and I have a lot of respect for them for it, and I do still hope they succeed. I have no idea where any of this is going, but I am excited to see these types of things happen. We need to see more people taking these bold jumps with the technologies that we have available to us, because it lets us see the limitations of the things that we're using and how we are building every day. Do I think this port's going to be great and everything will just be improved as a result? Maybe, probably not. I do think it's going to happen. I do think there will be benefits. I do think there will be issues. And I certainly think we'll all have lots to learn as a result. Curious how y'all feel, though.
Is this just a small fun thing that they're trying out Mythos on, or is this a legitimate rewrite that's going to change how we think about things going forward? I know I'm thinking a lot more, and I hope you are, too. Until next time.
