A rant about JavaScript bloat
An introduction to the problem of excessive and unnecessary JavaScript code being shipped with web apps.
A sharp look at JavaScript bloat: why old patterns persist, how dependency trees explode, and what we can do to trim the web’s JS footprint.
Summary
Theo (t3.gg) revisits a familiar frustration: JavaScript’s ubiquity comes with hidden costs in the form of bloated bundles. He foregrounds James Garbutt’s three pillars of JS bloat as a framework to diagnose why our npm trees have grown unwieldy. The talk dives into old-runtime support, cross-realm safety, and an obsession with tiny, reusable building blocks that multiply dependencies. Theo walks through concrete examples (is-string, has-symbols, primordials, and cross-realm regex checks) to show how seemingly harmless utilities cascade into massive, redundant graphs. He also highlights the prevalence of ponyfills, polyfills, and atomic architecture that double or triple the number of packages downloaded each week (think 96M+ and 133M+ weekly pulls). The sponsor segment with Browserbase illustrates practical shifts in how AI agents interact with the web, including new capabilities in GPT 5.4. Throughout, Theo emphasizes practical steps: prune direct dependencies, adopt replacements via the e18e/npmx ecosystem, and use tools like Knip and npmgraph to identify dead code and opportunities for cleanup. He ends with a call to action for maintainers and companies to fund essential open source work to keep the web lightweight and healthy. The takeaway is clear: we can reduce bloat, but it requires deliberate maintenance, tooling, and a willingness to push legacy patterns aside for modern, leaner paths. Overall, the video blends critique, real-world examples, and actionable tooling to challenge developers to rethink how they consume and ship JavaScript.
Key Takeaways
- Old-runtime support and cross-realm safety tools (like is-string and primordials) persist in npm trees to handle legacy engines and environment mutation, bloating modern projects.
- Atomic architecture practices, splitting tiny functions into separate packages (e.g., shebang-regex, isexe, get-stream), can lead to many small dependencies that are downloaded millions of times weekly.
- Ponyfills and polyfills linger long after native support exists (the globalThis and indexof packages still see 49M+ and 2.3M+ weekly downloads), introducing unnecessary risk and maintenance overhead.
- Dependency duplication and cross-tree fragmentation (two versions of is-docker, path-key, or npm-run-path) dramatically inflate install sizes and the surface area for supply-chain attacks.
- Practical remedies include the e18e module-replacements lists, Knip to detect unused code, and npmgraph to map dependency trees; migrations (chalk to picocolors) can reduce footprint.
- Maintainers should actively question each dependency: remove redundancies, prefer inline code when possible, and consider forks or custom branches for legacy needs.
Who Is This For?
Essential viewing for JavaScript and Node.js maintainers, frontend leads, and open-source contributors who manage large dependency graphs and want to shrink bundle sizes without sacrificing compatibility.
Notable Quotes
"The three pillars of JavaScript bloat."
—Theo introduces the central framework for diagnosing JS bloat by referencing James Garbutt’s three pillars.
"Most of the JavaScript being downloaded isn't necessary at all."
—A core claim Theo repeats to set the stage for the rest of the talk.
"Two lines of code… 133 million times a week."
—Theo highlights how tiny utilities can dominate download counts due to atomic architecture.
"Ponyfills did their job at the time. They allowed library authors to use future tech without mutating the environment."
—Explanation of why polyfills and ponyfills linger past their usefulness.
"The small group should pay the cost. They should have their own special stack that pretty much only they use."
—Theo’s closing thesis on distributing the burden of legacy tooling to its rightful users and forks.
Questions This Video Answers
- How can I identify JavaScript bloat in my npm dependencies?
- Which npm packages are most commonly responsible for dependency bloat and how can I replace them?
- What are ponyfills and how do they differ from polyfills in JavaScript libraries?
- How can I use npmgraph and Knip to trim my dependency tree?
- What is the e18e initiative and how can it help reduce JavaScript bloat in projects?
JavaScript Bloat, npm Dependency Trees, e18e, npmx, Polyfills, Ponyfills, Cross-realm Safety, Module Replacements, CLI Tools, Bundle Size Optimization
Full Transcript
I want to be clear about something. JavaScript is far from my favorite language. There's a lot of things I like about it, but there's a lot that I don't. That said, it is almost essential because it's one of the few languages that's supported everywhere. And I mean everywhere. There are some costs to that though, and those costs add up, and the result often ends up being a lot of bloat. Most of the JavaScript that's being shipped around the web is almost entirely unnecessary. We're talking about bloat that just doesn't need to be there at all.
Some of it's cuz our tools are outdated. Some of it is because the devs are lazy, but some of it is because it made sense at the time and we just never took the time to fix it after. And the result is a web where most of the JavaScript being downloaded isn't necessary at all. And that sucks. I think it's important for us all to take a moment to reflect on this. Why is there so much useless JavaScript being passed around, and what can we do to fix it? At the very least, we need to be able to identify it.
My team just sent me a very exciting article from James Garbutt: The Three Pillars of JavaScript Bloat. This is a breakdown of all of the different types of bloat that exist in our code bases that make it so we end up with these giant bundles that are full of garbage that nobody needs. If we ever want to fix this problem, we need to be able to identify it and describe it. And there's a lot to dig into there. Thankfully, not everything on the web is bloat. And one of the things that isn't is today's sponsor.
I think we all understand now that our agents get way smarter when they have access to the web. But what we usually mean when we say that is that they can go to sites and get information from them. But if you want to do more, if you want to actually interact with and navigate the web, you need a browser for your agents to use. Today's sponsor is Browserbase, and they're here to give you the best possible experience for building with agents and the web. A few days ago, Ben was mad at me and he was threatening to leave SF, so he built an app using Browserbase to give him the best flight deals for leaving to go to LA.
But there's something very interesting here that is a new thing as of GPT 5.4. We have two versions of this project. We have the traditional browser use version where it uses browser use primitives to trigger clicks and keyboard presses and all of those things, but 5.4 was trained with a new capability, the ability to write JavaScript that is meant to execute on the page to fake these interactions. And it turns out it's way more accurate and way faster. I just launched the app and thanks to browser base, you can actually watch as the browser is used throughout the whole process.
Here we see it going to the flight tracker. It is already interacting with the page, and we can see in the logs in our app what's going on and what it is doing. And here we see the code that it is writing to execute on the page. Pretty cool that it finds the button, selects the right button from that list, waits for the timeout for the focus from the click, and then does the next steps. And it's wild to watch. From my experience, the JavaScript execution method has been so much more reliable that I personally went from "yeah, I guess this isn't really for me" to "yeah, I want to start using browser use type stuff."
These types of tasks used to take minutes and now they take seconds. If you want your agents to really use the web, look no further than soyb.link/browserbase. I'm actually really, really excited to go through this article with you guys. Shout out to James, the author. He works really hard on ecosystem performance things, just trying to make our JavaScript ecosystem better. That's why he's so involved with things like unjs and all these other cool projects, as well as npmx.dev, which is a really cool new npm registry viewer tool. Chokidar is essential for a shitload of stuff that I do.
This is the package that is used for tracking what changes are happening on your file system in Node for things like hot reloading, and he's an active contributor to it. It is rare that one dev touches that many different things in the open source world. So very excited to see his thoughts here. He knows this [ __ ] better than almost anyone. Three pillars of JS bloat. Over the last couple years, we've seen significant growth of the e18e community and a rise in performance-focused contributions because of it. A large part of this is the cleanup initiative, where the community has been pruning packages which are redundant, outdated, or unmaintained.
One of the most common topics that comes up as part of this is dependency bloat: the idea that npm dependency trees are getting larger over time, often with long-since-redundant code which the platforms now provide natively. In this post, James wants to briefly look at what he thinks are the three main types of bloat in our dep trees, why they exist, and how we can start to address them. I couldn't be more excited for this one. This is one of those few people that really knows what they're doing here. Oh [ __ ] one of my chatters has actually just joined e18e today.
That's really cool to see. First piece, and this is important: older runtime support, with safety and realms. What the hell is a realm? We'll know in a moment. Here's an example graph they have. There's the is-string package. is-string checks has-tostringtag, which has a dependency on has-symbols. It also has a dep on call-bound, which has get-intrinsic, which has a shitload of additional subpackages, a handful of which touch call-bind-apply-helpers, and those touch function-bind and es-errors. So yeah, quite a dependency tree for is-string. Graphs like this are a common sight in many npm dependency trees.
A small utility function for something which seems like it should be natively available, followed by many similarly small deep dependencies. So why is this a thing? Why do we need is-string instead of typeof checks? Why do we need hasown instead of Object.hasOwn or Object.prototype.hasOwnProperty? Well, there's three things. First, there is support for very old engines. Second, there's protection against global namespace mutation. Third, and most importantly, there's cross-realm values. Oh boy. First, we have the support for old engines. Somewhere in the world, some people apparently exist who need to support ES3, like Internet Explorer 6 and 7 or extremely early versions of Node.
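To make the typeof question concrete, here's a rough sketch of my own (not the actual is-string source) showing why a plain typeof check misses the edge cases these packages exist for:

```javascript
// Sketch: why `typeof` alone misses the cases these packages target.
// Boxed strings (and strings from other realms) report "object".
function isStringSimple(value) {
  return typeof value === 'string'; // misses new String('hi')
}

// Brand check, roughly the trick is-string relies on. (The real
// package also guards against Symbol.toStringTag spoofing, which is
// part of why has-tostringtag exists as a dependency.)
function isStringRobust(value) {
  return (
    typeof value === 'string' ||
    Object.prototype.toString.call(value) === '[object String]'
  );
}

console.log(isStringSimple(new String('hi'))); // false
console.log(isStringRobust(new String('hi'))); // true
```

For the overwhelmingly common case of plain string primitives in one realm, the typeof check is all you need.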
Fun fact about this. Curious what he links as a footnote here. He says that he believes there are people who need these old engines, but he would love to see examples. There's actually a company that does this. They're named HeroDevs. They have what I would consider some of the most thankless work in the industry. Their job is to come in as contractors at companies that have old, ancient, terribly maintained and structured code bases that none of the engineers at the company want to work on, if they even have engineers there. They maintain these old end-of-life projects as well as their dependencies.
They have forks of things like Node.js 0.8 that bring in security fixes while still having the weird [ __ ] that the company needs for the old version of Node. They also will often have to change packages, fork them, and maintain them themselves in order to keep them compatible with old versions of Node. This has even burned some people in the past: they took over a poorly maintained repo that had a dependency that was being used by Svelte, and when they modified it, they wanted it to work for these really old things. So they took a package that had no dependencies, and they ended up adding different things that they had made in order to let it support old, ancient, unmaintained versions of Node.
But that resulted in a package that used to be a single small dependency in SvelteKit becoming like 20 sub-dependencies. Rich Harris flipped a [ __ ] And that was the first time a lot of people saw the type of work that these devs have to do. I know a couple employees at this company; they live in the shadows. They like development, the science, more than the ecosystem. When my friend told me they were going to start working there, that they were interested, I was like, "Oh, you're joking, right? Like there's no way you'd actually want to do that type of work." And they were like, "No, I love this stuff.
I love these hard, thankless problems that nobody thinks about." Kind of nuts. Yeah, these people do actually exist. And to James, who seems to not know any of them according to his footnote: they are the ones you need to talk to if you want examples of people using these really old things that have this crazy set of weird dependencies that aren't needed in most places. It's all them. There's a bunch of things that we take for granted in modern JavaScript that just don't exist in these old places. For example, forEach on arrays, or reduce on arrays, or Object.keys, which I use all of the [ __ ] time. I do not know how I would write JS without these. These are all ES5 features, meaning they simply don't exist in ES3 engines. For those unfortunate souls who are still running old engines, they need to reimplement everything themselves or be provided with polyfills. Alternatively, it'd be really nice if they just upgraded. Totally agree there. Protection against global namespace mutation: the second reason for some of these packages is "safety", in quotes. Basically, inside of Node itself, there's a concept of primordials.
These are essentially just global objects wrapped at startup and imported by Node from then on, to avoid Node itself being broken by someone mutating the global namespace. For example, if Node itself uses Map and we redefine what Map is, we can break Node. To avoid this, Node keeps a reference to the original versions of things like Map, which it imports rather than accessing the global. They have a whole section about this in the repo. This is fascinating. I did not know that Node rebinds all of the default globals in a custom namespace early so that it can use them without your [ __ ] overriding them.
Fascinating. This also makes a ton of sense as an engine, because it shouldn't break if somebody overrides some core [ __ ] Good stuff. As always, the Node project is underappreciated for the level of complexity that they're operating within. Like, Node and V8 are two of the most crazy complex, generous gifts to the software development world that get treated like [ __ ] because people just hate on JavaScript. That has always annoyed me, even back when I hated JS. Okay, the last time I hated JS was like the last time I had to write JS, to be fair, but you get what I mean.
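The primordials idea can be sketched in a few lines; this is an illustration of the pattern, not Node's actual implementation:

```javascript
// Sketch of the primordials idea: capture references to built-ins at
// startup, before any user code gets a chance to touch them.
const primordials = {
  MathMax: Math.max,
  ArrayIsArray: Array.isArray,
};

// Later, some dependency clobbers a global...
Math.max = () => { throw new Error('sabotaged'); };

// Code that reads the global is now broken, but code that uses the
// captured reference keeps working.
console.log(primordials.MathMax(1, 2)); // 2
```

Node does this internally; the controversial part is userland packages like math-intrinsics doing the same thing for everyone by default.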
Apparently, there are some maintainers that believe this is the correct way to build packages, too. That's why we have dependencies like math-intrinsics in the graph above, which is just a re-export of the various Math.* functions in order to avoid them being mutated. What? Yep, math-intrinsics. All that package does is re-export the defaults so that if they get overridden, you can still rely on them. That is obnoxious. And now we get into realms: cross-realm values. This is what happens when you want to pass things from web pages to iframes and stuff like that. In this situation, a new RegExp in an iframe is not the same thing as the RegExp class in the parent page.
So if you make a regex inside of a page and you make a different one inside of the iframe, they don't match. If you do something like comparing window.RegExp versus the iframe's window.RegExp, the values are different, and an instanceof check would end up being false if the value came from the iframe. For example, he's a maintainer of Chai, and they have this exact issue: they need to support assertions happening across realms. The test runner may run tests in a VM or even in an iframe, so they can't rely on instanceof checks. For that reason, they have to use Object.prototype.toString.call(val) === '[object RegExp]' to check if something's a regex, which works across realms.
It doesn't rely on the constructor. Apparently, it's the same thing that is-string does, because if you pass a new String from one realm to another, you might not be able to check against a global like window.String, since the value came from a different realm. God, people don't appreciate how hard JS can be at these levels. I don't know about you guys; I've never had to deal with this myself. I've never had to worry about what happens if I pass a value in or out of an iframe and can't do instanceof calls on it.
That's very different from the world I live in personally. I like this call out here. All of this makes sense for a very small group of people. If you're stuck supporting very old engines, passing values across realms, or you want protection from someone mutating the environment, these packages are exactly what you need. Problem is that the vast majority of us don't need any of this. We're running modern versions of Node, and by modern I mean from the last 10 years, or we're using an evergreen browser that gets automatically updated. We don't need to support pre-ES5 environments.
We don't pass values across frames, and we uninstall packages that break the environment. Yes; the exceptions are, again, mostly HeroDevs. Yeah, these layers of niche compatibility somehow made their way into the hot path of everyday packages. The tiny group of people who actually need this stuff should be the ones seeking out special packages for it. Instead, it is reversed, and we all pay the cost. Exactly. Thank you to the chatters and, begrudgingly, thank you to Grok for the help finding this example from way back: the axobject-query package, which is used by a bunch of things for programmatic detection of accessibility issues.
It originally had almost no dependencies, and a different developer had just come in to take over the project because it wasn't being maintained by the original devs anymore. They agreed to let him take it over. And when he did, he immediately focused on backward-compatibility maxing, which he did by fundamentally changing the structure of the dependencies. And here you can see they added a bunch of additional dependencies that largely came from that team in order to try and make back compat with really old versions of Node possible. This change pissed off so many people and confused them so much that they thought it was a supply chain attack.
Most devs had just never seen this type of back compatibility being added to a project. And it looked like this too, since the dev ljharb was the one who built most of the sub-dependencies that were added, because, again, remember: it is his job to maintain these ancient code bases and build all of the key pieces that are needed for things to work in stuff like old Node 0.4 projects. That said, Sam was entirely wrong here. This was not a supply chain attack under the guise of supporting Node 0.4, for nobody would do a supply chain attack with something that's that obviously stupid.
They would do it with something that seems more promising, like performance improvements. Nobody would ever do anything under this, cuz this is not something anyone would want to merge. This is something that they're doing because they have to maintain these old code bases, and that's the easiest path they've had. My problem with HeroDevs, and I have told this to the people who I know there, is that they treat these core dependencies as though they have to support everything. They treat these core dependencies as the path to get that legacy support into their things, not even through custom versions of the packages, but through forcing the main versions of these packages to continue to be compatible with these ancient versions of things.
And the result is a bunch of unnecessary bloat. It's worth noting that of those 60 new direct dependencies that were added, one of them, deep-equal, which is necessary when using ancient versions of Node for comparisons of things, added 50 by itself. This nearly doubled the number of packages that were necessary to install SvelteKit. And the Svelte team was [ __ ] pissed, as they should be. It's kind of crazy that a minor version bump of something that you're already using can suddenly double the number of dependencies that you have in your codebase. Again, I want to emphasize just how thankless this work is, but also that I don't think it is appropriate for the HeroDevs team to effectively be forcing their unique needs onto the default path for other devs.
These things are so uncommon that they probably need to be in either custom branches or tags, or ideally even a fork with a different name for maintaining these old versions of the packages. That was all just for the first pillar, by the way. We've got two other separate things that can cause bloat. Thankfully, these ones are a little bit simpler. The next is atomic architecture. Some folks believe that packages should be broken up to an almost atomic level, creating a collection of small building blocks which can later be reused to build other, higher-level things.
So you end up with graphs like this one, where you have execa, which has a couple sub-deps like Sindre's merge-streams and cross-spawn, which depends on which, path-key, shebang-command, shebang-regex, etc. All of these are really simple things: shebang-regex is a very simple regex for checking if something is a shebang or not; which pulls in isexe, which is what that does; get-stream has its own separate readable-stream implementation it references, as well as an is-stream function, instead of just having them inside. You get the idea: when there's a part that is reusable, it is broken out to be reusable. Although it's funny: I just referenced the shebang-regex one, and then that's the example the author gives too.
This is a package that is downloaded 133 million times a week. Do you understand how absurd that is? Two lines of code, with four major versions, downloaded 133 million times a week. Two lines of code. It's a regex. It's a regex and an export. You might start to be thinking we did this to ourselves. And you know what? I think we did. By splitting code up to this atomic level, the theory is that we can then create higher-level packages simply by joining the dots. I get this.
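For context, those two lines are essentially this (paraphrased from memory, not the verbatim source):

```javascript
// Roughly the entire contents of a package downloaded 133M+ times a
// week: one exported regex that matches a shebang line like "#!/bin/sh".
const shebangRegex = /^#!(.*)/;

console.log(shebangRegex.test('#!/usr/bin/env node')); // true
console.log(shebangRegex.test('console.log("hi")'));   // false
```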
I am a big fan of the Unix philosophy myself. But, uh, the Unix philosophy should not be applied at the line-of-code level. Here are some common examples, and I'm going to check how popular these are on npmjs as we go through. We have arrify, which converts a value to an array. If it's an array, then we return the array. If not, then we return it with an array wrapper. It's one line of code. And this package is downloaded 32 million times a week. Next, we have slash, which replaces backslashes in file system paths with traditional slashes.
This package is downloaded, wait for it, 96 million times a week. We've got cli-boxes, which is a JSON file containing the edges of a box: 40 million times a week. It literally is just a JSON blob with top-left character, top character, top right, right, bottom right, bottom, bottom left, left. 40 million times a week. It's a JSON file. I'm going to die. I knew it was bad. I did not know it was this bad. Oh, path-key. I've used this one. I've used this one. Oh no. How popular is this? 158 million times a week.
It's a 4-kilobyte JS file. It is literally just a file. Let's look at the JS for it: if the platform is not win32, return 'PATH'. Otherwise, we have to go steal the path key in Windows, because Windows has a differently cased path. I can't believe the world we're in. We did this to ourselves. Today's sponsor is all about hiring. If you need great engineers, look no further than G2i. Actually, that's not quite it: the sponsor is the AI engineer conference in Miami. Yeah, I kind of tricked you guys. G2i does a lot more than just hiring. They run some of my favorite tech events.
I've been going to React Miami for three years now, and it's consistently my favorite tech conference every year. They are still doing React Miami. In fact, it's right afterwards. If you want to hang out with cool engineers like me, Prime, and all the other awesome folks who show up for this stuff, I highly recommend checking out the AI engineer World Fair in Miami as well as React Miami from April 20th to 24th. If you don't have the time for a conference, it's probably because you need to hire more. And if you do, I highly recommend hiring at soy.
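Back to the atomic packages: here is roughly what two of them boil down to. These are paraphrased sketches of my own, not the verbatim sources (the real arrify, for instance, also special-cases null and undefined):

```javascript
// arrify-style: wrap a value in an array unless it already is one.
const arrify = (value) => (Array.isArray(value) ? value : [value]);

// slash-style: convert Windows backslashes to forward slashes.
const slash = (path) => path.replace(/\\/g, '/');

console.log(arrify('a'));            // [ 'a' ]
console.log(arrify(['a', 'b']));     // [ 'a', 'b' ]
console.log(slash('src\\index.js')); // src/index.js
```

Either of these is short enough to inline at the call site, which is exactly the article's point.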
Chat made a good call out: the dev who wrote this article is also working on npmx, so I should definitely be using that for these searches. It's a lot quicker. When I search onetime, which ensures a function is only called once: 131 million downloads a week. And it's once again Sindre. Poor Sindre has had to write so much random [ __ ] that we all rely on. He wrote Ky, which is still my favorite fetch wrapper, as well as a ton of these thankless small projects that are necessary-ish when you want to reuse things and have them be simpler.
I have a lot of respect for the work he does. I rely on so much of his [ __ ] regularly, but for some of these things, it hurts that they exist. Oh, this is cool: string-width. This is how long the string actually is when printed, not just how long the binary is. Thankfully, this one's relatively small for what it is. And it's actually kind of annoying to do this, right? Because a character like this is one char, but it is actually wider when you print it. And if you put a bunch of these tags around it, it's still only two chars long.
And string-width handles it. That's actually a good example of a thing that's kind of annoying to calculate and do, right? Especially with emojis and [ __ ] Yeah. One of the cool things I didn't realize on the site is that npmx will show you the total downloads per week of someone's packages, and his packages are downloaded almost 10 billion times per week. That is insane. Oh god, how popular is is-windows? Okay, it's only 27 mil. It's not as bad as some of the others, but you get the idea. These are all very, very, very popular packages.
Absurdly so. You want to build a new CLI, for example? You can pull a few of these in and not worry about implementation. You don't need to deal with env PATH handling or Windows paths or shells yourself; you can just pull a package for that. Great. I actually have a whole video about why I think libraries are kind of dying as a result of vibe coding. It's really cool that it's a lot easier to write this stuff yourself than it ever was before, because AI will know this, or at the very least find the package, hopefully read the code, and realize it shouldn't be using it.
TBD, but you get the idea. So why is this a problem? In reality, most or all of these packages did not end up as the reusable building blocks they were meant to be. They're either largely duplicated across various versions in a wider tree, or they're single-use packages which only one other package uses. For example, shebang-regex is almost solely used by shebang-command, which is by the same maintainer. cli-boxes is used almost solely by boxen and ink, also both by the same maintainer. onetime is used almost solely by restore-cursor, which is again the same maintainer.
These are separate packages that have no reason to be separate packages. Each of these having only one consumer means they are the equivalent of inline code, but cost us more to acquire because of npm requests, tar extraction, bandwidth, etc. Yep. If three different things you used required shebang-regex, having it as a single shared package could theoretically make sense; the fact that it's one line makes it less so, but hypothetically speaking. But that's not the case. This is just dumb. And then we have the duplication problem. Taking a look at the Nuxt dependency tree, we can see a few of these building blocks that are duplicated because there are two different versions of the same thing.
Another fun problem. There's two versions of is-docker. There's two versions of is-stream. There's two of is-wsl (is Windows Subsystem for Linux). There's two of isexe, two of npm-run-path, two of path-key, and two of path-scurry. That's because the packages they're relying on both rely on is-docker or is-stream, but on different versions, so you end up with two copies of the same thing. Inlining these does not mean we no longer duplicate the code, but it does mean we don't pay the cost of things like version resolution conflicts and the cost of acquisition. Inlining makes duplication almost free, while packaging makes it expensive. Yep, entirely correct. And one of the biggest issues, especially recently: the larger supply chain surface area makes it easier to attack and hijack people's [ __ ] It's so common that people now speculate, when somebody is trying to make something work in older projects, that it is a supply chain attack, because we just expect this now.
The more packages you have, the larger the supply chain surface area is. Every package is a potential point of failure for maintenance, security, and so on. For example, a maintainer of many of these packages was compromised last year. This meant that hundreds of tiny building blocks were compromised, which meant that the higher-level packages we actually install were also compromised. Logic as simple as Array.isArray probably doesn't need to be its own package: security, maintenance, and so on. It can just be inlined, and we can avoid the risk of it being compromised. Absolutely agree. Similar to the first pillar, the older runtime support, this ended up being a thing that made its way into the general hot path.
So, the thing most people do happens to now involve this. Even though these are conceptually good things that are useful in niche, specific places, it became the thing we're all doing whenever we install [ __ ] And the result is that we all have to pay this cost without getting any benefit ourselves. I'm going to turn off the dark mode for a sec, so, uh, prepare your eyes. It just happened to break this particular view, because canvases are hard. This is the ponyfills section: ponyfills that overstayed their welcome. There's a lot of things in here that were really useful in 2010, like isarray, which still gets 160 million installs a week, or object-keys, or string.prototype.trim, that are way too popular. And these dates, by the way, are effectively when the native support occurred.
So in 2010 all these things became common in JS, yet the packages are still huge. is-nan got built in in 2013; the package still gets millions of installs a week. array.prototype.find, string.prototype.repeat, object.assign: you get the idea. All of these features that have been in JavaScript for 15-plus years, for some of them, still have packages being installed that are entirely unnecessary. If you were looking away because of the lack of dark mode, you can look back now. If you're building an app, you might want to use some future features that your chosen engine doesn't support yet.
In that situation, a polyfill can come in handy. For all of you guys who really wanted Temporal, you've probably been using a polyfill for a while now. Polyfills provide fallback implementations where the feature should be, so you can use it as if it were natively supported. Very, very cool thing. TypeScript used to even introduce certain polyfills for things itself, so that features most people wanted, especially TypeScript-focused ones, could land early and you could have access to them. Generally, I think that's an okay-to-good pattern. For example, temporal-polyfill will polyfill the new Temporal API so you can use it regardless of whether the engine supports it or not.
I swear, when I started doing JavaScript heavily back in 2018, we were already talking about how exciting Temporal was. It's just starting to ship now, eight years later, which is kind of crazy. And here we are. Say you're building a library instead. What should you do? In general, no library should load a polyfill, as that is a consumer's concern and the library should not be mutating the environment around it. Again, polyfills add things to globals so they will work in your environment. If you are personally writing user-facing code in your project and you want Temporal, go install the polyfill.
That's totally fine. It's a top-level dep that makes sense. Polyfills sneaking in through other things makes way less sense. Thankfully, there are very few projects that do that, that bring in a polyfill that can [ __ ] up your environment. But there are a lot of devs who wanted something like this, and I can sympathize here. There are a lot of devs using features that might be kind of new in their packages, and those features are probably causing bugs for a lot of their users, the users being developers implementing them. So if you made a package that uses Temporal, and a person installs that package and it breaks because they don't have Temporal installed, you're probably tired of telling all of them: make sure you install the polyfill, make sure you install the polyfill.
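To make the environment mutation concrete, here is a minimal sketch of the polyfill pattern. The `Array.prototype.last` feature is hypothetical, invented purely for illustration:

```javascript
// Sketch of the polyfill pattern: if the engine lacks a feature, bind an
// implementation onto the global environment so all code can use it.
// ("Array.prototype.last" is a hypothetical feature, just for illustration.)
if (typeof Array.prototype.last !== 'function') {
  Array.prototype.last = function () {
    return this[this.length - 1];
  };
}

// Every module in the process now sees the patched prototype:
console.log([1, 2, 3].last()); // 3
```

This is exactly why libraries shouldn't ship polyfills: one import quietly changes behavior for every other module in the process.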
So the solution for this is called a ponyfill. It's kind of like a polyfill, but it gives you a different import path. So instead of changing the runtime, instead of binding things to globals, it just lets you import a fake Temporal that you can call directly instead. I can absolutely sympathize with this because I had a lot of weird problems when I was maintaining projects at Twitch. There was an internal dashboard for managing reports when users report people on Twitch. And almost all of the safety-ops people who actually went through those reports were using Chrome, as were the devs.
Everything worked great in Chrome. If you ever wonder where my disdain for Firefox started, it was then, because we had one single person who really wanted to use Firefox, and the amount of random [ __ ] that broke was horrifying. Lookbehind in regex just wasn't supported, and it didn't even give good errors when you tried. That meant I had to polyfill regex behavior for Firefox. Yes, really. This is the world we live in. Ponyfills are basically polyfills that you import rather than ones that mutate the environment. This kind of works since it means a library can use future tech by importing an implementation, and it can pass that around, so it can use the native one if it exists.
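That import-and-fall-back shape, sketched with the same hypothetical `last` helper (not a real package):

```javascript
// Sketch of the ponyfill pattern: expose the implementation as a normal
// import/function instead of patching globals. ("last" is hypothetical.)
function last(arr) {
  // Prefer the native method if this engine ever ships one...
  if (typeof Array.prototype.last === 'function') {
    return arr.last();
  }
  // ...otherwise use the local fallback. Nothing global is mutated.
  return arr[arr.length - 1];
}

console.log(last(['a', 'b', 'c'])); // prints "c"
```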
Otherwise, it'll use its ponyfill. None of this mutates the environment, so it's safe. So, why is this a problem? It seems fine if they're not mutating the environment, right? Well, the code's still there. These ponyfills did their job at the time. They allowed library authors to use future tech without mutating your environment and without forcing consumers to know what polyfills they had to install. The problem comes when the ponyfills outstay their welcome. When the feature they fill in for is supported by all the major engines we care about, the ponyfill should be removed.
However, this often doesn't happen, and the ponyfill remains in place long after it's needed. We're now left with many, many packages which rely on ponyfills for features we've all had for a decade or more, including globalThis, because globalThis didn't use to exist, but it's been widely supported since 2019. The globalthis package still gets 49 million downloads a week. Or indexOf, are you kidding? That's been supported since 2010, and the index-of package still gets 2.3 million downloads a week. And then object.entries, which has been supported since 2017, still getting 35 million downloads a week.
Unless these packages are being kept alive because of pillar one, which is support for the old things, they're usually still used just because nobody ever thought to remove them. When all long-term-support versions of engines have a feature, the ponyfill should definitely be removed. Absolutely agree. Oh boy, the index-of package. The description is great: "Lame indexOf thing." Thanks, Microsoft. This was clearly only built to support Internet Explorer at the time. We look at the code, the very complex code: if indexOf does exist in your runtime, which is checked with an empty array's indexOf, we just use the indexOf of the array you passed, looking for the object.
And if it doesn't, then we for-loop across it to get you the index. This is 10 lines of code, downloaded millions of times a week, that aren't just simple and easy to write yourself; they're entirely unnecessary. The author of this package is TJ, the creator of Express, who left JavaScript a decade ago. Dogpaw and Steven are helping a ton here. Thank you both, by the way. Huge shout-out to you guys for making sure I get all the fun facts in this one. So, now the most important part. What can we do about all of this?
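As a quick recap before getting into solutions, the feature-detect-and-fall-back shape just described amounts to roughly this (a paraphrase, not the package's verbatim source):

```javascript
// Paraphrased sketch of what the index-of package does: feature-detect
// Array#indexOf via an empty array, otherwise fall back to a manual loop.
const hasNativeIndexOf = typeof [].indexOf === 'function';

function indexOf(arr, obj) {
  if (hasNativeIndexOf) return arr.indexOf(obj);
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === obj) return i;
  }
  return -1; // not found
}

console.log(indexOf(['a', 'b', 'c'], 'b')); // 1
```

Every engine since 2010 takes the first branch, which is the whole point: the fallback is dead weight on millions of installs a week.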
Much of this bloat is so deeply nested in dependency trees today that it's a fairly hefty task to unravel it all and get to a good place. It will take time and it will take a lot of effort for both maintainers and the consumers of these libraries. Having said that, I do think we can make significant progress on this front if we all work together. Start asking yourself, why do I have this package and do I really need it? This is especially important for maintainers. If you find something which seems redundant, raise an issue with the maintainer asking if it can be removed.
If you encounter a direct dependency which has many of these issues, have a look for an alternative which doesn't. A good start for this is the module-replacements project, which is also part of e18e. I already liked these guys, and I'm very quickly realizing they are essential to this ecosystem. Another great recommendation is Knip. We use this in all of our projects now, and it has been incredibly helpful. Knip will find unused code and imports in your project and make it easy to get rid of them. It also helps with unused deps, dead code, and many other random things.
It's a great tool to find and remove deps you no longer use. It doesn't solve all of these problems necessarily, but it's a great starting point to help clean up dep trees before doing the more involved work. Oh, e18e has a CLI with an analyze mode that can determine which deps are no longer needed or have recommended community replacements. Apparently, chalk can be replaced with native functionality. That's really cool. This is a great little package here. That's so cool. They even have a migration command that can migrate a handful of these things that are automatable.
In this case, it'll migrate from chalk to picocolors, which is a much smaller package that provides the same functionality. The CLI will even make recommendations based on your environment. For example, it can suggest the native styleText instead of a colors library if you're running a new enough Node. That's really cool. Also, npmgraph. This is really, really useful. Using npmgraph to figure out what the dependencies of your [ __ ] are. That's really good. Yeah, I had a feeling the T3 package has, uh, built up a little bit. We got some Effect. We got the diffs library.
We got open and websocket and node pointer, and the Claude Agent SDK package is so heavily obfuscated that they're also using... what's the package that Vercel has? NCC. That's what I'm thinking of. NCC is a package by Vercel that takes all your dependencies and bundles them into a single JS file. And I'm almost positive Anthropic is using that here, because they don't want us to know anything about how the Agent SDK is implemented, because they are kind of evil. So I would bet this has lots of other subdeps we just can't see. Apparently, Pierre's diffs library has a lot of subdeps.
I did not realize that, like, half our dependency graph comes from the diffs library. Interesting. To be fair, there are some core deps in here that have to be a little complex, like Shiki for the syntax highlighting. That's just not a trivial thing. And then hast-util-to-html, which turns the syntax tree into HTML, has a lot of these sub-packages too. Interesting. You learn something every day, and I'm learning about my own dependency graph right here. See, this is a really good example here too with ESLint, where there's this find-up package that brings in a bunch of sub-dependencies, around six total.
And if you can replace that package, then you can trim out six packages from your npm install. That's a great thing. There's also the module-replacements project, which has a set of replacements that are modern, simpler, and better maintained than these old things, and makes it really easy to identify and replace anything that can be replaced. This has been an awesome article. Huge shout-out again to James. Let's read the closing thoughts and do one last important thing at the end here. We all pay the cost for an incredibly small group of people to have an unusual architecture that they like or a level of backwards compatibility that they specifically need.
It's not the fault of the people who made the packages, as each person should be able to build however they want. Many of them are an older generation of influential JS devs who built packages in a darker time, when many of the nice APIs and cross-platform compatibility that we have today simply didn't exist. They built the way they did because it was possibly the best way at the time. The problem is that we never moved on from there. We still download all of this bloat today, even though we've had these features natively for years. I think we can solve this by reversing things.
The small group should pay the cost. They should have their own special stack that pretty much only they use. Everyone else should get the modern, lightweight, widely supported code. Hopefully things like e18e and npmx can help with that through documentation, tooling, etc. You can help by taking a closer look at your dependencies and asking why. Raise issues with your dependencies, asking them if and why they still need these packages. We can fix it. e18e is one of the most essential things happening in the JS world. I already thought highly of them, but now I am convinced that we need them in order for JS to survive.
As such, I'm going to do the most important thing to support a project like this: give money. I just contributed $5,000. I think this is important, and I hope this helps prove just how important I think it is. And I hope many of y'all take the opportunity to do the same. These projects are no joke. These are the things that actually make the ecosystem survivable. These are the things that make the web better and keep things from getting worse. If we want the web to win, and I hope we all do, I hope you guys who have money, and I know many of you have money, take the opportunity to contribute, or at the very least tell your boss that it's probably worth including in your open-source financial contributions.
Obviously, not everybody needs to donate this much money, but if you have a couple bucks to throw their way, you'd be amazed how far it goes. The team doesn't actually need too much money to survive right now, and they're very, very, very lean. I would argue more so than they should be because these devs are doing some of the hardest work, and they are being paid like [ __ ] They currently have a total balance of $17,000, which in my opinion is enough to pay one dev like once. These devs should be paid very generously because this is very, very important work.
The fact that an organization that is this important literally only has about $17,000 there is terrifying and I am thankful I just got to meaningfully increase that myself. I got nothing else to say. Just go support these guys. They're doing important work. And until next time, peace nerds.