Open source is dead now?
Emphasizes strong support for open source as essential and explains concerns about moving away from open source in the industry.
Open source isn’t dying, but real risks from AI-enabled exploits are forcing hard questions about funding, security, and whether big projects should stay open.
Summary
Theo - t3.gg argues passionately that open source remains essential, even as AI changes how we think about security. The Cal.com decision to close its core codebase becomes a rallying point: if we want software we can fix and improve, we need to fund and defend open ecosystems. Theo recalls Cal.com as a premier example of the T3 stack and explains why losing its open-source core is a warning for the whole industry. He ties this to broader shifts in security, noting that AI can both help write code and weaponize it, raising the cost of hardening public codebases. The discussion explores how token-based security spending, a proof-of-work-style arms race between defenders and attackers, might become the new norm. He cites OpenAI's advances (GPT-5.4 Cyber) and Anthropic's Mythos as proof that models are approaching, or surpassing, human capabilities in code analysis and vulnerability discovery. The video also covers the practical tension: maintainers are overwhelmed by CVE reports and security advisories, yet openness invites better collaboration and faster fixes. Theo closes with a firm stance: fight for open source and push for smarter, better-funded security so the future stays open and trustworthy.
Key Takeaways
- Cal.com’s shift away from open source illustrates a broader risk that prized open ecosystems can contract under AI-driven security pressures.
- Open source has historically driven trust and rapid repair in software; closing it can reduce community-driven hardening and create exploitable blind spots.
- AI models (Mythos, Opus, GPT-5.4 Cyber) are increasingly capable of analyzing and exploiting large TypeScript codebases, potentially narrowing the traditional security gap between domain experts and attackers.
- The security economy may shift to a token-based arms race: the side that spends more on discovering exploits can outpace attackers, which makes broad, funded open-source ecosystems more valuable.
- The video argues for funding open source rather than abandoning it: when many companies pool token spend on hardening a shared project, their defense budgets add up in a way attackers' budgets do not.
- Maintainers face a heavy burden with constant CVE reporting; sustainable funding models are essential to keep critical open-source projects secure and healthy.
- Despite risks, the speaker believes open source remains the best model for most businesses and urges continued advocacy and responsible security hardening.
Who Is This For?
Software developers, open-source maintainers, and tech leaders who rely on open ecosystems and are concerned about AI-driven security threats. Also useful for CTOs weighing the trade-offs between open-source funding versus closed development models.
Notable Quotes
"Open source is dead. That's not a statement we ever thought we'd make."
—The opening line of the Cal.com announcement Theo reads on stream; it frames the core thesis about AI-driven security pressures on open ecosystems.
"AI has fundamentally altered the security landscape. Code is no longer just read. It is scanned, mapped, and exploited at near zero cost."
—From the Cal.com announcement, describing how AI enables new forms of vulnerability discovery and exploitation.
"To tie all of the chaos of things going on right now together, Project Glass Wing is something worth thinking about."
—Mentions a planning concept linked to the broader security strategy discussed.
"If you hide the source, the level of domain expertise needed to do well here probably bumps back up to like a 4 out of 10 for now."
—Explains how removing open source can temporarily raise the bar for attackers, but only until AI advances.
"Open source remains critically important. For those of you who aren't exposed to AI maximalists, the statement feels absurd."
—From the article by Drew that Theo reads aloud, affirming the defense of open source in the face of AI-driven threats.
Questions This Video Answers
- Why is Cal.com's decision to close its open-source core controversial and what are the implications for open-source projects?
- Can AI models like Mythos and GPT-5.4 Cyber realistically compromise large open-source codebases, and how can teams defend against it?
- How did Anthropic's per-file agent scanning find a 27-year-old vulnerability in OpenBSD, and what does that mean for open codebases?
- How does token-based security spending change the economics of securing open-source software?
- What steps can maintainers take today to keep open-source projects secure in an AI-enabled threat landscape?
Full Transcript
If you've been paying attention to my content recently, you know that I've become a much stronger advocate of open source. Not that I wasn't before, but I think now more than ever, it's really important that we're open sourcing our software, that we're supporting open source communities, and that we're building in a way where things can build on top of each other. I am really scared of a future where we stop open sourcing our stuff and the software we rely on every day gets worse and sloppier with no ability to fix it. Which is why this announcement from the Cal.com team terrifies me.
If you're not familiar, Cal.com is an open-source alternative to Calendly. At least it was, because it no longer is open source. And this sucks for so many reasons. Cal.com is one of the first T3 stack apps, so much so that it predates the T3 stack. And we've often joked that Cal is one of the best examples of what the stack looks like. They even briefly employed Alex, the creator of tRPC, to improve their stack and make performance better. They have historically maintained this as one of the best full-stack examples of a good, big TypeScript application.
It's even been used for things like compiler tests and demos and changes to the TypeScript Go rewrite, because it's one of the best examples of a big open-source full-stack TypeScript Next app. And it no longer is. This sucks, especially as the follow-up to the two videos I just did about how open source is great for businesses and should be the direction we're moving in. And here's where I'm going to give you guys a little bit of inside baseball. I'm a good friend of the team at Cal. I've been talking with them about this for a bit now, and these conversations were absolutely in my mind when I filmed those two videos.
That's why I even brought up Cal as an example in my most recent video on why businesses should be building on top of open source. I was hoping I might be able to put the pressure on to prevent this change, and I have failed. And I want to talk in depth about why. Why is it that Cal thinks their best bet is to close their source, and how might this doom the future of software? This one scares me, but it's an important conversation to have. And if I'm going to keep building and funding open-source stuff that I care about, we're going to need to find some way to pay for it.
And rather than charge you, we're just going to take a quick break for today's sponsor instead. If I told you that T3 Chat, Cursor, and Vercel all made the exact same mistake, you'd probably be confused. And if I told you we all solved it the same way, you'd probably be even more confused. And I'm sure this quote isn't helping much, but let me explain. All of the companies I just listed screwed up auth. T3 Chat and Vercel rolled their own, and Cursor went with Auth0. All of us regretted it. All of us moved to WorkOS, and all of us are significantly happier now.
A long time ago, Guillermo pulled me aside because I went through this whole auth chaos where I chose to host my own auth and then realized that was a mistake, and he pulled me over cuz he wanted to talk about it, because he had the same regret for Vercel, the entirety of Vercel. I was super scared to talk about this cuz I didn't want to share private information, but then he just said the exact same thing to the WorkOS team and now it's on the site. According to Guillermo, he could have done way more business if they had partnered with WorkOS earlier.
It's been incredibly well-received, and there's no reason not to start with WorkOS, because your first million users are free. Don't make the mistake we did: get auth right at sidyv.link/workos. Open source is dead. That's not a statement we ever thought we'd make. Cal was built on open source. It shaped our product, our community, and our growth. But the world has changed faster than our principles could keep up. AI has fundamentally altered the security landscape. What once required time, expertise, and intent can now be automated at scale. Code is no longer just read. It is scanned, mapped, and exploited at near zero cost.
In that world, transparency becomes exposure, especially at scale. After a lot of deliberation, we've made the decision to close the core Cal.com codebase. This is not a rejection of what open source gave us. It's a response to the risks AI is making possible. We're still supporting builders, releasing the core code under a new MIT-licensed open-source project called Cal.DIY for hobbyists and tinkerers. But our priority now is simple: protecting our customers and community at all costs. This may not be the most popular call, but we believe many companies will come to the same conclusion. To tie all of the chaos of things going on right now together, Project Glass Wing is something worth thinking about.
If you're not familiar, you didn't watch my video on Claude Mythos, and I would recommend going to watch that first, as well as my video on security of software as a whole, because the security field is changing on a fundamental level right now. Historically, there was a very small number of people that knew software and security in software well enough to be useful for not just finding things that could be wrong and helping companies fix them, but really being able to build up and exploit these types of deep security flaws that exist in all of software.
In order to be good at exploiting software, you need to understand software well. You need to understand exploits well, but you also need to understand how the specific software that you're trying to exploit works, where its edges are, how the parts interact, and much more. It is possible to build a good understanding of those things externally by reverse engineering the binaries, by trying to look at the compiled source and the JavaScript apps that we're all using every day. There are lots of strategies to try and build that deeper understanding of the systems that you're trying to break, hack into, or find exploits in.
Those things get a lot easier when you can read the source and see the relationships between those parts. And they get even easier if an AI can build the same understanding. The function for how exploits happen is relatively simple: somebody's security knowledge effectively multiplied by their domain-specific knowledge. The combo of these two is what makes for big exploits. Somebody who understands the domain they're in, whether that is the actual source of the code they're working with, or the system the code runs on and the quirks of that system. The domain that you're hunting in is just as important as the security knowledge.
Historically, the number of people who have either of these was low. So in a given domain, the people who understand the domain and the security aspect was often even lower. So something like cal.com was not a great target because people who knew security really well didn't know how to go through a big full stack TypeScript project. So it being open source didn't meaningfully change the average security researcher's domain specific knowledge because they didn't understand fullstack TypeScript. So having a fullstack TS codebase did not meaningfully change their capability in that codebase and for that project. I'm sure most of the people in my chat here are developers.
So, I would like all of you to drop in chat a number between zero and 10 for how confident you are in your capabilities as a security researcher. If you're given a codebase, how confident are you that you can find and identify the security issues that might exist in it? I haven't seen any number higher than three. I saw negative 100. NMG, who is like the person I literally pay to do security audits of my [ __ ] dropped a six. Most of these numbers are significantly lower. This is the issue. People who are strong on one side, like here with the domain: if you're really good at TypeScript, the likelihood that you're good at security goes down.
And if you're really good at security, the likelihood you know jack [ __ ] about TypeScript goes down, too. So while AI does not necessarily have a ton of knowledge about security stuff, what it does have is weird domain knowledge on everything else. So if you take a full-stack TypeScript application's source, the AI can understand the domain-specific stuff, and all you need is a little bit of security knowledge to push it over the line and find real exploits. So previously, to find exploits, you would have to be like a seven out of 10 on both sides.
Now, with the models being so good, you can literally be a zero or a 1 out of 10 on the domain-specific knowledge side. On the security knowledge side, you can probably also be lower, like a four out of 10, because you can just throw so much at the wall. And at this level, you can find real, legitimate, concerning exploits. That is a huge change. We went from maybe a few thousand people in the world having this capability to the majority of people who have done a CTF before suddenly having it. That gap is what's so scary here.
AI has made it so the amount of domain-specific knowledge needed has hit the floor. And the amount of security knowledge needed has also gone down quite a bit too. If you think this is only a thing for poorly coded full-stack TypeScript devs, I would like to remind you that Mythos preview found a 27-year-old vulnerability in OpenBSD. OpenBSD is one of the most carefully crafted and maintained codebases in the world. It is the only Linux alternative that is almost inarguably way more secure than Linux, one that is focused on being a reliable core to power everything from our routers and our firewalls to power lines and [ __ ] OpenBSD being pwned by AI is terrifying.
But part of why the AI could do it is they targeted the source code. The way Anthropic found these vulnerabilities was writing a script that would go through every file in the codebase and spin up an agent saying: we are looking for CVEs and potential exploits in this codebase. Start from this file and trace the system to find anything that might be interesting. They're effectively using each file in the codebase as a starting point, as a seed function almost, to make more outputs happen. And just by running this across every file in the codebase, which, mind you, is not staying on the one file when it runs.
It's just the entry point. So it starts on this one file and then goes through the codebase looking for other [ __ ] They did this before with Opus as well, yes, but they did it with Mythos here and found a ton of scary [ __ ] None of that requires crazy hacking expertise. That's like a clever idea most 15-year-old vibe coders could have come up with. And somebody said, "So they're effectively training it to help hackers better." No, absolutely not. That's not a thing that they did in training. That was an experiment they did with the model once it was trained.
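That per-file seeding loop could be sketched like this. This is a hypothetical reconstruction from Theo's description, not Anthropic's actual tooling: `runAuditAgent` stands in for whatever agent harness they really used, and the file extensions and prompt wording are assumptions.

```typescript
// Hypothetical sketch: every file in the repo becomes the entry point
// for one audit-agent run; the agent is free to wander the whole repo.
import * as fs from "fs";
import * as path from "path";

// Recursively collect candidate source files under a root directory.
function listSourceFiles(root: string): string[] {
  const out: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    if (entry.isDirectory()) out.push(...listSourceFiles(full));
    else if (/\.(ts|tsx|c|h)$/.test(entry.name)) out.push(full);
  }
  return out;
}

// Build the "seed" prompt for one file (wording is illustrative).
function seedPrompt(file: string): string {
  return (
    `We are looking for CVEs and potential exploits in this codebase. ` +
    `Start from ${file} and trace the system to find anything interesting. ` +
    `This file is only your entry point; open any other file you need.`
  );
}

// One agent run per file; findings from every run are pooled.
async function auditRepo(
  root: string,
  runAuditAgent: (prompt: string) => Promise<string[]>
): Promise<string[]> {
  const findings: string[] = [];
  for (const file of listSourceFiles(root)) {
    findings.push(...(await runAuditAgent(seedPrompt(file))));
  }
  return findings;
}
```

The point of the pattern is breadth, not cleverness: the outer loop is trivial, and all of the hard work lives inside the agent call.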
They did not train the model to hack. They trained it to code. And the point I'm trying to make here is that the model's getting better at code means that half of this function has collapsed. The two blockers for a random person finding huge exploits and pwning the world are they don't know enough about security and they don't know enough about the domain. Now the domain knowledge doesn't matter anymore. So the only thing blocking is security knowledge. And even that bar has gone down. You just need to be a little clever now and have enough money to burn on tokens.
That's it. And that is scary. And if one of the methods here that works is brute forcing by telling a model to start from every file and trace everything, that's a lot easier when you have the files. These AI models and the agents and harnesses they use are capable of doing crazy deobfuscation work, where they can take a compiled program or minified JavaScript and unfuck it in a way where they can start to understand what's going on under the hood enough to find some security issues. But these models are way better at doing it when they have direct access to the source.
So again, to go back here, if this is the function that we're thinking of: if the code is open source, the model has all of the tools and techniques it needs to pwn that domain, because it has the source. It knows how to navigate source code well. It understands code; it will destroy here. It doesn't understand deobfuscation and reverse engineering as well, so it's not going to be able to decompile as well as it can parse existing known source code. So if you hide the source, the level of domain expertise needed to do well here probably bumps back up to like a 4 out of 10 for now.
But what happens when the models get good enough at deobfuscation and decompilation? This will also fall back down to a one. So my first argument against what Cal has done here is simply that they have only bought themselves time, and they've not bought a lot. They are terrified of the bar for finding exploits going down, so they've made this change to temporarily bump it back up a little bit. Not to where it was 5 years ago, but to a place that's a little more reasonable considering how powerful this AI stuff is. But I would suspect this advantage will be lost soon, too.
Good point from Aiden here. Doesn't this assume that you always have the binary so you have something to decompile unless you can scrape and find backend endpoints and techniques from the front-end clients? There you go. That's the key there. You can do enough scraping to make up for some of the difference here. On one hand, the client will give you access to everything that the client connects to. In this case, all the endpoints that it uses. So, you can get a lot from that, but you don't have the backend code. But you do have all of the services that are hit and infinite time.
That's the other thing that I didn't include here that is definitely worth including: time. How much time do you have to test these things and do these things? People who are strong in security knowledge and the domain-specific knowledge are often strapped for time. So the likelihood that these people have enough spare time to pwn your random open source thing was very low. But what happens when you have a bunch of agents that don't care about time, that you can just let run overnight for hours at a time, running in loops constantly trying to find things they can exploit?
That's when you end up with a CVE [ __ ] storm like we're in today. So what have others had to say here? Tanner jumped in almost immediately, saying that if AI reading your open source code is hurting your business, you are likely using open source as a growth strategy instead of a philosophy. And closing it now doesn't make you secure. It just means fewer good-faith developers are hardening your code and more bad actors are pointing AI at your production servers. Oh boy, this is a good take, but there are layers to this. On the topic of hardening your code, this is also a thing worth talking about.
OpenAI dropped a new model today, GPT-5.4 Cyber. The point of this model is that it can be used to harden your codebases ahead of time. I believe you have to get whitelisted to have access. I don't even have access to it yet, but the point of it is to make it easier for companies and businesses and developers who are building important software to be able to harden their software before agents are used against them. Security is always an arms race between the people trying to secure their software and the people trying to exploit it.
The loop's about to get tighter, and I am thankful that both OpenAI and Anthropic are putting extra effort in to make sure the people building this software have a chance to fix it before these tools are in the hands of potentially bad actors. Attackers and defenders are the official industry terms. Thank you, NMG, for keeping me on point when I'm talking about these things. With decompilation, I can see it reasonably will always converge. With security exploits, I'm not sure if it's the same. I don't think decompilation will matter too too much here for the backend side in particular, but it will let you decompile the front end side, figure out what endpoints are hit, and then brute force try to break through those things.
There is a very real security issue here. And I don't think most of us, even the best developers I know, have really had it click just how [ __ ] we are. The only reason that we're not getting pwned constantly is that it took too much effort and too much elite attention from really smart people with a wide breadth of knowledge, and for those people it just wasn't worth their time to find exploits in random [ __ ] That's changed, and that change is what's so scary. But as Peter says here (if you're not familiar, Peter's the creator of OpenClaw, who's now at OpenAI):
If you look at GPT-5.4 Cyber and its ability for closed-source reverse engineering, I have bad news for you. I do very much feel the pain, though. There's hundreds of teams that try to poke holes into OpenClaw. Our response has been rapid iteration in code hardening, which did introduce occasional regressions, and yes, you've all been yelling at me for it, but I see it as the only way forward. I'd be very careful of other open source projects and harnesses that ignore this work and do not publish their advisories. Sim Wilson also chimed in here and linked a really interesting piece from Drew.
He argues that the cost of locking down software through LLM analysis makes open source more valuable now. Cybersecurity looks like proof of work now: is security going to spend more tokens than your attacker? If you're not familiar with proof of work, this is a concept that is largely popular in the crypto world, but it's also popular for things like captchas. It's a way to slow down negative, harmful work. So, for crypto, it is a way to prove that this transaction is real, because real compute went into signing it and thumbsing it up. For more traditional captchas, it is a way to solve a complex problem that takes real CPU usage before you're allowed to do an action.
So, if we implemented a proof-of-work captcha on T3 Chat, your computer would have to spin up its processor and do something to prove it's a real actor before going and doing whatever thing you want to do. The reason you would want to do something like this is because you want to make attacking your service expensive. If we didn't have captchas on T3 Chat, you could use our endpoints to generate things for free, effectively, by just hitting a random endpoint on the free tier and generating a response. Proof of work is a way to make that too expensive, because you can't just bot it out.
You need computers with real processing power to get approved, by solving the puzzle, doing the work, and proving that real compute went into doing a thing. So saying cybersecurity looks like proof of work now is saying this comes down to spending power. Who's willing to spend more? If you can cost me a dollar using one of my endpoints and the proof of work only costs you 50 cents, the attack costs you half of what it costs me, so spamming my endpoints still pays off. So you have to balance this out so the side that is trying to exploit has to spend more than the result of their exploitation.
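As an illustration of the mechanism (not anything T3 Chat actually ships; the hashing scheme and difficulty numbers here are made up for the sketch), a minimal hash-based proof-of-work challenge looks like this: the client must burn CPU to find a nonce, and the server verifies it with a single cheap hash.

```typescript
// Toy proof-of-work puzzle: find a nonce whose SHA-256 hash of
// `challenge:nonce` starts with `difficulty` zero hex digits.
import { createHash } from "crypto";

function hash(challenge: string, nonce: number): string {
  return createHash("sha256").update(`${challenge}:${nonce}`).digest("hex");
}

// Client side: expensive search, on average 16^difficulty hashes.
function solve(challenge: string, difficulty: number): number {
  const prefix = "0".repeat(difficulty);
  let nonce = 0;
  while (!hash(challenge, nonce).startsWith(prefix)) nonce++;
  return nonce;
}

// Server side: one cheap hash to confirm the work was actually done.
function verify(challenge: string, nonce: number, difficulty: number): boolean {
  return hash(challenge, nonce).startsWith("0".repeat(difficulty));
}
```

Each extra hex digit of difficulty multiplies the client's average cost by 16 while the server's verification stays a single hash, which is exactly the asymmetry that makes botting an endpoint expensive.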
That's the complex balance. If it only costs you 50 cents to take a dollar from me, you're going to take a lot of money from me. But if it costs you $2 to take a dollar from me, you're not going to do that a whole lot. And that's where we're going with cybersecurity. Who's spending more, the security side or the attackers? This is both a very scary and a very real framing. Last week, we learned about Anthropic's Mythos, a new LLM that is so strikingly capable at computer security tasks that Anthropic didn't release it publicly.
Instead, only critical software makers have been granted access, giving them time to harden their systems. We quickly blew through our standard stages of processing big AI claims: shock, existential fear, hype, skepticism, criticism, and finally moving on to the next thing. I encourage people to take a wait-and-see approach, as security capabilities are tailor-made for impressive demos. Finding exploits is a clearly defined, verifiable search problem. You're not building a complex system; you're poking at one that already exists. A problem well suited to throwing millions of tokens at it. Yesterday, the first third-party analysis landed from the AI Security Institute, largely supporting Anthropic's claims.
Mythos is really good: a step up over previous frontier models in a landscape where cyber performance was already rapidly improving. Their whole report is worth reading, but he wants to focus on this chart detailing the ability of different models to successfully complete a simulated complex corporate network attack. This is scary. Opus and Mythos have made meaningful improvements in how many steps they can complete on tasks like reverse engineering and cryptanalysis, where 5.4 is just over the line for wiki exploitation and credential replay. Yeah, this is a very scary chart. This last benchmark is a 32-step corporate network attack simulation spanning initial reconnaissance through to full network takeover, which the ASI estimates would take qualified humans 20 hours to complete.
And this is qualified humans who really know what they're doing. The lines are the average performance across multiple runs (10 runs each for Mythos, Opus 4.6, and 5.4), with the max lines representing the best of each batch. Mythos was the only model to complete the task, in three of its 10 attempts. And that's what this line is: the max is it succeeding, and it was able to do the full network takeover. It's the only model that's ever been able to do that. Terrifying. The chart suggests an interesting security economy. To harden a system, we need to spend more tokens discovering exploits than attackers spend trying to exploit them.
The ASI budgeted 100 million tokens for each attempt. That's $12,500 per Mythos attempt, $125k for all 10 of the runs they did. So, $125k for three of the 10 runs to actually succeed. That's a lot of money, but it's still really scary. Worryingly, none of the models given a 100 million token budget showed signs of diminishing returns. Models continued making progress with increased token budgets across the budgets tested. It continued to find exploits so long as you kept throwing money at it. Security is reduced to a brutally simple equation: to harden a system, you need to spend more tokens discovering the exploits than the attackers will spend doing the exploits themselves.
You don't get points for being clever; you win by paying more. It is a system that echoes cryptocurrency's proof-of-work system, where success is tied to raw computational work. It's a low-temperature lottery. You buy the tokens, maybe you find an exploit. Hopefully, you keep trying longer than your attackers. First, open source software remains critically important. For those of you who aren't exposed to AI maximalists, the statement feels absurd. But lately, after the LiteLLM and Axios supply chain scares, many have argued for reimplementing dependency functionality using coding agents. Here's Karpathy from just a few weeks ago.
Classical software engineering would have you believe that dependencies are good; we're building pyramids from bricks. But in his opinion, this has to be re-evaluated, and it's why he's grown so averse to them, preferring to use LLMs to yoink functionality when it's simple enough and possible. If security is purely a matter of throwing tokens at a system, Linus's law, that given enough eyeballs, all bugs are shallow, now expands to include tokens. If corporations that rely on open-source libraries spend to secure them with tokens, the result is likely going to be more secure than your own budget allows.
Yeah, because you're effectively combining the budgets on either side. To go back to the example here: these runs had a 100 million token budget, and three of the 10 succeeded. But if you imagine that as 10 different orgs, 10 different exploiters spending, there's no way that one's spend benefits the other. So, it's not additive. But if 10 different groups contribute 100 million tokens to securing a project, those are additive. So if the bar to find an exploit at the 50th percentile is 200 million tokens, and we assume it's roughly matched on the security side, three people putting in 100 million tokens each is going to secure relatively well on average against the one person spending 200 million tokens to hack it.
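The arithmetic behind that asymmetry can be written out as a toy model. The additivity assumption is Theo's framing, and the numbers are just the ASI figures quoted above; this is an illustration, not a real security model.

```typescript
// Toy model: defense spend on a shared open-source project pools,
// because every org's tokens harden the same codebase. Attackers'
// budgets do not pool; each exploiter searches alone.
function pooledDefenseTokens(orgBudgets: number[]): number {
  return orgBudgets.reduce((sum, tokens) => sum + tokens, 0);
}

// Three orgs each contribute an ASI-style 100M-token hardening budget.
const defense = pooledDefenseTokens([100e6, 100e6, 100e6]); // 300M tokens
// One attacker spends the hypothetical 200M-token exploit-finding bar.
const attack = 200e6;
const sharedProjectWins = defense > attack; // the pooled side out-spends
```

The same three budgets pointed at three separate closed codebases would each face the 200M bar alone, which is the whole argument for pooling spend on shared open-source targets.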
So, while these open-source projects with lots of companies putting effort into securing them should be more secure, there is a complexity here. Cracking a widely used open-source project is inherently more valuable than hacking a one-off implementation, which incentivizes attackers to spend more on these big open-source targets. Second, hardening will be an additional phase for agentic coders. I've already been seeing developers break their process into two steps, development and code review, often using different models for each phase. As this matures, we're seeing purpose-built tooling meeting this pattern. And Throgg has put out a code review product that costs $15 to $20 per review, which is still absurd, but yeah, the above Mythos claim holds.
I suspect we'll see a three-phase cycle: development, review, and hardening. Development is when you implement the features and iterate quickly, guided by human intuition and feedback. Review is when you document, refactor, and do other gardening tasks async, applying best practices with each PR. And then hardening, where you identify exploits autonomously until your budget is out. Critically, human input is the limiter for the first phase, and money is the limiter for the last. This quality inherently makes them separate stages. Why spend to harden before you have something? Previously, security audits were rare, discrete, and inconsistent. Now, we have to apply them constantly, within an optimal (we hope) budget.
Code remains cheap unless it needs to be secure. Even if costs go down as inference gets optimized, unless models reach the point of diminishing security returns, you still need to buy more tokens than attackers do. The cost is fixed by the market value of an exploit. Yep. This is a phenomenally well-put article, and I agree that we need more spending power on the things we all rely on, which means we need more effort going into open source right now. This also means that open-source projects are going to need to do a better job of embracing this kind of spend.
If a project like FFmpeg is going to refer to the spend on finding security issues as "CVE slop," we're gonna have a problem. Google spent a shitload of money trying to find exploits in open source software. And they gave FFmpeg a huge 3-plus-month heads up about an exploit that they found in a barely used codec within FFmpeg. That codec is included in most FFmpeg installs, so it is a very real, legitimate issue, but they flagged it as low severity and ignored it. And then when it came out, they flipped a [ __ ] on Google for it.
This isn't going to work. In this new system, where proof of spend is effectively necessary for security, open source maintainers have yet another burden they have to deal with. And I hate that part. It genuinely sucks that open-source projects have to eat more work than before, because now not only do they have to try and make a good project, deal with all of the [ __ ] they get through issues and whatnot, and do their best to make it secure, they now have to deal with all of these security reports coming through all sorts of different channels, doing their best to stay on top of it and maintain it.
It is thankless work. It is important work. But pretending this work isn't important and referring to the people reporting these things as creating CVE slop with AI is absurd. And yes, FFmpeg did block me after I donated $5,000 because the guy who runs the Twitter account's a [ __ ] But that's a separate thing. I will resist the urge to tell you guys all about the different maintainers of FFmpeg that have hit me up complaining about the person who runs the Twitter because the FFmpeg Twitter account is not run by a maintainer. Just saying. Regardless, we need to not have this attitude.
That is why this is here, because this is dangerous, and if hackers realize FFmpeg is ignoring the attempts to make it more hardened and safer, they will target it with exploits. You don't need permission from the open source maintainers to exploit their software, but you absolutely need their permission to fix bugs and security issues in their software. So, if the maintainers aren't willing to embrace the feedback they are getting about potential security issues, the hackers will take advantage of that gap. I think I've said all I have to on this one. I understand where the Cal team is coming from here, and I also understand that other businesses aren't going to be interested in helping fund them securing their stuff with tokens, but I still hate seeing a great open-source project go closed.
And I have a bad feeling this is far from the only time we're going to see this type of thing happen. Keep fighting for open source. Keep doing what you can to support the projects that we all use and rely on because otherwise the future is going to be closed. I still believe open source is the best thing for most businesses. Go watch my previous video if you want more info on that. But for now, we have to push hard to convince companies that this is the right call going forward because more and more of them are going to see these exploits, assume that open means bad, and then shut this down.
And we cannot let that happen. We cannot let fear-mongering kill open source. So, keep fighting for what's right, keep pushing to keep things open, and keep working to keep the tools we use secure, reliable, and great. Until next time, peace nerds.