AI Sucks At CSS

Syntax | 01:00:16 | Apr 9, 2026
Chapters: 12
An opening potluck of questions about AI’s relationship to CSS, design systems, vibe rules, performance debugging, and whether software improves with AI, framed as a lively Q&A.

AI struggles with CSS, and real design still needs human taste; the crew weighs design systems, prompts, and practical workflows for smarter but imperfect tooling.

Summary

Scott and Wes (Syntax) return with a potluck of AI-and-CSS questions. Jurgen asks how to design with AI and whether CSS-only prompts can ever achieve elegance; the consensus: AI often patches locally and ignores modern CSS features, so even with design systems, the output can look bloated or off. They discuss prompts, system prompts, and tools like Google Stitch, Tailwind nudges, and the tension between templates and unique design. The crew also dives into workflow ideas for AI-assisted front-end work, plus the reality that humans with taste are still essential for great UI. Beyond CSS, they riff on interviewing with AI, performance debugging (Sentry as a practical aid), and whether AI promises have actually materialized in faster software. The chat then moves to broader topics: the value of skills vs. rules vs. agents, and the evolving ecosystem of vibe rules and skills for agent frameworks. Practical tips surface on how to evaluate slow apps, when to rely on network vs. rendering bottlenecks, and how to keep velocity without sacrificing maintainability. In short, AI helps, but when it comes to front-end polish and design systems, human judgment remains crucial—and the tooling landscape is still settling into workable patterns.

Key Takeaways

  • AI currently struggles with CSS: even with a rigid design system, it tends to patch styles, add bloated one-off classes, and ignore newer CSS features like nesting.
  • System prompts and "style biases" from tool headers (e.g., the Codex typography prompt) can steer AI toward specific visual outcomes, sometimes against your intent.
  • Design quality—especially layout and information architecture—still hinges on human taste and critical review, even as templates and AI templates proliferate.
  • Performance troubleshooting benefits from dedicated tools like Sentry to identify slow routes and database queries; network tab alone isn’t enough to diagnose all slowdown reasons.
  • AI interviews are being rethought; a notable idea is the Composer 1 approach (Cursor) that tests engineering skill under constrained, fast AI tooling rather than pure prompt work.
  • Vibe Rules and skills.md offer a path to shipping AI-guided behaviors inside libraries, but their adoption raises security concerns and versioning questions; both are evolving ecosystems.
  • Practical takeaway: when you adopt AI for front-end work, pair it with deterministic checks, reviews, and performance analysis to avoid sloppy output and ensure maintainability.

Who Is This For?

Essential viewing for frontend engineers exploring AI-assisted design and development, UX designers curious about AI prompts, and tech leads weighing when to trust AI vs. human judgment for UI and performance improvements.

Notable Quotes

"AI loves to just patch and throw things in there, even when you explicitly tell it not to."
Illustrates the tendency of AI to patch locally rather than solve globally in CSS/design tasks.
"Don't rely on flat single color backgrounds. Use gradients, shapes, or subtle patterns to build atmosphere."
Cites an example bias from a system prompt that pushes a particular visual style.
"The moment that you have an eye for CSS, an eye for design, the slop looks like slop."
Highlights how human taste sharpens perception of AI output.
"Composer 1 is extremely fast. Candidates can accomplish a lot in 1 hour."
Describes an AI-native interview concept to better assess engineering skill over mere prompt-writing.
"We still need humans with talent and good taste for good design and layout."
Reinforces the central role of human designers in front-end work despite AI advances.

Questions This Video Answers

  • Can AI CSS generators ever reliably produce production-ready layouts with consistent design systems?
  • How should teams balance system prompts and design tokens to reduce bloated AI output in UI code?
  • What is the Composer One interview approach and why is it considered a better measure of engineering skill with AI?
  • What tools best help diagnose web performance bottlenecks beyond the browser network tab (e.g., Sentry, flame graphs)?
  • Are vibe rules and skills.md secure and scalable for shipping AI-capable features inside libraries?
AI at CSS · CSS Design Systems · Design Systems vs AI · System Prompts · Vibe Rules · Agentic Coding · Composer One · Sentry Performance · Web Performance · Frontend Tools
Full Transcript
Welcome to Syntax. Today we've got a potluck episode. That's where you bring the questions, we bring the answers. Got some really good questions today. Why does AI suck so much at CSS? How are we supposed to get good, clean, repeatable design out of using an LLM? Some questions around vibe rules. Skills versus rules versus agents. Should these rules be shipped with a package, or should you just reach out to the internet to grab them? Interviewing with AI: a genius use of a dumb model from Brendan Faulk. We're going to describe his thoughts on that, which I thought were so genius. How to find and debug performance issues in your application. There are so many different spots where your app can start to feel slow. And then finally, a question from someone who says, "Why isn't software getting better and faster with AI? Isn't that what we were promised?" Let's get into it. What's up, Scott? You want to hit us with the first question? Yeah. First question, from Jurgen. Do you have any workflow suggestions for design and CSS when working with AI agentic coding? I'm currently experimenting with OpenSpec plus Claude Code and targeted system prompts for roles, and it's great at functionality, but the design looks terrible. I have started a new experiment with new system prompts for a UX design and opinionated front-end dev persona, but I'm wondering how other people are tackling this. Is CSS without a rigid design system just too difficult for LLMs? I will say this, Jurgen: even with a rigid design system, AI sucks at CSS. It's a perfect storm, if you will, of things that AI does poorly. AI loves to solve a problem locally instead of solving a problem globally. It just likes to patch and throw things in there, even when you explicitly tell it not to. And with CSS, that often means throwing in a bunch of bloated, additional one-off classes or styled-components or anything else it can possibly reach for, and in Tailwind too.
It does the same thing with Tailwind. It'll just throw a bunch of extra stuff in. Yeah. Just pull up the dump truck. Yeah. And even with modern CSS as well. Like, yesterday I was working on something and I said, use nesting, and it didn't use nesting at all. And then it used the old version of nesting, you know, before the spec let you drop the ampersand. And then it switched to BEM at one point, and it doesn't know how to use any of the new features that are out there. And I know these things can be solved with the right amount of prompting, but man. And that's not even what this guy is asking about, right? The CSS that it kicks out is one thing, but it also just looks, I don't want to say awful, because the table stakes that a Claude Code will kick out are like a black background, gray borders around a thing, and it neatly puts them on the page, and it looks decent, you know? It doesn't look awful, but it doesn't look good, in my opinion. The layout, the information architecture kind of sucks. It's quite a big task, and I think a lot of people are trying to crack that nut right now. Like, Google released, what, Stitch? Google Stitch, the other day, where it will basically come up with designs for you that are supposed to look way better. I'm pretty sure they just have like 20 different templates. Yeah. Yeah, 18 different variables. I think they're just feeding it a bunch of rules, like always use this line spacing and whatnot, things that are known to look good, as well as a bunch of pre-made templates. Because I ran it a few times myself, and then I was looking at what everybody else was generating on the internet, and they all look fairly similar. Which is also the tragedy that comes with a lot of this stuff: it looks good until everybody does it, and then it starts to look like slop.
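For reference, the nesting complaint above is about the difference between the early nesting draft and the relaxed syntax browsers actually shipped. A quick sketch (class names are made up for illustration):

```css
/* Early nesting drafts required the ampersand before nested selectors: */
.card {
  & h2 { margin-block: 0; }
}

/* The relaxed syntax that shipped allows bare element selectors too: */
.card {
  h2 { margin-block: 0; }
}

/* What models often emit instead: a BEM-style one-off class per element */
.card__title { margin-top: 0; margin-bottom: 0; }
```

All three style the same heading; the middle form is the modern one the hosts are describing the AI failing to use.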
Yeah. And I think some of it is system prompt stuff. I got into it with my AI the other day, where I was just yelling at it, being like, "Why are you adding gradients here? Why are you adding a background? Why are you doing this and that?" And it was like, "You told me to." And I said, "I definitely did not tell you to. I told you explicitly not to." And it was like, "Right here. Here's where you told me to." And it was a line of text I'd never seen before, that I didn't write. It wasn't in my agents file. So I Googled that line of text and found it in the opencode header. The Codex header for opencode says: typography, use expressive, purposeful fonts, and then it lists Inter, Roboto, Arial. Color and look: choose a clear visual direction, avoid purple-on-white defaults, no purple bias. The most egregious one here is: don't rely on flat single-color backgrounds. Use gradients, shapes, or subtle patterns to build atmosphere. And so I tweeted out, I said, I love opencode, but this is not something a harness like this should be deciding for me, right? These types of stylistic choices. Dax responded saying that that is the system prompt from the Codex CLI and they just keep it as is. That's why I think I'm moving a lot of stuff to Pi, because there's no system prompt stuff getting in the way. But even if there is this header text that gets put into all of your prompts, it's also at a model level. These things are definitely biased toward certain designs and certain styles, and they're trained on a bunch of garbage CSS. Like, they're trying to beat the purple out of these things. And they've done a fairly good job of beating the purple out of the model and getting rid of it via prompts, but now we're sort of swinging so hard the other way that everything looks exactly the same as well. You know, rounded corners, border on one side of a card.
I think they're also optimizing for normies, so that when they use it, it just looks good and they go, "Oh, AI is so good, because it made it look good." Whereas if you're a practitioner who wants full control, now suddenly these things are going to get in the way. Man. Yeah, it's a problem beyond the design part of things, into the actual implementation. I'm trying to figure that part out, Wes, with my graffiti library. This whole thing is built for the idea of: here are templates, here is the customization, here are the things that you like, here is the structure, and here's how it should work, but also, this is how you customize it. And you don't add one-off classes to do this thing. You use the classes for the structure, and then you build on top of that systemically. And I've built a best-practices skill. I've tuned the docs; now the docs return markdown. We'll talk about that later. But I've just been grinding on this specific problem for a while now, because I hate this thing. Yeah, I hate this. It's honestly a very hard problem to solve, because in many cases you probably will want to step outside of some sort of design system, because you just know. And I think the answer, at least right now, to a lot of this stuff is that you still need humans with talent and good taste, right? And I know everyone hates the word taste right now, but a lot of doing good design, laying things out so it makes sense, and not just adding buttons absolutely everywhere, just comes down to somebody who actually knows what they're doing and can be powerful with these tools, right? Yeah. So, totally. I've searched high and low for this type of stuff, and I think if you have a very rigid, well-documented design system, that certainly helps quite a bit, but I still don't know anybody who has nailed this just yet. Same.
And like, I got into it with some knucklehead on Twitter the other day, where he just says AI is amazing at design, all you need is a screenshot, and he says, "great artists steal," whatever that quote is, you know. And then I screenshotted his website. It's like, you're spouting this BS off and your own website looks like something Claude just turded out in 12 seconds, but you think it looks good because you've got no taste. Yeah, I know. But that's the thing. Aside from that, the guy was selling an accessibility tool for checking color contrast, and literally his own website didn't use the tool and didn't have the contrast in it. And that drives me nuts, because people are probably already in the comments on this one, being like, "Oh, I can do X, Y, and Z, and I have a great thing." And it's like, no, your website looks like the same vibe-coded app that everybody else came out with. You know, I've never been impressed by, or enjoyed using, the design of one of these vibe-coded things. It certainly is better than a lot of the crap people were just kludging together themselves, which is good. It means the baseline for decent experiences is much higher now. But I don't think you can simply screenshot a good design from some other app and say, "Oh, mine now looks amazing as well." Yeah. I think this all comes back to a tweet I had, which was, "It's crazy how AI is really good at the stuff I don't know anything about and total dog at the stuff I do." Because the moment that you have an eye for CSS, an eye for design, an eye for anything, the slop looks like slop. And there's a reason why I think my AI code for backend stuff is great: because I'm not as good at that stuff. I'm good at backend, but I'm not an expert. So, hey, it works. It works.
But when I look at anything closely, especially front-end code, the stuff that I'm good at, the design stuff, I just see how terrible it can be. So, yeah. No, I think the AI is certainly much better at backend code, which is very deterministic, very programmatic, following patterns. And then when there's something that's a lot more visual and has to come down to the taste that you have, it's much harder. Not to say we won't crack this. I think somebody who actually knows it will. I know the Tailwind guys are working on UI.sh right now, and I've seen some decent stuff they've been cranking out. There are a lot of smart people working on this right now, and I bet we'll probably see it solved in the next six months or so. Le says, "I'm getting back into hands-on web development after years of working in product roles. I want to get my hands dirty again. So I'm doing a React app to prove, also to myself, that I can still do it. But it is so much easier to get AI to write most of the code than to write it myself. This is a dilemma for me, as I am also learning. Since I'm also a teacher, I know I am learning less when I use AI. But then again, maybe some of the skills I am missing out on here are redundant, and I'm better off learning how to code with AI on my second screen." He goes on to say, "I do know the fundamentals of coding with React. I just haven't been writing lots of code myself for the last couple of years. What are your takes on this?" Yes, this is kind of an interesting one, and it's a delicate balance here. I certainly think you still need to understand the ins and outs of how a lot of this stuff works, but I don't think you need to know them as much as you previously did. I think the people who are building the lower-level stuff certainly need to know how this works, but they'll then be turning that into libraries and whatnot that we can sort of sit on top of.
And then the people who do know how a lot of this bad stuff happens, whether it's performance or optimizations, they will be creating skills that can be modeled out, and you can apply them to your own codebase. So, what's my take on whether you still need to know how all of this stuff works? I think the people who are getting the most out of AI right now are the people who actually understand how to tackle large projects, how technology works, how these things all work together. You see Toby from Shopify, obviously the creator of Shopify. That guy hasn't slung code in many, many years. And then all of a sudden he's cranking out these really interesting projects, you know, QMD and auto-research stuff and whatnot. And that's because of the type of person he is. He's a thinker. He's a technologist. He understands planning products. He understands the bigger picture of a lot of this stuff. And you're starting to see a lot of really reputable project managers (not to say Toby's a project manager), a lot of people who previously were hands-off with the code, much more steering the ship, now getting back into the code. And the stuff they're building is good because they understand how all of this stuff works. So that's my take there: the people who are going to do the best with this are not just the people who can type in the box and produce a bunch of garbage, but the people who really know how to think about a project and solve problems, you know, all the things that, at the end of the day, we're using this code for. You know what's so funny, Wes? I agree. I think one thing that could help here is just the idea of slowing down a little bit and reading the code.
I know when you're learning something, there's a little bit of a difference there, because you're not going to be able to recognize bad patterns. But with the amount of sloppy patterns AI does write, if you can read the code that it's writing and identify where the problems are, or even ask it to review its own code and see if it has ideas for where these sloppy things are, I think there is a loop there in which you're able to pick things up that you wouldn't have picked up before, by really having a deep eye for reviewing the code, rather than, you know, enjoying having the AI write the code without an eye for what is good and what is bad. I know that's tough to do without really writing the code for a long time. You know, many of the patterns that I can recognize as bad, I only recognize as bad because I wrote them poorly myself at some point and then refactored, you know. So, yeah. Well, I'll give you another example of where it matters less about writing the code and so much more about understanding the actual implementation. So, I have this one Photoshop plugin called Texture Anything. And the way it works is that you drag this plugin in, and it has a list of, I don't know, 30 or 40 Photoshop actions that it applies, and then it will give a texture-grunge look to anything, like a logo. We'll put some screenshots up if you're watching the video right now. And I love that plugin, but I hate having to open Photoshop for it, and it takes a long time. And I was like, I would love for this to be programmatic. So I dropped it into Claude and said, "Convert this to using JavaScript," right? And then it went through it.
It surprisingly was able to read the Photoshop action, figure out what each action was doing, and then, out the other end, boom, it gave me the entire effect done in, I think it was Sharp, and it did a great job. It looked exactly like what I wanted, right? It reimplemented all of the blurs, every single thing that Photoshop would do, it now did programmatically in code. But the problem was, it was slow, right? Like, if I uploaded a 500-by-500-pixel Syntax logo, it would take 10 seconds to apply it. And I was like, I would like for this to be much faster, because there are knobs I would like to turn. You know what happens when you turn the grain level up? Then you've got to wait 10 seconds. So I took my programmer brain and was like, how are you doing this? You know, what technologies are you using? And over the course of maybe two hours, I went back and forth and said, "Why don't we use Wasm for this?" And then, "Okay. Oh, good idea. You're so smart." Yeah, I know. I'm thinking, you know, let me move this over to Wasm. And then I was like, well, time how long each step takes, right? And then I would look at it and say, this step is taking much, much longer. What are you doing here? Oh, I'm looping through every pixel. But we don't really need to be looping through every pixel. Maybe we can filter for pixels that have a color first, you know. And then I got it down to like 200 milliseconds. You know, 10 seconds down to 200 milliseconds. All because I knew how graphics processing in canvas works. I knew how WebAssembly works. I knew how to parallelize things in WebAssembly. And I didn't write any of the code myself, but that was major. If I were just to type "make it faster," maybe it could figure it out.
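The back-and-forth described above (time each step, then stop touching pixels you don't need to) can be sketched in plain JavaScript. The helper names and budgets here are invented for illustration; they are not from the actual plugin:

```javascript
// Time one step of a pipeline and flag it if it blows its budget, so
// "make it faster" becomes a measurable, deterministic check instead of
// a vibe. `performance.now()` is global in modern Node and browsers.
function timeStep(name, budgetMs, fn) {
  const start = performance.now();
  const result = fn();
  const elapsed = performance.now() - start;
  if (elapsed > budgetMs) {
    console.warn(`${name}: ${elapsed.toFixed(1)}ms, over ${budgetMs}ms budget`);
  }
  return result;
}

// The big win described above: skip fully transparent pixels instead of
// running the effect on every one. RGBA data is a flat array, 4 values
// (r, g, b, a) per pixel.
function applyEffect(rgba, effectFn) {
  let processed = 0;
  for (let i = 0; i < rgba.length; i += 4) {
    if (rgba[i + 3] === 0) continue; // alpha 0: nothing to texture here
    effectFn(rgba, i); // only pay for visible pixels
    processed++;
  }
  return processed;
}

// e.g. timeStep('grunge pass', 200, () => applyEffect(pixels, grunge));
```

On a mostly transparent logo, most iterations become a cheap skip, which is exactly the kind of change the model won't propose unless you ask the right question.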
And maybe that's something I could use an auto-research plugin for, but I think most people would have been like, "Check this out. I made this amazing Photoshop-to-JavaScript converter, and it works at the end of the day." And it's like, well, yeah, it works, but it's not really that great of an implementation. And I think the code that got kicked out at the end of the day was really good, and way faster. We have an episode that I want to work on (well, I've already written most of the episode) on using more deterministic methods for improving AI output, whether that is quality checks or speed checks, or those types of things, where it's not just telling the AI, "Hey, go make it faster," because it will take shortcuts, or cheap out, or lie, or do all kinds of stuff. So I think there's a lot here in terms of being able to understand where the problems are, and being knowledgeable enough to direct the AI in a way that fixes those problems. Again, I think those are all interesting questions. All right, next question here is from Alex. "To preface my question, I'm a network engineer and don't have much web developer background. Oddly enough, I enjoy the Syntax podcast despite having very little context to keep up with some of the episodes. With that said, I've been a part of a fair share of company website performance investigations, where I typically turn to the browser developer tools' network tab and confirm supporting infrastructure is operating normally to contribute. I'm looking for other ways to bring greater value to these investigations from an IT operations infrastructure perspective. What insights, tools, or practices would be helpful to web teams during performance troubleshooting?" Yes. Yeah. So, Alex, this is great. One thing I will say here is: make sure that the teams you're supporting with this stuff would actually like your support.
Not because you're not being helpful, but sometimes it can be unhelpful when somebody who doesn't have the full picture is giving feedback on stuff. You know, sometimes it might even be better to say, "Hey, I'm noticing these pages are slower," rather than trying to give a deep technical explanation as to why. The worst thing is when somebody doesn't know what they're talking about, punches something into Claude or ChatGPT, and then pastes the reply to you. That's so disrespectful to me, because you're not suggesting anything. I could have punched that in myself, and you're wasting my time making me read it as if you came up with it. Anyways, continue. No, yeah. I think that's right along the lines of it: it is helpful, Alex, if they find what you're doing to be helpful. That said, we do have a number of episodes where we've talked about this in depth. Episode 585, fundamentals: what makes a website slow. We also talk about it in 874, fast apps, easy perf wins. We talk about it in 897, making your app feel faster than it really is. And 972, these things make your app feel like crap on mobile. Those episodes are all a great place to dive into exactly what you should be looking for. But again, maybe it's a little different in the modern era, but the fundamentals are still here: most apps are slow because they're loading heavy resources. The network tab is a great place to see what is taking a long time. And like you said, it can identify whether the server response is too slow, or whether it's loading some giant images where it shouldn't be. There are a number of things you can certainly learn from the network tab there as well. Yeah. He was saying he's a network engineer and he knows how to open the network tab.
And that's a pretty good spot to start, but there are so many other spots, from a website being built on the server to actually being delivered to your browser, that can introduce latency or slowness, right? Like, if you, the network engineer, realize, oh, it's taking, I don't know, 800 milliseconds for this page to actually respond from the server, even just to get the HTML. At that point you're just like, well, that's slow, but why? You know, and at that point you probably have to start figuring out how the app actually works. And as a network engineer, you probably have some good insights into how to make this thing faster, you know, and you can get some observability in there. You get some metrics as to what's actually taking the time. Is it the database? Is there some weird thing where we're waiting for the tweets to load before we send the whole thing out, and that's taking 400 milliseconds and holding the page up? Past that, understand caching. Look into stale-while-revalidate. Those things can certainly help quite a bit. Also, like we said, there's a lot you can do on the UI layer to make your website feel faster. In some cases the actual rendering is slow, and that could be how your resources are being loaded in, or it could just be that you are re-rendering things way too often. That can sometimes happen in React applications. So just go through all four of those episodes and understand where slowness can be introduced. And when you're on the website: does it feel slow? Is it actually slow? Understand all these tools; the network tab is not the only one. You can go into the rendering tab, the flame graphs, and understand how all that stuff works.
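The stale-while-revalidate idea mentioned above boils down to one Cache-Control header. A minimal sketch (the helper function is invented for illustration): the response is fresh for `maxAge` seconds, and a cache may keep serving the stale copy for `swr` more seconds while it refetches in the background, so users rarely wait on a slow origin.

```javascript
// Build a Cache-Control value for the stale-while-revalidate pattern.
// maxAge: seconds the response counts as fresh.
// swr: extra seconds a cache may serve the stale copy while revalidating.
function cacheControl(maxAge, swr) {
  return `public, max-age=${maxAge}, stale-while-revalidate=${swr}`;
}

// e.g. res.setHeader('Cache-Control', cacheControl(60, 86400));
```

With `cacheControl(60, 86400)`, a CDN serves the page instantly from cache for up to a day while quietly refreshing it, instead of making a visitor sit through that 800-millisecond origin response.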
I think that's going to be a very good skill to have in the next little while, as everybody's cranking out these websites where they don't necessarily know what the code looks like. Being able to dive in and figure out: all right, we have this thing, but how can we improve it? Where is it actually going slow? Yeah. And I will say, here's an opportunity to plug Sentry, sentry.io. One of the coolest things Sentry does is they have tools that can identify what your slowest routes are, or potentially your slowest DB queries, those types of things. And it can make it really easy to find exactly where the slowdowns are in any of your applications. And man, we used this so much when we built the current version of the Syntax site, and I'm sure we'll use it just as much when we finally finish the next iteration of the Syntax website. It gives you a misery score and stuff, for how frequently a page is hit versus how slow it is. There's a lot of great stuff in Sentry for tracking these down, as well as really getting into where the slowdowns might be. Yeah, the front end has the web vitals, performance scores, all that kind of stuff. The database one (we've told the story many times, but I think it's hilarious) is that Sentry sent us an email one day, as an alert, being like, your database queries are climbing, and they're at a point where they're too slow now. And it turned out we didn't have an index on our shows page. So every time we added another show, it just increased the size of the database. And when we were querying, I think it was based on the show number or the slug, something like that, there was no index. So it had to loop through every single show in the database and look for it.
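The scan-versus-index difference in that story can be sketched in plain JavaScript. This is an analogy, not actual database code: without an index, each lookup walks every row, so queries get slower as shows are added; an index turns it into a direct keyed lookup.

```javascript
// Without an index: the database checks every row for each query, O(n).
function findShowScan(shows, slug) {
  for (const show of shows) {
    if (show.slug === slug) return show; // full scan on every lookup
  }
  return null;
}

// Roughly what CREATE INDEX ON shows (slug) buys you: one keyed
// structure built up front, then O(1)-ish lookups from there.
function buildSlugIndex(shows) {
  return new Map(shows.map((show) => [show.slug, show]));
}

// e.g. const bySlug = buildSlugIndex(shows); bySlug.get('some-episode');
```

At 10 shows the scan is invisible; at 400 shows, every page load pays for 400 comparisons, which is exactly the slow creep the Sentry alert caught.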
And I don't know that we would have noticed that, because it got just a couple of milliseconds slower every single time we pushed a show, right? It was not significant, but over the 400 shows added since we built it, it got way slower. And then we were like, "Oh, pop an index on that," and then we were good to go. Which is hilarious. Yeah. Well, you know what? I'm seeing right now that Syntax is getting some not-great scores for web vitals. I'm wondering why. So I need to dive into that, considering it's using the very fast Zero sync. And in practice it feels fairly fast, but I'm sure there's some stuff to learn here about why we're getting slow performance scores. Interesting. So, check it out: Sentry, sentry.io/syntax. Sign up. Use the coupon code tastytreat, all lowercase, all one word. These are some great tools, and I think very relevant to this question. Sorry, I didn't mean to turn your question into an ad, Alex. It just happened to work out that way. Mike says, "Hi, guys. First and foremost, I want to say thanks and give you your flowers. I was able to move into a developer role because of the podcast and both of your tutorials." Well, you're welcome. "I want to get into fixing electronics as a hobby. Wes, what do you recommend as a good setup or soldering station for a beginner?" Yes, man. I've got thoughts on this. Soldering iron: you can pretty much get anything, but if you want something decent and cheap, I have the Ryobi battery-powered soldering iron. And I also have the little, I think it's called a TS100, a USB pen-shaped soldering iron. And I reach for the Ryobi one much more often, because when I reach for the USB-C one, I can never find the freaking power cord that does 100 watts, or I can't find something to plug into the wall. I'd much rather just have the thing on my desk so I can solder with it. So, big fan of those.
It doesn't have to be the Ryobi one. It can be literally any battery-powered one, from whatever tool brand you have. I think even Adam Savage has a self-built one, where he took a DeWalt battery and plugged one of those little barrel adapters right into it. So that's my recommendation there. Along with that, I would probably recommend getting a whole bunch of little connectors and clips and whatnot. Also, I forget what it's called, a solder sucker, where you push it in, you let go, and it sucks the solder out. I'm trying to see if I have it on my desk right now. No, I cleaned up my desk. A little bit of flux, in a little syringe. Flux is really messy, and you need it when you're soldering or desoldering components. So I always recommend getting it in a little squeezy pen so the flux doesn't make a mess. Flux is, when you're heating up your solder and it doesn't want to nicely flow, a lot of times you see people who are like, "Oh, I suck at soldering," and you look at it and it's just this ball of crustiness. That's, yeah, probably because you don't have flux. Flux is this gooey paste that you put on top, and when you apply heat to it, it does something. I don't even know how it works, but it basically makes your solder just kind of flow out. And you need that when you want it to flow into a joint that you're soldering. Or the opposite: if you want to suck up the solder with a copper wick or something, you need to put a little bit of flux on top of it. So: some flux and a solder sucker. Yes, absolutely. And a little, what's it called, a braid? I don't know the names of any of these things, but if you just Google it, you can find some stuff.
Go on AliExpress, take a look around at all the fun things you can buy, figure out what the absolute cheapest thing is, and then buy one level up from that. You'll probably be happy. That's good advice. Although be careful: leaded solder is certainly a thing in China. A lot of people love leaded solder because it's so much easier to work with, but it is also very harmful to your health. So if you are buying stuff off of AliExpress, be careful that you're not actually buying lead if you're going to be huffing the fumes. Yes. Important. All right, next question, from Kalin: "Hi, Syntax. How do we balance prepping for no-AI interviews while job descriptions demand AI proficiency? Honestly, after getting spoiled by tab completion and instant AI answers, manual coding feels like going backwards. Is old-school prep still the only way to prove experience in 2026?" Well, yeah, that's an interesting thing. To have a job that demands AI proficiency but doesn't allow you to use AI in interviews feels like a bad structure for an interview, if you're asking me. Like, this job demands jackhammer proficiency, but guess what? You've got to use a chisel. Okay, what are you testing me on here? I think some of this is really just dialing in your understanding of things, because many times the code we write is less important than understanding the concepts of writing good code. So a lot of this is less about the code you end up writing and more about being able to express your understanding of systems and of best practices. Even if you're used to the tab-completion style of things, if you can prove to them that you understand the core concepts — why you're doing things, what the best practices are —
I feel like those are the things that are always going to keep you ahead of somebody who's just like, "I don't know, I just hit the button and it works," or, "Why am I putting this component in? I don't know, it's just how I've always done it," right? Being able to express yourself and show your work in those ways lands better, to me. But it is interesting that they wouldn't allow you to use AI in a job interview that demands AI proficiency. I saw this from Brendan Faulk on Twitter the other day and I thought it was genius. He says, "I believe we have found the best AI-native coding interview. We call it the Composer 1 interview." Composer 1 was the very first model from Cursor. He says candidates get one hour to build a real medium-sized project live. The only constraint is they must use Cursor's Composer 1 model. Why Composer 1? One: it's extremely fast. Candidates can accomplish a lot in an hour, and more prompts and more code equals more surface area to assess. And two: it's perfectly mediocre. I love this. Composer 1 is good enough that the output is decent, but not good enough that it does everything perfectly. Candidates can't rely on the model — they have to be good engineers themselves. Weak candidates use one agent, write bad prompts, accept code they don't understand, and end up with a poorly structured codebase. Exceptional candidates run parallel agents, write detailed prompts, and enforce very high coding standards. I saw this and I was like, this is genius: give them something that's not that good but is fast, and see how they use these tools.
This is not really answering your question about how to prep for a no-AI interview, but I thought it was just so smart — it really separates people who understand how these systems work, how they can plan, how they can tackle things, from people who are like, "I just pushed the button, I typed in the box what you told me to build, and I'm going to accept everything that comes in." And that goes along with what I was saying: even if it's just tab completion and not prompts and agents, if the tab completion is giving you something that is bad, you learn a lot from watching somebody modify the suggestion, or choose not to use it because of why it's bad. Cursor learned a lot by watching you not accept your tab completions as well. I'm pretty sure that's how they trained their new model, Composer 2. They're sitting on this gold mine of data: "We have done two billion requests, we proposed this code, and here are the cases where people just deleted the whole thing; here are the cases where people changed these things." And that's pretty good. Cursor rolled out their Composer 2 a couple days ago, and it actually is pretty good. But you saw the drama around that, right? Yeah. So they took — what, Kimi 2.5? — and trained on top of that, but they didn't say that's what it was. So there was all this drama around it: people thought they just took Kimi 2.5, slapped their name on it, and called it a day, which is a bit of a PR crisis for them. Yeah, a little bit. But that said, it's actually really good. I used it for like a week and I was like, "This is actually pretty good."
It's not as good as Opus or Codex or whatever the latest OpenAI one is, but it's pretty good. I would say pretty good. And you can use it with Zed — I didn't think you could, but you can. Eric says, "Have you tried Figma? What do you think about their dev mode?" So yeah, I thought this was an interesting question since we're talking about design and how to approach these types of things. I've never used dev mode in Figma. Have you, Scott? No. Get that out of the way first. No. I mean, I've looked at it, but I've never considered using it for anything serious. Yeah. I always look at these types of things and I'm like, I don't really want my design tool writing code for me. And now we're at a spot where you can just screenshot the thing and have it coded for you. I don't know that we necessarily need that. I will tell you what we do still need: an app like Figma to actually design these things. Whether you're doing that via prompting and it's just kicking out code — which is what a lot of these do; I know Paper, and I think there's another one called Pencil, there are several of these apps people are working on — which are just like Figma, but instead of rendering out WebGL or whatever Figma is using, they're literally just rendering out HTML elements. We certainly still need apps like this that are going to help us figure out what the design looks like, with some sort of interface. Whether that's clicking around and dragging things, or inputting values into boxes, or simply typing into a prompt box and saying "space these things out 20 pixels more" and seeing the result. I think we still need those apps, because the alternative of just YOLOing it — and this is what we talked about earlier —
The alternative of just YOLOing it and having Claude kick out a design for you is not working out — it's making some pretty brutal interfaces. I've actually been going into Figma lately to design, instead of designing in browser, but not using the code mode or code output. I'm not even giving screenshots of it to AI. I've just been really liking Figma lately for straight-up designing. I primarily design in browser, but as Wes knows more than anybody, what happens with me is I start from such a systemized point of view that when I design in browser, my designs feel soulless, because I don't get a chance to play. I build the system, and then, okay, it looks fine, I'm bored, let's move on. Whereas if I go into Figma, I can really get arty with it and throw stuff at the wall. I actually really like some of the auto layout stuff in Figma. But their dev mode, their code stuff — I personally wouldn't touch it, because I don't trust somebody else's code. Except for Claude's? I don't trust Claude's code, I'll tell you that. Okay. I did test Paper the other day, where the idea is that Paper is like a canvas where you can feed it a website or a component, and it will recreate that component in Paper. Then you have this interface where you can either click and drag around visually, or just prompt it, with an MCP server, to change things. And then the idea is that once you're happy with it, you put it back into code, and your codebase is the source of truth. Only when you need to work on a design do you take a piece of the website out into this — what do you call it — canvas. You work on it with your designers and whatnot, and once you're happy with it, you stick it back into the website. So that was kind of an interesting way to go about it.
That way you don't have your designs and your actual implementation of your website drifting from each other. So again, I don't know that we know the answers to all of this stuff, but certainly a lot of people are trying to figure out what that looks like. Next one here, from Bosch: "Not really a question, but Wes, your sick pick the other day was Ice, which is inactive, and you should use Thaw instead." Yes, use Thaw. We'll link up Thaw. Thaw is the modern version of Ice, which is a menu bar manager for macOS, and Thaw is pretty much a straight fork of Ice that's being maintained. I've got to say, I've been using Thaw for a bit, and it looks good. I have had zero issues with my Ice, but I didn't even realize it was unmaintained. I'll tell you, you'll have issues when you upgrade to the latest version of macOS, because I'm on the betas and Ice doesn't work — and then I installed Thaw and it works just fine. So that's good. Yeah, I'll tell you one issue I have with it: I use Ice to add little backgrounds to my menu bar. It looks kind of cool, but they're a little bit laggy in how they draw — if I hover over top of some of the items, you can see that they don't paint as quickly as possible. So maybe that's fixed. I'll give it a shot. Thank you, Bosch. Moth Man says, "Since AI has made writing and changing code so much easier, one would expect that as a collective we would drift towards more performant or more standardized APIs — for example, moving servers to Go, or moving front-end code to web standards and web components instead. But I have seen a bigger shift towards proprietary ecosystems like Next.js. What do you think that is, if you accept my premise at all?" Yes, lots of things to say here. First of all, we are recording an episode — I think by the time you're listening, it will probably be out a week before this one — on moving systems, right?
Like, I moved my Express codebase, which had been bugging me for many years — I moved it over to Hono, and we have a really good episode on how to tackle a project like that. Because I don't think it's as simple as "move my server to Go" and all of a sudden everything just moves over and it's totally fine, or everything is as fast as possible. I don't think we're there just yet with this type of stuff. And the question here is: "I've seen a bigger shift towards proprietary ecosystems like Next.js." Pretty sure by the time you're listening to this, the announcement on the Next.js stuff will have gone live — they are rolling out a whole bunch of stuff about hosting elsewhere and adapters, which is pretty cool. So we're going to be talking to them about that as well. What do you think that is? I don't know — I wouldn't say these things are proprietary. If anything — like, I saw Linktree the other day had jacked their price up by 3x or something stupid like that. I'm like, yeah, that's because you shouldn't pay somebody for a page that has six links on it. It's ridiculous that you're even doing that in the first place, and people are realizing, oh, I don't need to pay $300 for a Linktree, I can simply make my own. So I wouldn't say we're moving towards proprietary stuff — not at all. I think we're moving much more standards-based. All of the stuff that is being built today — if you look at what the Claude Code stack is, it's all very much based on web Request, web Response, fetch, streaming, all that good stuff. Yeah. I do think there is this default AI stack, though, where if you tell it you're building a web app, it leans on all this stuff that you oftentimes don't need. Yeah, I wonder the same thing, Moth Man. I've been doing some Rust with AI, and it's really nice.
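To make the "web Request, web Response, fetch, streaming" point concrete: the newer server stacks converge on a plain function from a standard `Request` to a standard `Response`, which is why they feel less proprietary than older framework-specific APIs. A minimal sketch — the route and payload are invented for illustration, and this is just the general handler shape that runtimes like Hono, Bun, Deno, and Cloudflare Workers accept, not code from any of their docs:

```javascript
// A handler written purely against web standards — no framework imports.
// `Request`, `Response`, and `URL` are globals in Node 18+, Bun, Deno,
// and edge runtimes like Cloudflare Workers.
function handler(request) {
  const url = new URL(request.url);

  if (url.pathname === '/api/shows') {
    // Hypothetical endpoint returning a JSON payload.
    return new Response(JSON.stringify({ shows: ['AI Sucks At CSS'] }), {
      headers: { 'content-type': 'application/json' },
    });
  }

  return new Response('Not found', { status: 404 });
}
```

Because the handler only touches standard types, moving it between runtimes is mostly a matter of changing how it's mounted, not rewriting it.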
I've been doing all kinds of stuff that — yeah, I'm sorry, I don't have a formed opinion on this, even though I've been thinking really hard about it. I felt like I had an eloquent answer here, and then it comes out like that. Moth Man, I think it's just what the models are trained on and what they're really steered towards. We looked at that in the State of JS — this is the stuff that AI is going to suggest the most — and I have a hard time with that, because it's always suggesting a bunch of stuff I don't want it to use. It has never once suggested that I use web components, even though there are times when a web component would be the right call. So yeah, I think some of it comes down to — we're going to say it again — personal taste, and you have to know what you want. There's your competitive advantage: knowing what you want. The people that are complaining about AI not suggesting what they want — like, are we that lazy that you can't spend three minutes figuring out what framework you want to use, and you're just whining? Not directly at you, Moth Man — you're cool, you're one of us. But come on. Are we that lazy? I guess if those are your competitors that are not using Go as their server, and it's really going to slow them down so much, then type up a new one, you know? Make a new competitor, make a new slop fork, something. I don't know. I think the whole theme of this episode is: you still have to think. Please bleep that out. But you still have to — you can't be an idiot yet. You can't. Yeah, use your brain. Use your brain, folks. All right. Jim Carrey Clone says, "Hey, Wes and Scott.
While referring to the TanStack Router docs, I noticed something called vibe rules, and after digging, it seems like an npm package. npm packages — for example @tanstack/react-router — now ship an llms folder inside the dist in node_modules, which can be installed with vibe rules. Is this a replacement for Context7 or agent skills, or at least docs? Have you used it personally?" Scott, have you used vibe rules? Yeah. So vibe rules is a package, and what it does is allow you to manage rules files — whether that's cursor rules or those types of things. The way this works is through a CLI, where you would run vibe rules install and point it at things, and the idea is that the library ships the markdown with the package. That way you can install the vibe rules into whatever it is you're using, just via their CLI, because it's been shipped. I don't really use this style of rules files. It looks like it can put them into the agents.md file, or into claude.md. There's some of this I can understand existing, but at this point, personally, I would rather a library ship a skill and have that skill available, rather than having it be a rules file. I hadn't seen vibe rules before we got this question, and it doesn't necessarily feel like something I'm going to go nuts using. But the one thing I do want to applaud the TanStack folks for here is taking ownership of that part of things, because too often we are just pointing the AI at an llms.txt, or we're crafting our own custom skill or our own custom agents file. So if the library can tune one up specifically, that's great. I've been working on this a little — I mentioned before that I have a graffiti best-practices skill that is trying to do that same kind of ownership, right?
So, for my library, I'm trying to take that ownership — just doing it in a different way. I do want to point you to a blog post David Cramer wrote called "Optimizing content for agents," which I found really interesting. He was talking about the Sentry documentation: the Sentry docs are written in markdown, and the site renders that markdown for the web. But instead of using the old-school — which is, what, like six months ago — llms.txt style of duplicating the docs, what this does is: when an AI agent hits the Sentry docs, the docs return markdown to the agent, not a web page. That way it's not getting a whole bunch of DOM stuff — it's just getting the markdown when an AI hits it. This, to me, is — I don't want to say a better approach, but an approach to the same problem: we're trying to optimize where agents can go to get correct information when they tackle a problem. Yeah, it's tricky, because part of me likes this idea of shipping your docs with your package, meaning that if you are on version 4 and version 6 is already out, it's simply pulling the docs from that exact version. But like you said, that's not to say the agent can't just reach out to the web. And I certainly don't need more text in my node_modules folder. We have an episode on why node_modules is so big, and spoiler alert: it's a lot of text. If everyone starts adding these vibe rules, it's going to get significantly bigger. But I do like this tool. There are kind of two big tools out there for doing this type of stuff: there's obviously vibe rules, and then there's Vercel's — Vercel has something called skills. Both of these are just CLI tools for installing agents.md or rules or whatever-have-you instructions on how to use the thing, via the CLI.
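The Sentry-docs idea described above — return markdown to agents and HTML to browsers from the same URL — is classic content negotiation. Here's a toy sketch of the decision; this is an assumption about how such a server might branch on the Accept header, not Sentry's actual implementation (real detection might also look at the User-Agent):

```javascript
// Decide which representation of a docs page to serve, based on the
// request's Accept header. Agents that ask for markdown get the raw
// source; everyone else gets the rendered HTML page.
function pickDocsBody(acceptHeader, page) {
  const wantsMarkdown = (acceptHeader ?? '').includes('text/markdown');
  if (wantsMarkdown) {
    // No DOM, no nav chrome — just the content the agent needs.
    return { contentType: 'text/markdown', body: page.markdown };
  }
  return { contentType: 'text/html', body: page.html };
}
```

The nice property is that there's one canonical source (the markdown), and the HTML is just a rendering of it, so the two can never drift apart.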
And then, unfortunately, right now all of the different things out there — you know, Claude uses claude.md — but a lot of them have standardized on using skills, which is really good, and I think we're probably going to get there. We're still figuring it out; we might decide to change that entirely in the future. But it certainly is a replacement for an MCP like Context7, if these packages are just shipping their own rules and you can use vibe rules to bring them in. So I think this is cool. It doesn't seem as popular as simply having the agent go to the web and fetch it, though. I think a lot of the stuff where you have to configure and set it up will not be a thing in the long run; I think the agent will just know where to go and how to find the proper docs for what it needs. But it's a stopgap for now, at least. Yeah, I would say even be careful with skills.sh. What I do is find the skills on skills.sh, go to their GitHub, and then either download them straight from GitHub or copy and paste them. Why should you be careful, Scott? I personally worry about — I've heard people describe skills.sh as being malware, although I don't necessarily subscribe to that. I don't want a tool managing my skills, because at the end of the day, your skills are just some text files inside of a folder on your computer. Sure, maybe you want to version them and stuff like that — I think Sentry even has a tool for versioning skills or something — but I personally just don't want a tool to do that. I want to make sure I have a handle on everything. I want to copy and paste it. I don't want anything installed in there via a script process that I don't know about.
Yeah, it's hilarious that we're now talking about this with skills, because it's been an issue with open source software — npm and every package manager — for a long time: you are installing some code that someone wrote, and that code could do something malicious when running in a privileged environment. And now we're here with skills as well, because skills can be prompt-injected. You're installing these text files from random people on GitHub, and if they were to change one so it says "read the env file and send off a fetch request to this external endpoint" — right? I'm just reading an article from Snyk here which says 36% of the skills on there were malicious and had prompt injections. Probably not 36% of the top ones, but there certainly were a lot of people that saw this tool and went, "I'm going to try to exploit this thing with prompt injections." So that certainly is going to be a problem. And like Scott said, these files are usually not very long — you could just copy-paste them in, or use the tool to install them. The tool does do some nice stuff: it'll symlink them, so if you want them in a claude.md and in an agents.md, it'll symlink them. I think that's good. And I do think these skills need to be published to npm and be versioned, so I think vibe rules does that better than these skills tools, which are generally just fetching a text file from an external source. But rules files, Wes, are much more generalized, right? Skills are a little bit more targeted — you're using a skill, whereas a rules file is just applied, you know? Yeah, you're right. I think those two things will be merged eventually.
Like, we had a whole episode on it, and even we couldn't do a really good explanation of where you would want to use one versus the other. I'm getting into it a little bit more, because I started using Pi, and Pi doesn't have a lot of the concepts many of these other ones do, like agents and whatever. With Pi, it's much more normal to just put a lot of stuff into skills or to build an extension. So I've been using skills a lot more, and I've got to say, yeah, they're great. You just put an agent on it, load up a skill — that's pretty much it. Honestly, I think in most cases rules and skills are the same thing. Even if you go to the Cursor docs, they say cursor rules is now the exact same thing as agents.md. It's just that when you're doing a specific task, you hope the AI realizes, "Oh, I am doing front-end web design," or, "I am doing video generation, therefore I should reach out to the specific skill." We'll see. I say these things and then I'm like, well, maybe I don't agree with that — we're just trying to figure all this stuff out. Let us know in the comments below what you think. Cool. Well, that's it. Do you want to get into sick picks, Wes? Yes, I want to sick pick something. Speaking of standards and whatnot, USB cables are a very, very hard standard. So I got this thing called a USB cable tester board, and then I also got an additional little USB feature-detection one. The way these things work — the tester board itself is extremely simple. Let me pull out a cable here and show you. It's basically a continuity tester: if you've ever used one, it's doing that, figuring out which wires are connected to what inside of your cable.
So, if you ever have a cable and you realize, hm, I'm not sure if this supports USB 3 or Thunderbolt or fast charging or Quick Charge or any of those things — the way it works is that it simply looks at the pins on the USB cable and lights up which of the pins are actually connected. By looking at those pins, you're able to figure out if USB 3 speeds are supported, if fast charging, Power Delivery, Quick Charge, all those things are. It also has an RJ45 port on it, which is really nice if you ever have a flaky cable — let alone "does it support this thing." Like, I have one Thunderbolt 4 cable that is flaky: it still works for charging, but it doesn't work for data. I'm able to plug it into this thing and look — okay, I see that if I wiggle it, the data line goes on and off — and then I know that cable is not good. And then this one as well — this one actually has a microcontroller in it. Let me show you real quick. Ooh. So it'll show you the voltage draw, and it'll tell you if Quick Charge or Power Delivery is available. It tells you which voltage is being used, because USB can run at three different voltages. And that is so handy if you have something that is dead and you don't know if it's actually charging or not. You can just use this thing and say, "Oh, it's pulling current," or, "Oh, it pulled current immediately and then stopped — there's probably something fried inside of it." So yeah, I got two nice little tools for debugging all of my USB gadgets. Sick. You've got to give me one of those. That looks cool. I'm going to sick pick a TV show — I watched the first season of this when it came out a while ago — which was Jury Duty.
I think I probably sick picked it then. The concept of Jury Duty was that everybody on a jury is an actor except for one person, who thinks they're actually on a real jury case. Because of that, they're just kind of manipulating this one person with all these ridiculous comedic events. And it was the funniest show — man, it was the funniest show. I loved this show. It was such a big hit when it came out that people were like, "All right, this is awesome, but they're never going to be able to do it again, because now that they have this concept, if this ever happens again, they're going to have to find somebody that has never heard of this show." Right. Well, that first one came out — I forget when; it looks like 2023. So the second season just came out — the first three episodes, at the time we're recording this — and they're doing it as a company retreat. We watched the first three episodes, and I was just in tears through most of them. The idea is that they hire a temp worker at a fictional company — a hot sauce company — with the role of assistant to one of the employees, and their job is to keep that one guy's everything in order on this company retreat that's a big deal. But everybody at the company retreat — including the people managing the venue as well as all of the employees — are all actors. The whole thing is just meant to mess with the one guy who's been hired as a temp. And again, man — what's this called? It's called Jury Duty, and it's on Prime Video. I've just been loving this new season, because you would not believe the stuff — they're just seeing how far they can push it. And if you didn't see the first season, Wes, the first season is well worth your time. As far as comedy shows go, it's just extremely funny.
And my god, there are just some incredible moments in both seasons of this so far. So, big fan of Jury Duty — check it out. Thank you so much for your questions. If you have any questions for us, head on over to syntax.fm, leave your questions in our potluck question form, and we'll get to them on the show. If we didn't get to your question, we may get to it in the future, because there are always just way too many good questions. But thank you for leaving your questions, and we will catch you in the next one. Peace. Peace.
