Senior Developers are Vibe Coding Now (With SCARY results)

Traversy Media | 00:17:17 | Feb 18, 2026
AI-generated code is enabling faster development but compromising quality, security, and reliability. This chapter breaks down the problem, explores its causes, and discusses practical ways to address it.

Senior developers are embracing AI, but Brad Traversy warns it’s increasing security flaws and code debt unless we tighten reviews and guardrails.

Summary

Brad Traversy (@Traversy Media) confronts a timely reality: AI-assisted coding speeds up delivery while quietly inflating risk. He cites Veracode and Code Rabbit findings showing AI-generated code failing security tests and carrying more issues per pull request than human-written code. Traversy emphasizes that AI can grasp syntax but often lacks domain knowledge and architecture awareness, leading to misconfigurations and outdated dependencies. He also highlights growing PR sizes (about 18% larger with AI) and the need for stronger review processes and smaller, clearly scoped changes. The takeaway is not to abandon AI, but to treat it like a capable junior engineer—worthy of review, with guardrails and thorough human oversight. Traversy demonstrates practical ways to integrate AI safely: layered reviews, Code Rabbit CLI, and a two-pass PR process before final approvals. He also reflects on trust, knowledge transfer, and the importance of accountability when code is pushed to production. In short, the AI shift is here to stay, and the path forward is disciplined usage coupled with robust code reviews and documentation.

Key Takeaways

  • Veracode reports show 45% of AI-generated code failed security tests and introduced OWASP Top 10 vulnerabilities.

Who Is This For?

Essential viewing for senior developers and engineering leads who are integrating AI into codebases or evaluating AI-assisted tooling. It explains concrete risks and offers practical guardrails and review practices to maintain quality.

Notable Quotes

"AI generated code is causing some serious problems. Security vulnerabilities that are introducing real threats into applications, sloppy code, and bloated pull requests."
Opening summary of the core risk: AI can degrade quality and security.
"AI generated code is producing PRs which are on average 18% larger."
Cites metrics that larger PRs hinder review and quality control.
"Computers can never be held accountable. That's your job as a human in the loop."
Emphasizes human responsibility in code reviews.
"If you skip review, you don't eliminate work. You defer it."
Adds authority to the need for thorough reviews.
"The proper place to introduce AI feedback is before the PR goes up for review."
Advocates pre-review AI checks to save reviewer time.

Questions This Video Answers

  • How can I safely integrate AI tools into code reviews without increasing security risks?
  • What is Code Rabbit and how does it fit into a two-pass PR review process?
  • Why do AI-generated PRs tend to be 18% larger and how can teams curb that?
  • What OWASP Top 10 vulnerabilities are most common in AI-generated code, and how can they be mitigated?
  • What guardrails should a senior developer put in place when using AI for code generation?
Tags: AI in software development, Code reviews, OWASP Top 10, Veracode, Code Rabbit, pull request management, security vulnerabilities, software engineering best practices, Brad Traversy, Traversy Media
Full Transcript
AI generated code is causing some serious problems. Security vulnerabilities that are introducing real threats into applications, sloppy code, and bloated pull requests. This is what the latest reports are showing. It turns out that while AI is helping us write code faster, it's also degrading the quality of our code. And not only is it producing more bugs, it's producing an entirely new kind of issue altogether. Now, even if you're not using AI to write code yet, you should still know about the type of issues that are out there because in one way or another, this kind of affects all of us. In this video, we're going to break all of this down. We want to look at the kind of problems these reports are seeing. We're going to talk about what's causing these issues, and I also want to take a look at what we can do to mitigate these risks. Seeing how things changed from the start of early 2025 to the second half of the year made me realize that I can no longer ignore this AI shift. I saw the same senior developers who at the beginning of the year brushed it off as AI slop or fancy autocomplete start embracing it like never before. One report that stood out to me was done by a cloud provider. And this report surveyed 791 senior developers, all with 10 or more years of experience. And this report stated that 32% of the senior developers they surveyed said they had shipped AI generated code. Now, to be honest, these numbers seem to fluctuate depending on who does the survey and what time of the year this was done, but generally speaking, this number seems to be pretty accurate based on what I've seen. Whether you're convinced that this is a good idea or not isn't really the point anymore. The fact is there's a lot of decision makers who are buying in, and for now this seems to be the direction we're headed in. So what are these reports finding?
Well, to start, this report by Veracode found that 45% of code generated by AI failed security tests and introduced OWASP Top 10 security vulnerabilities into code. If you don't know what OWASP is, it's a globally recognized foundation that provides guidelines and information on software security. And every year they release a list of top 10 security vulnerabilities that applications face. This list includes things like cross-site scripting attacks, SQL injections, misconfigured access controls, and much more. So these are not minor issues. What's even worse is that these results remained largely unchanged even as models dramatically improved. Another report, which was done by Code Rabbit, reviewed 470 open-source GitHub pull requests, and this report had similar findings when it came to security being an issue in AI generated code. This report showed that on average AI pull requests had 10.83 issues per PR while human generated code had 6.45 issues per pull request. That's 1.7 times more issues in AI generated code. Now, if we break this down by severity levels, AI underperformed in all metrics here. When it comes to critical issues, it was 1.4 times higher. Major issues were 1.7 times higher. And when it comes to minor issues, it was nearly double for AI generated code. Now, let's take a minute to see what's actually happening here. So, this report breaks these down into four categories. We can see that we have logic and correctness, code quality and maintainability, security findings, and performance issues. For logic and correctness, the two that stand out for me are going to be the incorrect dependencies and sequence and misconfiguration. I see this quite a bit where I'm working with a newer library or package, and anytime I'm trying to write some code, if it can't figure it out based on the latest version, I'm going to get imports that are based on an older version or simply just off of older documentation.
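To make the OWASP-style findings above concrete, here is a minimal sketch of the cross-site scripting pattern those reports flag, in TypeScript. The function names (renderCommentUnsafe, escapeHtml, renderComment) are illustrative, not from the video; this shows the general class of bug and its standard mitigation, not a specific report finding.

```typescript
// Vulnerable pattern often seen in generated code: user input is
// interpolated straight into markup, so attacker-controlled HTML runs.
function renderCommentUnsafe(userInput: string): string {
  return `<p class="comment">${userInput}</p>`;
}

// Mitigation: escape HTML metacharacters before interpolation.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function renderComment(userInput: string): string {
  return `<p class="comment">${escapeHtml(userInput)}</p>`;
}

const payload = `<script>alert("xss")</script>`;
console.log(renderCommentUnsafe(payload)); // script tag survives, would execute
console.log(renderComment(payload));       // rendered inert as visible text
```

This is exactly the kind of one-line oversight a first-pass AI review or a human reviewer should catch before the PR goes up.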
So, it's really having a difficult time piecing things together and trying to understand what's modern and what should be used. If I scroll down to code quality and maintainability challenges, I can see this is around code readability, unclear naming, code formatting and errors, and then unused redundant code. Now, the one that I've seen the most is going to be around redundant code. So, for example, if I'm trying to create some kind of UI component, if I need to use this in other parts of my application, I might consider this and I'll know to create this as its own component. So, I know how to follow DRY principles of not repeating myself. With AI models, they're usually trying to find the fastest path to get there. So, if I create a page, it's going to generate all my UI components for that page in that one section. It really won't consider all these other pages unless we clearly specify it or it's kind of obvious to the model. So, I've seen the redundant code part quite a bit, and this usually adds a lot of tech debt. Now, if we look at the most important part here, which is security vulnerabilities, we can see that this confirms what was on that Veracode report, where some of these issues were on that OWASP Top 10 list. So we have improper password handling, cross-site scripting attacks, and insecure deserialization. So a lot of these issues are simply due to a lack of human oversight. At least that's the way I see it. There's a couple of mistakes that developers make that I think are really leading to these issues. First of all, I think we have a lack of understanding of what AI can and cannot do. I think we overestimate its ability and we try to just give it too much. AI has gotten good at writing the actual code. That part it can do quite well. Where it lacks is in domain specific knowledge or an understanding of your overall architecture, your project requirements, and how everything comes together.
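The "improper password handling" finding mentioned above can be illustrated with a short sketch, assuming a Node.js backend and using only the built-in crypto module. The helper names and parameters are hypothetical, not from the video; the point is the contrast between storing a password as-is and salted key derivation.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Risky pattern sometimes seen in generated code:
//   db.save({ user, password });  // plaintext password stored, never do this

// Safer pattern: derive a salted key with scrypt and store salt + hash.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`; // salt is stored alongside the derived key
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  // Constant-time comparison avoids leaking information via timing.
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

A model asked to "add a login" may produce the risky version unless the prompt, project rules, or a review layer explicitly demand the safer one.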
If we let AI think for us, it's going to spin out of control, it's going to assume a bunch of stuff, and performance will degrade as your application grows. The second problem is pull request size and oversight, or lack thereof. Now, AI makes it easier to produce code, and as that happens we tend to ship more code and create larger pull requests that are now harder to review, and this is confirmed in a report that says AI generated code is producing PRs which are on average 18% larger. From the looks of it, AI seems to give us the illusion of speed by writing our code faster, but makes us pay for it on the other end when it comes time to ensure quality. How do we fix that? How do we ensure that our team can use the speed of AI but also ensure that our team ships quality code without creating another bottleneck? Now, some of you might be thinking, let's just not use AI. And trust me, sometimes I feel that way, too. But until things change, this just looks like something that we're stuck with. So, we're just going to have to figure this out. For now, the solution seems to be to put tight guardrails on AI, to break things up into smaller chunks to avoid it going off the rails, and strong processes and documentation. So, we treat AI like a very capable junior engineer, one that can write code, but one that should never be trusted without thorough review. And this is where code reviews are more important than ever. I still vividly remember the first time someone sent me AI generated code in a pull request. Now, this was several months ago, maybe over a year ago, and at that time, I wasn't that good at spotting AI generated code. But something about the comments made me suspicious. You know how AI adds comments to literally everything. Well, after about 30 minutes of review, I reached out to the original author and I asked him about a particular section in the code. And their response was, "I don't really know. I'm just trying to get it to do X," whatever that was.
And at this point, I realized they had no clue what the code actually did or how it worked. And I was pissed at that point. I decided that I'm never going to review a PR that the original author doesn't understand themselves. What made me so upset about this was that it broke the trust that developers have between each other. The burden of proof is on the original author to ensure that the code in the pull request works, not the reviewer. By reviewing your own code, you're actually holding yourself accountable and you're respecting the reviewer's time. If you can't explain your own code or prove that it works, you shouldn't ask someone else to approve it. It's clear to me that with AI, this is only going to become more of a problem. And what's even worse is with more developers shipping code they don't fully understand, we're breaking that knowledge transfer mechanism that makes teams so resilient. If the original author can't explain why or how their code works, how is an on-call engineer going to solve a problem at 2 a.m. when there's an outage? As Addy Osmani, who is a lead engineer at Google Cloud, writes in his article, "If you skip review, you don't eliminate work. You defer it." And another quote, from an IBM training manual: "Computers can never be held accountable. That's your job as a human in the loop." But what about AI code reviews? After all, if we have these tools, couldn't we just have AI review its own code? Aren't they meant for that? Well, plainly and simply put, absolutely not. Now, don't get me wrong, I use these tools. I use Code Rabbit personally, and I couldn't really imagine working on a team or an open source project without these tools, but these tools are great for enforcing patterns, ensuring that there's some structure, and they're also great to have another pair of eyes on a first review. But a lot of developers miss the point of these tools, where you have two sides of a spectrum.
You have the ones that don't trust AI to review code, and you have the ones that trust it way too much. And we need to clear something up here. These tools act as a first-pass reviewer, not the final review. Think of it like spellcheck. You wouldn't want to send someone a document where there's a bunch of red squiggly lines under certain words and suggestions. You ideally would rather fix those spelling errors, see if the suggestions are actually good, and then send them off to someone for review. At least I hope you would, because I would be pretty upset if you sent me something to review and made me do all the spellchecking. Well, these tools should be treated the same way. They're powerful, but they're not that final check. So, the idea is to add a second and third layer of verification before you send this off to someone for review. This way, you're actually respecting their time and you're not having them review all this basic stuff, things that you could have cleaned up yourself first based on those AI suggestions. Let's take this pull request for example. This was actually something that was sent to me for review. Now, I would have caught these errors that Code Rabbit found, but having it automatically find this for me before I had to go through all of this was a nice layer of validation. Now, ideally, this would have been caught before it was pushed to GitHub. As one Redditor writes, "The proper place to introduce AI feedback is before the PR goes up for review. Have the original code author review any AI suggestions and then act on them. Forcing everyone to review code and then also review AI suggestions is a waste of time." And I couldn't agree more. I want to try something out here and actually get our hands dirty and test one of these AI code review tools so we can see what this process is like. So, I use Code Rabbit. That's the tool I'm familiar with.
And we're going to start by first reviewing this diagram on the official Code Rabbit website. Now, here we can see two main stages. We see our local development where we're building our application. And we can actually test in that local environment before our code is pushed, as we talked about. Now, in this section here, we have this as the first-pass review. We write our code, we check it, we fix it right there. Now, once that's done, we push this code to GitHub. This is the second phase, and this is where we have that second review process. So it's a double check here. Again, if we catch something that we missed, we have that perfect set of second eyes on that review process. Now from here, this is where we should pass this off to a reviewer. Once we do that first and second check, we give this to someone and they can ensure that most of the things that we may have missed have already been caught, and now they can focus on looking at the important things inside of this review. So now that we have a high-level understanding of this, let's actually jump in and learn how to perform that local review, that first stage in this process. So in order to perform local reviews, we're going to have to use the Code Rabbit CLI. This is what's going to allow us to run commands and check our code all within that environment. Now, we're going to get started by going to coderabbit.ai/cli. And here you're going to see this command. So I'm on a MacBook. I'm going to go ahead and copy and paste this in. And that's going to start that installation process, and just run through it. Now, if you're on Windows, you might want to look up what that command is, because I think it's a little bit different. Now, the official documentation tells us that we need to refresh our shell. So, I'm just going to copy this command here and we're going to paste that in. And that should take care of that. And in the next step here, we're going to want to log in. So, make sure you have a Code Rabbit account.
Just go to their website, sign up there, and we're just going to go ahead and run this command. That's going to be coderabbit auth login. Now, there's going to be different options in how you can log in. In my case, I'm going to log in with GitHub. And this is just going to give me this auth token. So, I'm going to copy this, paste it in, and I should be logged in there. So, that looks good. You can use Code Rabbit with multiple AI coding tools. We can use Claude Code, OpenCode, and a bunch of others, but in my case, I'm going to use Cursor. There's two main ways that I want to show you here. So, there's using Code Rabbit by running commands directly in the terminal. And then there's using Code Rabbit with the AI agent. So, whatever tool you're using, essentially you write your commands to the agent and you tell your agent to use Code Rabbit. We're going to start by doing this the manual way. So, I have a project set up and I made some changes, and I want to make sure these changes are reviewed. So, the first thing I can do here is in my terminal, I'm just going to go ahead and use the coderabbit command. So, when we run this, we're going to get this interactive terminal. And here we get some information about our project. We can see what branch we're in, how many files were changed and modified, how many insertions and deletions we have, and so on. So, if I want to review this, I can just go ahead and hit enter. And this will kick off the review process. So, depending on the project size, that time frame is going to change a little bit. So, if we have a bigger project, the review is going to take a little bit longer. So, we're just going to go ahead and let this run through, and we're going to see the suggestions here in a second. Okay, so the review is done. And here we can see that we have some changes and suggestions. Now, it's not that many. This was a small project, and I purposely added in a bug here. And we're just going to go ahead and see what this suggested.
So, we can navigate through this by clicking on our arrow keys. So, we can go right, left, up, and down. And here we can see that it found some issues with Astro, specifically a missing param here. So, we're just going to go ahead and scroll down, see the suggestion, and what's nice here is that we actually get the suggestion. So, we get a prompt that we can pass into AI for AI to actually fix this. So, we have the option of either directly copying this into our AI agent, or we can apply the changes right away. Now, in my case, I just want to go ahead and apply those changes. So, I'm going to press A. So, it quickly made the fix, added the params here. And the nice thing is that once this is done, we can see this green circle next to that specific suggestion telling us we completed that part. So, when we have a lot of them, we can just go down the line, apply changes, and fix them as we see fit. The other way to use the Code Rabbit CLI tool is going to be directly with our AI agent. Now, this is actually really cool. So, here's how it works. I'm going to go ahead and add a feature to this project here. So, I'm going to go ahead and open up this chat window here, and I'm going to say, let's add in a login page and put all articles behind an authentication wall. So, that's my standard prompt. And now I can follow up with this and say run coderabbit review --prompt-only to ensure secure and clean code. Now, what's happening here is actually really cool, because we're telling our agent to run its normal command to build something out, but then we put in a self-verification process, and now the agent has access to Code Rabbit. It can run these commands, and we specify exactly which command to run. So, it's a self-verification process. That's really neat. If you prefer it that way, you could just check your work as you're building. So, the last step with this command is just to go ahead and hit enter. Let this go through its standard process.
And as this runs through, I'm just going to go ahead and let this skip towards the end. We're going to see where Code Rabbit actually was triggered with our agent. So, here we can see that our agent went through its standard process, crossed off all these tasks, and here as one of those steps, we can see the coderabbit review step. So there's no need to go into the details here, but if you want to review that and test it for yourself, you can actually see what that result looks like. So that's code reviews in a nutshell. Now, the beauty of this is that Code Rabbit is highly customizable, and it actually learns off of your codebase. So for example, if your team likes to use two tabs instead of four, it could actually catch that. And if a team member uses four next time, it can flag that as a warning and say, "Hey, we use this practice. Maybe you should try to do this." If your team likes to use snake case instead of camel case, it'll also catch these things and learn off of what the team is doing. If you want to be more precise, you can give Code Rabbit a set of rules and patterns to enforce on your team. So this way you have full control over what Code Rabbit enforces. So while AI code reviews are one solution to help us fix some of the AI slop and performance issues, remember there is no substitute for a human review. So use it cautiously.
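The first-pass CLI review described in the transcript can also be automated, for example as a pre-push step. Below is a hedged sketch of a small Node helper that shells out to the CLI; the coderabbit review --prompt-only invocation is the one spoken in the video, so verify the exact command and flags against the official Code Rabbit documentation before relying on them.

```typescript
import { spawnSync } from "node:child_process";

// Run a first-pass AI review before pushing. The default command and
// flag are taken from the video's spoken instructions; treat them as
// an assumption and confirm against the CLI docs.
function runFirstPassReview(
  cmd: string = "coderabbit",
  args: string[] = ["review", "--prompt-only"],
): number {
  // Inherit stdio so the CLI's interactive output streams to the terminal.
  const result = spawnSync(cmd, args, { stdio: "inherit" });
  // A missing binary yields a null status; report failure in that case
  // so the push is blocked rather than the check being silently skipped.
  return result.status ?? 1;
}
```

Wired into a git pre-push hook (e.g. exiting with this function's return value), this makes the "review before the PR goes up" discipline automatic rather than a matter of memory.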
