Rubber Duck Thursdays!

GitHub | 01:14:57 | May 8, 2026
Chapters: 8
Welcome and set up for the stream, including audience check-ins and topic asks.

Kedasha demos Rubber Duck Thursday: using Copilot CLI with GPT 5.5 and Opus 4.x to critique code, explore the change log, and preview enterprise plugins and secret scanning.

Summary

Kedasha hosts Rubber Duck Thursdays on the GitHub channel, sharing hands-on riffs with Copilot CLI, GPT models, and the new rubber duck critique agent. She toggles experimental features with slash commands, shows plan vs. autopilot modes, and demonstrates how rubber duck critiques a plan using a Claude model when a GPT model is the orchestrator (and vice versa). The stream dives into May's maintainer month, the change log, and practical updates like enterprise-managed plugins, VS Code improvements, and secret scanning with the GitHub MCP server. Kedasha also explores bringing your own keys, local vs. cloud models, and how multiple models (GPT and Claude) can be used side by side in Copilot CLI and VS Code. The chat is lively, with questions about model choice, PR reviews, and workflows, plus a live walkthrough of a PR review cycle where Copilot reviews and then updates code based on comments. The session wraps with a nudge to explore the Awesome Copilot repository and stay plugged into maintainer events.

Key Takeaways

  • Rubber duck is an experimental critique agent in Copilot CLI that uses a model from the other family (e.g., Claude Opus 4.7 reviewing a GPT 5.5 session) to review the orchestrating model's output, surfacing concrete gaps and suggested changes.
  • May is maintainer month at GitHub, with a scheduled lineup of events and a private Maintainer Community for collaboration and recognition.
  • Enterprise-managed plugins are now in public preview, enabling admins to control which agent plugins developers can use inside Copilot CLI.
  • Secret scanning with the GitHub MCP server is generally available, helping catch secrets before commits or PRs are opened.
  • Copilot CLI supports multiple models in one session (GPT and Claude alike), and you can switch between them for planning, coding, and reviewing tasks.
  • VS Code integration and Copilot improvements are ongoing, with new blog posts, model options, and workflow enhancements highlighted in the change log.
  • Bringing your own keys means your usage is billable through the GitHub models system, but you gain the flexibility of local or custom deployments.
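
The multi-model workflow in these takeaways can be sketched as a rough Copilot CLI session. This is illustrative pseudocode based on what the stream shows; the exact slash commands, model names, and output will vary by CLI version:

```
$ copilot                  # start a Copilot CLI session
> /model                   # check or switch the active model (e.g., GPT 5.5 <-> Claude Opus 4.x)
> ...plan a feature with one model, implement it...
> ...then switch models and ask for a review, so a second family checks the first's work...
```

If you bring your own key (e.g., an Anthropic key) for a custom or local deployment, that usage is billed to you through the GitHub models system, as noted above.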

Who Is This For?

Essential viewing for developers who want to adopt agentic coding workflows with Copilot CLI and GitHub Copilot, especially those exploring cross-model reviews, secret scanning, and enterprise plugin management.

Notable Quotes

"Rubber duck allows you to kind of have like an agentic reviewer for your work."
Introduction to rubber duck as a critique/review agent.
"May is maintainer month and this means we are highlighting a whole bunch of incredible maintainers this month"
Announcement of maintainer month and its purpose.
"The rubber duck is essentially for reviewing your work... critiquing the work that you did."
Clarifies rubber duck's role vs. orchestration.
"Secret scanning with the GitHub MCP server is now generally available"
GA of the secret scanning feature in the MCP server.
"I like having multiple models to use... the option of having local models in my environment"
Preference for multi-model workflows and local options.

Questions This Video Answers

  • What is the Rubber Duck feature in GitHub Copilot CLI and how does it critique plans?
  • How do I enable enterprise-managed plugins in Copilot CLI and what are the benefits?
  • Is secret scanning available for MCP server and how does it prevent leaking credentials?
  • Can I use multiple models (GPT, Claude, local) in Copilot CLI, and how do I switch between them?
  • How does the PR review flow work with Copilot code review and MCP server integration?
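
The PR review cycle covered in the stream can be sketched roughly as follows. This is an illustrative outline, not exact commands; the PR number is the one used in the demo:

```
1. Implement locally in Copilot CLI; have it push the branch and open a PR.
2. On github.com, request a review from Copilot (code review).
3. Back in Copilot CLI (autopilot mode), prompt the agent:
   > Take a look at PR 19, read the review comments, and update the code to address them.
4. The agent uses the GitHub MCP server to fetch the unresolved comments
   and applies the fixes locally.
```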
Tags: Rubber Duck (Copilot CLI), GPT 5.5, Opus 4.x, Claude, GitHub MCP server, enterprise-managed plugins, maintainer month, secret scanning, VS Code improvements, Copilot CLI plugins
Full Transcript
You know, you found Hey, hey, hey. Hey, drip. What you love? Hello world. Hello. Hello. Welcome to Rubber Duck Thursday. Welcome to Rubber Duck Thursday. I am Kedasha. So, so good to have you here today. How is it going? How's your week? I actually feel like we were just here. Hi, Amy. Welcome back to the stream. Um, welcome everybody. Let me know where you're joining from. Let me know how your week has been. Um, I can't believe it's already been a full week since we've been here last. That is insane to me. That is absolutely insane. But I hope y'all have been good. Turning the music down and switching it over. As you know, I get a little distracted by the All right. So today, today, what do we have in store for you? We have a lot of goodies in store for you, but I would love to know, what would you like to see? Um, what would you like to see in today's stream? Let me know if you have any requests here. Uh, so we can take a look at stuff. I know last week we tested out, what was it? GPT 5.5. I think we tested it out. And that was pretty good. That was pretty good. Um, so we're going to hop right into the change log. I actually haven't been on the change log, so I'm curious to see what's going on over there. But one thing I did want to tell you today is that May is maintainer month. I don't know if you knew that. I'm actually going to pop you guys into my other screen here so I can stay a little more focused on the screen. Hello. The week has flown by. It actually has flown by. It literally said bye. It literally said bye to us. So, I don't even think the typo is incorrect because the week said bye. Um, that's insane. That's insane. Um, Josiah, you are from sunny Seattle. Can you test the Chinese LLM? What Chinese LLM do you refer to? Are you talking about DeepSeek? If you are talking about DeepSeek, I haven't tried DeepSeek and I'm not prepared to install a local LLM on my personal computer.
Like, I would much rather use a separate computer to get that done. So I don't think that's something I can do today, Josiah, if you mean the DeepSeek model, but let me check something. Actually, I know there was a point where we could use open source models on, like, GitHub proper, but let me see if that capability is still here. One second. But yeah, tell me how you're doing. Tell me how you are. What's going on in your world? I would love to know. I would love to know. Yeah. What model are you talking about, Josiah? I would love to know what specific model you're talking about here. Uh, let's do this one. Yeah, I'm not seeing... Oh, here we go. Uh, auto mode, all these things. Okay. Yeah, I think that capability was removed. So, unfortunately, I can't really do that. I can't really check. I can't really, you know, install... Hold on. I think I'm seeing something. Hold on. Or maybe you mean, like, let's see which one I have here. Okay. So, let's start by going to the um let's start by going to the change log and... Okay, Josiah, you did mean DeepSeek. I thought so. I thought you meant DeepSeek. Uh, honestly, I haven't installed local models on my computer yet, and that's not something I would want to do live, just in case anything happens here. I'd much rather, you know, install these um systems on a separate device, but I did just look up something. So, I'm going to show you what we can do. Let me know what else you guys want to do. Um, let me know what you've been working on, what you've been playing around in. But let's start by going to the change log. So, I'm going to just pull that up here so we can take a look at what the team has been shipping recently. Okay. So, if you're just joining and you don't know about um Rubber Duck Thursdays, Rubber Duck Thursdays is where we go live every Thursday and we, you know, we build, answer questions, we try new technology.
Um, sometimes I'm in the CLI, sometimes we're in the IDE, sometimes we're on github.com, we're in documentation. Like, it's just an hour where we just vibe together and just learn about all the stuff that's changing so quickly. Let me share my screen here. Okay, so you should be seeing my screen. I'm going to just change this to the two duckies on the side here. All righty. So, let's take a look at the change log. So, right now I'm on the GitHub blog and it looks like there are also some really good blogs on here that we could take a look at. So, this blog says validating agentic behavior when correct isn't deterministic. That's interesting. That's something that I think is interesting. Is that interesting to you? Um, something else I wanted to highlight is that May is maintainer month, and this means we are highlighting a whole bunch of incredible maintainers this month and, you know, just, like, putting them at the forefront. So we have a ton of events planned throughout the month of May for maintainer month. So if you're a maintainer and you're not a part of the maintainer community, be sure to go ahead and join the GitHub maintainer community. It's a private community only for maintainers. And then if you want to see what's going on this month, you can go to the GitHub blog and check out this incredible um blog post about maintainer month. So that's super cool. That's super cool as well. I was trying to... Okay. Yeah. Here's the full schedule that we can take a look at really quickly. Yeah. So you can see here, of course, we have Open Source Friday, Rubber Duck Thursday, the state of open source in 2026 that's coming up on May 7th, which is today. So later on today, or maybe this past 10, 11, 12. Yeah, I think this was maybe, like, an hour earlier this happened. And then next week we have even more incredible events going on. So definitely check out the maintainer month schedule and join an event if you can. I'm going to pop this link in the chat.
Uh, maintainer month. Yeah. So definitely go to the maintainer month site to check out all the goodies that we have in store. All right, let's see what else now. Let's take a look at what the team has actually shipped. There's a library of stuff. There's a partner pack. Yeah, definitely go ahead and see what's going on on the maintainer month website. So, it's a month where we celebrate all things maintainers. Okay, so let's go back here and now let's go to the change log. Um, I did want to show this one as well, an event that's happening. Excuse me. So, let's see. Did I get to show you rubber duck last week? I don't think so. So, rubber duck is an experimental feature in um GitHub Copilot CLI, and it looks like there's been some improvements to rubber duck in GitHub Copilot. So, rubber duck is the cross-family review agent in Copilot CLI, and it's now available using a Claude-powered critic agent when your session is using a GPT model. Okay, for sessions using Claude as their orchestrator, we've upgraded the GPT model used to seek a second opinion. Okay. So, if you don't know what rubber duck is, essentially, if I go to um let me pull up the terminal here and let me just invoke Copilot. Oh, you're not seeing my screen. One second. Okay. Okay. Now you're seeing my screen. Perfect. So, let's make this a little bigger. Let's see. Let's see. All right. Okay, so let's see... I don't know what model I'm using. Okay, so I'm on GPT 5.5. You know, guys, I must say I love having access to all these models in one place. You know, I don't care what anybody else says. That's my favorite. That's my favorite. Okay. So if I do slash experimental and then do on, it's going to refresh my Copilot CLI session and it's going to allow me to turn on experimental features in Copilot CLI. Okay. So now that I have the experimental feature turned on, it's on, right? It should be slash experimental and then show. So it shows all these things.
So the experimental feature allows you to enable all these experimental um abilities that we're testing. And one of them is the rubber duck feature. And so essentially, rubber duck allows you to kind of have, like, an agentic reviewer for your work. So let's say you're using a Claude model to plan out a feature; if you enable rubber duck, it's going to use a GPT model to review Claude's work. And I think this is incredible because, you know, models can make mistakes, and they do. And having the same model review its own work, sometimes it's a little biased towards itself, and so it will miss things. And so having, like, GPT 5.5 review Opus 4.7's work is so incredible. So let me show you how this works. I don't remember if we went over this last week, but let me show you how this works. So now the experimental feature is on, and I think it's on. Let me just try it again. Slash experimental. So to turn it on, it's just slash experimental on, and it refreshes. So that tells me that something changed, you know. But I think what the change log was saying is that um there's now the inverse. So, like, if you're using a GPT model, then a Claude model will review, which is also really good. Waiting for this to load a little. So let me show you what that looks like in practice. Um, oh, come on. Oh, okay. You were just doing that. Okay. So, what's a simple thing that I can show you? All right. So, I can just do this. Let's do this. This is pretty simple. So, with GPT 5.5, huh, let's try the new one. So, I can say to create a plan. I'm going to go into plan mode. Create a plan to show skeleton loaders while chapters are generating instead of a blank space. And you know you're in plan mode when right here it says plan. You just hit tab, like shift-tab, to go into that mode. Enter. And then once I get a plan, I'm going to say, okay, critique this using the rubber duck agent. And it should invoke the rubber duck agent to review it using an Opus model.
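The slash-command flow just described can be sketched as a hypothetical session. This is paraphrased from the stream, and the exact command output will differ:

```
> /experimental on       # refresh the session with experimental features enabled
> /experimental show     # list experimental features, including rubber duck
# Shift+Tab until the prompt shows "plan" mode
> Create a plan to show skeleton loaders while chapters are generating
  instead of a blank space.
> Critique this using the rubber duck agent.
# with a GPT orchestrator, a Claude (Opus) model is invoked to critique the plan
```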
And that's essentially what rubber duck is. It's a way for you to double-check your work using agentic AI. Um, and it's just built... it's going to be built into, or, like, it's an experimental feature where we're exploring whether or not it should be built into Copilot CLI. Oh, Amy, what was your experience like with uh GPT Mini to build that landing page? Was it able to do it? Like, did you have to do a lot of iterative work? What was that like for you? All righty. So, it looks like it has a plan here. And I want to make sure to say review. Yeah. So I'm going to say, like, suggest changes, and I'm going to say review this with the rubber duck agent. I think that should do it. I don't know if I need to say, like, the rubber duck GPT um etc. agent. So it says, I'll get the rubber duck critique of the saved plan, then incorporate any concrete gaps before presenting it again. Okay, perfect. So here you see the rubber duck agent was invoked and it's using Claude Opus 4.7 to critique the plan that GPT 5.5 just gave us. Excellent. It's like having, um, you know, like when you do your work, you submit your PR, then you send it over to your teammate to review it. That's kind of what's happening here. So GPT 5.5 did the work, and we told it to send it over to essentially Opus 4.7 to review the work, just to have another eye on it, just to have another perspective on what the plan is, because sometimes there are gaps in the plan. So let's see what the rubber duck agent comes back with. Okay, so great question here, Vanira. So, is the rubber duck for orchestration? So the rubber duck is not for orchestration. Let's take a look at the change log. So the rubber duck is essentially for um reviewing your work. So it's for critiquing the work that you did. So let's say, for example, right here we just asked uh GPT 5.5 to implement a plan to, I think, to show skeleton loaders. We said to create a plan to show skeleton loaders while chapters are generating instead of a blank space.
And so that's the plan it's going to implement. And then I implemented it using GPT 5.5. And then I told it to have the rubber duck agent review and critique it. So the rubber duck agent isn't for orchestration. It's for um critiquing the work, finding gaps in the work, and then making uh concrete suggestions as to how things can be improved. And this can be used if you're planning out a feature, you're planning out um an implementation, or even if you've implemented some sort of code. Like, if you've implemented some code using your agent and then you want an initial review of that code, rubber duck is also really good, really good for doing that. So, not an orchestrator, more like a, uh, the professor of critique, more like a review agent or, as they say, a critique agent. So it's there to critique and find gaps in your work, which is really good. So what you would do is, you know, like, typically if I'm using Opus 4.7, I'll tell the model, hey, you know, feel free to um spawn, like, multiple agents to get this work done. And then it will, like, trigger maybe three to five agents to get whatever work I'm doing done. That is the orchestration, right? Where, like, Opus 4.7 is the orchestrator of those models, and then it's going to say, here is what you asked me to do. Then I can use the rubber duck agent to critique the work that the orchestrator did. So it's just that critique layer that's there. Let's see here. Sheldon says, "Yes, GitHub's new rubber duck agent capability to use a Claude model to review your pipeline powered by a GPT model is impressive." I agree, Sheldon. Like, it's found some really um really good gaps. Like, I'm curious to see what it found in the plan here. So the critique found two plan gaps worth fixing. The skeleton should remain visible through the completed handoff, and its width should match the final chapter view to avoid a jump. Awesome.
Like, I think these are two really good findings from the rubber duck agent on the plan. And so it just allows me to have that little bit more trust in the work that the agents are doing. Because, unfortunately, if you're waiting for, you know, all this AI stuff and all this agentic development stuff to go away, it's not going to go away. It's only going to get um embedded deeper and deeper into our work, and as models become more complex and are able to do even more things, it's going to get even more complicated to do this stuff. So having an agent that's able to review other agents' work is very important. And so I agree with Sheldon that it is quite an impressive experimental feature. I really like it. So, Alex said, "Is this better than using Claude Code by itself?" What do you mean? Let me expound on that a bit so I understand exactly where you're coming from. But I would say, like, you know, with Claude Code... I've used Claude Code, and the only thing I would say with Claude Code is that I only have one model to work with, and I've gotten so used to working with Copilot CLI, where I can literally just, you know, vacillate between models. I can vacillate between, you know, GPT models and um Claude models right here in my terminal. And then I can also add local models to my environment as well to work with. And so I really like that flexibility of going back and forth between models, because different models are good for different things. You know what I mean? Like, I don't always want to use Opus 4.5 or 4.6 or 4.7 to do something. I don't always want to use a GPT model for doing something. I may want to also have the ability to have one model check the work of another model. And in Copilot CLI I'm able to do that. So, you know, saying "better" is an interesting way of phrasing things. I wouldn't say it's better.
I would say I like having the option of having uh multiple models, including local models, in my environment that I can work with in one interface at the same time. And, you know, this ability is also available in VS Code. Like, if you're not a terminal developer... where's my mouse? So, okay, where's VS Code, you guys? I'm struggling here. One second. Let me make this big. Okay, so, it looks like I need to update here. But if you're not a terminal developer, you can also use multiple models in VS Code. You know what I mean? Other models. You see a whole bunch of stuff here. And you can also bring your own models. You can bring your own keys, bring your own models, and, you know, create custom models that you can use. Not custom models, custom agents that you can use in VS Code. So hopefully that answers the question a little, Alex. I like having multiple models to use. All right, let's see. Hello, Rubber Duck. Hello, Jack. Welcome to the stream. Are you paying separately on models for invoking via API? Yes, if you bring your own keys. Like, let's say you want to add um your own Anthropic key to your environment. If you bring your own key here, you will be paying for that. Yes. But you're able to use it in the Copilot environment. Yes. If you bring your own key, yes, you will be paying for that. LLM as a judge will fundamentally change the way we build our workflow pipelines. It's, like, literally changing so quickly, Sheldon. It's changing so quickly. Oh, welcome back, Mora. So, last week Mora had an issue where she couldn't find any documentation on bringing your own keys to agentic workflows like um GitHub Actions, and she said she opened a GitHub issue, and the team is already on it and some fixes are already live, and she's hyped at how fast it's moving. They're just polishing up the last few bits now. I am so happy that that's happening for you, and thanks for coming back to the stream. All right. So, okay.
So, that is um the uh experimental rubber duck feature. I hope you'll try it, you know. Like, I hope you find it interesting like myself and Sheldon do. Um, okay. So, let's go back to the change log to see what else the team has been shipping this month, because this was the first one. Uh, we have some repository ruleset improvements. Enterprise-managed plugins in Copilot CLI are now in public preview. Okay, that's pretty cool. So if you have a GitHub Copilot CLI subscription through your company, now your administrator can allow you to use enterprise-managed plugins. And plugins are essentially those cute little things that enable your agent to do uh even more advanced things, right? So if I go to a repository like Awesome Copilot, this gives us a whole suite of agents, skills, configurations, and plugins that we can enable inside of Copilot CLI, right? And so if I look at plugins, plugins are um curated bundles of agents and skills for a specific workflow. So let's take a look at what the plugins are here. So we have an AI team orchestration plugin. Let's look at one that I can actually talk about. Okay, there's a context engineering plugin, and this is essentially tools and techniques for maximizing GitHub Copilot effectiveness through better context management. Okay, so it includes guides for structuring code, an agent for planning multi-file changes, and prompts for context-aware development. So, this one context engineering plugin includes all these different skills, and that's what I like about plugins, because it includes an agent and then it has skills in here. There's a Copilot SDK plugin, a whole bunch of stuff. If you haven't checked out the Awesome Copilot repository, I'm going to pop it in the chat so you can explore it. Someone says, how can I connect with you?
Connect with me on LinkedIn. I am Kedasha Kerr on LinkedIn. Um, and you also see me on GitHub's pages all the time. Okay. Yeah. So, go to this repo here. Maybe I should add it as, like, a short linky here. Let's see if I have... Yeah, I don't think I have it here. So, let me just show here. Check out um agent goodies here. That's what I'm going to say. All right. So, if you go here, the GitHub Awesome Copilot repository has a lot of goodies for you to use with your agents. And so now, with an enterprise being able to, you know, manage the plugins that engineers are able to use um within GitHub Copilot, that's incredible, because now as a developer at a, you know, big enterprise, you're able to use plugins without worrying about using community-supported plugins that may or may not have things hidden underneath, is what I'll say. So I think that's pretty cool. Let's see what else. Some updates to VS Code and GitHub Copilot in VS Code. Search updates here. Secret scanning with the GitHub MCP server is now generally available. What? So, secret scanning with the GitHub MCP server is now generally available. I love that. So, now you're able to scan for your secrets using the GitHub MCP server. So, run secret scanning. This is how it would appear in the CLI: secret detected, do not commit. Um, GitHub personal access token, confidence high. Love that. So, GitHub secret scanning in the GitHub MCP server is now generally available. When you use an MCP-compatible AI coding agent or IDE, you can scan your code for exposed secrets before you commit or open a pull request, so leaked credentials don't make it into your repository in the first place. And so, what I love about this is this could literally be a skill, or even a specific instruction that you put into your coding agent.
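As a sketch of that idea, a custom instruction for the coding agent might look like the following. The file name and wording here are hypothetical, not an official format:

```
# copilot-instructions (illustrative)
Before creating any commit or opening a pull request:
1. Run secret scanning on all changed files via the GitHub MCP server.
2. If any secret is detected (e.g., a GitHub personal access token),
   stop, report the finding, and do not commit until it is removed.
```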
So, like, every time you do some work, you know, it automatically checks for secrets in your um code before making that commit for you and before pushing the PR for you to GitHub. So that's awesome, you know. That's awesome because we all sometimes do it. We all sometimes mistakenly put secrets in the wrong place or forget to move them, cuz we're like, "Oh, I'm just going to rest this here real quick, test this out, and then I'm going to move it," and then we forget to move it, and secrets are still being pushed to GitHub at an alarming rate. So, I love that the team has enabled secret scanning with the GitHub MCP server. And if you're not using the GitHub MCP server, what are you even doing? Um, first of all, the GitHub MCP server comes along with Copilot CLI. And I personally love using the GitHub MCP server. I use it a lot to, um, like, bulk create issues for me whenever I'm uh working on a huge um product, and it's incredible. Like, one of the recent workflows I've been doing is, um, like, I'll have Copilot code review a PR on GitHub, and you know how when you do that, Copilot code review leaves these... actually, let me show you. I can show you better than I can tell you, so you can try it for yourself. Let me hide this banner. Oh wow, lots of chatter happening. Um, I will be in the chat soon. One second. But I want to show you something that I've been using the GitHub MCP server to do, because it's really, really good and really powerful. Enjoy some music as I, you know, find a PR to show you this. I'm going to play dance pop. Enjoy some music. Be right back. Hold on. Hold on. Hold on. I just want to find a PR for you here. All right. All right. All right. We are back. We are back. But let me send you the link probably here. Okay. Thank you for waiting. Thank you for waiting. Okay. I see you're all still here. Perfect. The GitHub MCP server is here. It's just the GitHub MCP server.
If you look that up on the Googles, the internet... Are you still googling, by the way? When was the last time you Googled? I googled this morning, so I still Google sometimes, but are you still googling? I'm curious. Um, okay. So, the GitHub MCP server is automagically in Copilot CLI. And I want to show you one of the best things I've been doing with the GitHub MCP server, because, you know, like, sometimes it's not just about the code. Sometimes it's managing everything else that surrounds the build. You know what I mean? Like PRs and code reviews and um dependency upgrades, like keeping dependencies upgraded. Um, recently GitHub actually made it possible for Copilot to be able to um do Dependabot updates, and I just think that's incredible, because if you know anything about Dependabot, it is dependable, okay? Like, it can create a lot of noise in repos. And so being able to have your agents review the updates, I think that's incredible. But that's not what we're talking about. What I want to show you is how I use the GitHub MCP server in Copilot CLI to help me with PR reviews. Because, I don't know about you, but agentic development and agentic engineering is very fast, and it's really hard to keep up with the pace of, like, the implementations, right? So what I've been doing is, I'll have Copilot CLI implement something locally. I'll, like, run my security check using a custom security agent. I have a skill, you know, like, do checks and updates and all that jazz. And then I'll have Copilot CLI push all the work to GitHub. Once it pushes it, you know, it opens up a PR, it does a description, all that jazz, right? Then I'll come over here and I'll uh use Copilot as my code reviewer. So I'll request the review just by clicking the request button here.
It's going to take a few minutes to run this review, but once the review is done, then I have all these comments that I need to fix before I can, you know, merge the PR, or before I can even send the PR for review from somebody else on my team, right? And so that was a big bottleneck for me, because I was like, this is taking entirely too long. So, what I've been doing lately is, once Copilot finishes its review on a PR, I go back to my agent, like, I go back to Copilot CLI, and I say, "Hey, take a look at PR 19, look at all the comments, and update everything based on the comments. Like, address the comments. Like, fix it." And it does it, because it's able to grab those comments in the PR and immediately implement the fixes based on the comments. And I think that's so good. I think that's so good. So, I want to show you what that flow looks like here. So, I'm just waiting for Copilot to finish its review. And while I wait, I will go to the comments. I'll go to the comments. Let's see. Let's see. Okay. So, Bruno says, cuz I asked, "Do you still Google?", he says, "Of course I still Google, but I'm no longer on Stack Overflow." I mean, maybe that's the real question. Do you still use Stack Overflow? I think I used Stack Overflow the other day for a very niche DNS issue I was having. Um, and my agent just... it was just going around in circles, let me tell you. And turns out I just forgot to add my MX records for the email to work. You know, like, whenever you're transferring your DNS... like, I transferred my DNS from wherever my domain was purchased to Netlify, who I deploy on, and I forgot to add my MX records, so my emails weren't working. Okay. And I was asking the agent what is going on. Anyway, Stack Overflow is the place that helped me. So that was a recent experience, but apart from that recent experience, I really haven't been on Stack Overflow that much. How about you? Okay, let's see what else here. How long is the stream? It's cool in here.
Oh, Alex, I'm happy you're having a good time. I'm having a blast, if you can't tell. So, usually we stream for about an hour, sometimes an hour and a half, um, when we're having too much fun. And the stream happens every Thursday at 1:00 p.m. Central. There's also a stream early in the morning um in the EU time zone. And there's also another stream in Spanish. I think it's around 11:00 a.m. Central time. Um, definitely, you know, follow GitHub on LinkedIn, YouTube, and we have, like, everything scheduled there um for all the streams that we have. But yeah, it's actually pretty cool. Just chill. Just chat. So, secrets in GitHub reminds me of finding people's bank information on LimeWire in the early 2000s. Whoa. What? That is insane. Actually, people's banking information was just chilling on LimeWire in the early 2000s. What a time. I feel like we're in another one of those times, right, where everything is so new, so fresh, growing so fast, moving so rapidly, and all these big mistakes are going to happen. That is insane, Brian. Thank you for sharing. Great topic. Awesome. I'm glad you're enjoying the stream. Secret scanning absolutely needed. Absolutely needed. Brian, scan for other people's secrets, listen. Oh, this is interesting. So, I'm especially interested in how agent workflows can become reproducible and independently verifiable, not just logged after execution. That's the direction I'm exploring with Digi EMU Core. That sounds like a plug, Brian. But what is Digi EMU Core? Tell us more. Let me see if Copilot is finished here with the review. That's why I'm chatting. I'm just waiting for the review to be done. And she's not done. Let's see where it is. I can click in here, view session, to see the progress on the review. And as you see, it's doing a full and thorough review. It's using the GitHub MCP server. It's using the Playwright MCP server to do a full review, and it's reading the code.
I actually love that you can see all the work here. It's searching through the files. So, it's actually doing a full review on this work, and it will take some time, right? It won't take as long as it takes you and me to do a review, but it does take a few minutes; I think up to 3 to 5 minutes for Copilot to do the review. And it looks like we have a review. Okay, let me check the comments one last time, and then I'll go back to this. Amy says she's still googling. Is it code review? Yes, that's what we're doing right now. Amy, I'm actually happy to know you're still googling. I still Google as well, but you know that Stack Overflow question is the real deal. Wow. Jay Tyler says Stack Overflow stats have dropped to almost zero. That is really sad to me. It's literally insane how rapidly things are changing, and it's insane how much agentic engineering has transformed software engineering over the past three to four years. It feels like overnight, because it's only about the past year or two where it's just been... All right. So, I'm going to keep going here. I want to show you. Yeah, it's definitely sad. I so agree. Sheldon, you pose an interesting question; I'll go to it after this, because that would be a really good workflow, actually. Okay. So here you see that Copilot code review has reviewed our code. It did a whole thingy here, its findings, and now we have specific comments on very specific sections of the code. One comment, two... oh, this one is two comments. I'm used to seeing five, six, seven, eight comments, and it's like eight different things to address. But I want to show you what I'll do next. So, usually, because there are so many comments, I get a little overwhelmed. And so, I go to here and I'll say, "Hey, let me get out of plan."
Let me go to autopilot mode. Can you take a look at (not an issue, but) PR number 19? And you see that it comes up here, because the GitHub MCP server is here. And look at the comments and update the code as needed. Something else to note is that many times in Copilot code review, it will make a suggestion on what you should do here, and if you look at the code and you're just like, "Oh yeah, that's actually a really good catch," you can commit the suggestion right here on github.com. You don't have to do it the way I do it; I just think the way I do it is cool, so I'm going to show you anyway. But that's just something that you can do. And also, you can hit this "Fix with Copilot" button and assign it to the Copilot agent on github.com, and you can choose any of these models to assign that work to, to address the code review. Right? So there are a few ways that you can do this, but I want to show you how I do it. So this is what I said: can you take a look at PR number 19, look at the comments, and update the code as needed. And now it's going to go through and use the GitHub MCP server to take a look at those comments in the PR and make changes. Fingers crossed that it works. You know, sometimes demos don't demo. Okay. But here you'll see: "I'll inspect the PR discussion and changed files and apply the requested fixes locally." So now it's using the MCP server. You see my name here, the repo, and the pull request number. And now it's going into the comments. So it has two unresolved comments, both in auth.ts, which is correct; that's what we saw Copilot code review said. So I'm curious, have you tried this type of review thing before? What are you using for your code reviews? I'm curious. "So, is it possible to get the Copilot name in GitHub itself while trying to prompt for code review?" Can you expound on that?
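Under the hood, "look at the comments on PR 19" maps onto GitHub's public REST API: there's a "list review comments on a pull request" endpoint that an agent (via the GitHub MCP server or otherwise) can call. A minimal sketch of building that call yourself; the owner, repo, and token here are hypothetical placeholders, not the ones from the stream:

```python
import json
from urllib.request import Request

def review_comments_request(owner: str, repo: str, pr_number: int, token: str) -> Request:
    """Build (but don't send) an authenticated request for a PR's review comments."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/comments"
    return Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",  # GitHub's recommended media type
    })

req = review_comments_request("octocat", "demo-app", 19, "ghp_example_token")
print(req.full_url)
# https://api.github.com/repos/octocat/demo-app/pulls/19/comments
```

Sending the request (e.g. with `urllib.request.urlopen`) returns a JSON array of comments, each with the file path, diff position, and comment body, which is exactly the context an agent needs to implement the requested fixes.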
I don't think I'm really understanding your question. Can you ask me in a different way? I'm curious about this question, but can you ask me in a different way, please? If anyone else understands this better than I do, let me know what it is and I can try to answer it. Yeah, perfect. Are you saying perfect to the review flow that I'm showing here? What are you saying perfect to? Ah, "I'm currently doing a code review. This is great work by Copilot." I'm telling you, it's been the biggest unblock for me, because the models, the agents, they code too fast. I remember sometime this week, I think I was working with GPT 5.5, and I was doing some sort of refactoring on a payment system in this app I'm building. I laid everything out in an issue, I had some additional instructions, and I told GPT 5.5 in Copilot CLI: hey, look at issue number 20, and here's some additional context, some additional information. I'll be right back, but go forth and build, and, you know, deploy a fleet of agents to do the work. I kid you not, in less than five minutes the work was done. I didn't even leave to go do what I wanted to do yet. And there were just so many changes to review. I sat there and I was like, this is insane. It's incredible, but it's insane that this is where we are in 2026 with LLMs and agentic coding. It's absolutely insane. But I was able to use this flow to get the code review done pretty quickly before sending it over for review by an actual human. That's my long-winded way of saying I'm glad you think this is valuable, Bruno. I really am. Okay. "So which part is GPT and which part is using Copilot?" Great question, Purnima. So, Copilot CLI is an agent coder. Simple as that.
So, Copilot CLI is an agent coder, and GitHub Copilot is an agent coder that can be used in multiple ways. You can use it in your terminal with Copilot CLI, or in your IDE, here in VS Code, or any other IDE where GitHub Copilot is available. And then within GitHub Copilot, you have access to multiple models. So in here, you see in VS Code we have auto mode, we have Opus 4.6, we have GPT 5.4, and there are other models here that you can add. And also, oh, this is working here. Let's open up another. If I do /models, you can see I have access to many models, between GPT models and Claude models, right? And so when you say which part is GPT and which part is using Copilot, I would say Copilot allows you to use multiple models as your coding agent. So within GitHub Copilot, within Copilot CLI, you can choose whatever model you want to use and go forth and build. So I wouldn't say which part is X and which part is Y; I'd say it's both. They both work together. Hopefully that made sense. Okay, let's see. Did Copilot finish the stuff? Let's see. Still working. So, let me keep going through the comments. I'm just going to skip over that one. Oh, "What's the hotkey for switching to autopilot mode?" Great question. So, I'm going to remove this comment from the screen so I can show you in my terminal here. Let's go here. Okay. So, what's the hotkey for switching to autopilot mode? On your computer, you press Shift+Tab. Shift+Tab lets you switch between plan mode, autopilot mode, and regular mode, where you can add files here. But yeah, Shift+Tab is the hotkey you can use to switch between modes. Great question. "So instead of using an IDE, if I want to do code review from GitHub browser mode..." So I think Sonali is following up from her question earlier.
So, "Is it possible to get the Copilot name in GitHub itself while trying to prompt for code review, instead of using any IDE? If a user wants to do code review from GitHub browser mode, do they get the option to select a model to start?" Oh, are you asking if you're able to select a model to do the code review? Is that the question, Sonali? Are you asking if you're able to select a model on github.com to do the code review for you? That's what I just understood. You guys have a lot of questions today. Sonali, waiting on you. Okay. If your question is, can you choose a model to do the code review for you: I don't think so. I don't know why I'm talking like that. I don't think that's an option right now. Let's take a look at Copilot code review. And just so you know, you can enable automatic reviews on all PRs, like whenever someone submits a PR, it automatically gets reviewed by Copilot. That's something that you can enable. But to answer Sonali's question: no, you cannot choose the model to use with Copilot code review. It's here, you click the button, it reviews it. If that's the question you're asking, no, you cannot select the model for Copilot to use to review the PR. Okay. So let's see. I think Copilot is finished with the updates here. So, it committed the fixes to the PR work tree and pushed them to the PR. I will say, there's a feature in Copilot CLI where if you highlight any text in the CLI, it automatically copies it for you. Sometimes I like it and sometimes I'm not into it, because sometimes, to read, I have to highlight stuff to see it better, but then Copilot copies the entire thing, you know.
Anyway, so it committed the fixes and pushed them to the PR. Looks like it had some issues adding an issue comment because of my GitHub auth. I always have issues with GitHub auth, because I have an OAuth token and I'm also doing authentication through SSO. So that's a different issue. But it looks like it made the changes and pushed them, right, and it tells us exactly what it did. It was attempting to, you know, address the actual comments, but the MCP server request failed due to my authentication. But here you can see it pushed the change, and now I can mark it as resolved. You know what I mean? And if I want to, I can request another review from Copilot to look at the code again. But that's the flow I've been doing for PR reviews, and it's been working out. Hopefully more of you find it interesting; I know at least one person said it was good for them. You guys, it's 2:06. What? An hour flew by so quickly. Let's see what other questions there are. "Why should I pay when I can get a local LLM?" That's a great question. Local LLMs are good. A lot of models are also hosted on GitHub, and you can play with them if you look up GitHub Models. Oh yeah, I looked this up for the first person who said that they wanted me to use DeepSeek. So you can actually use DeepSeek with GitHub Models. So if I do "try models in the playground," I can select... yeah, I can select DeepSeek R1 or V3 here and use it right here. So, local LLMs are great; I have no issues with local LLMs. And there is an AI coding agent called opencode, and you can use your GitHub Copilot subscription in opencode, and you can also use local models in opencode. So it's all a matter of preference. Oh, for GitHub Models, it's through the Marketplace. Kedasha, automagic. I know, I say automagic all the time. Automagically.
Automagically. I love saying it because, honestly, it does feel like automagic. You know what I mean? I know this stuff feels very normal to us because we're in it every single day. Me, I am in agentic coding every single day, and so for me it feels normal. Oh my god, it's so great. But it's not normal. All this stuff is not normal. It's such a new way of thinking, a new way of working, a new way of interacting with code, and it's very interesting that most of the population isn't using models and agentic tools the way that we are. It's insane to think about. And so I just think a lot of this stuff is magic, and that's why I started saying "automagically." It's definitely a word I use across my brands. But guys, I think I'm going to wrap up here. Let me know in the comments right now if you found today's session helpful, because it's probably that time. I had a great time with you today. I'm happy that we went over so many different things. Be sure to connect with me on LinkedIn; I am on LinkedIn as Kedasha Kerr. "So, what does it do in comparison to Claude Code and Codex from OpenAI?" Darius, that is a huge question, and it feels like it has to be an episode on its own: Copilot CLI versus Claude Code versus Codex. It also sounds like a YouTube video. Who knows? Maybe I'll do that YouTube video. Let me write it down right now. Hold on, because I don't want to forget. Who would watch that video? I feel like I would watch that video. Copilot CLI versus Claude Code versus Codex: differences, etc. That's a great question. But Darius, to do that, I would have to, you know, plan something to give to all the coding agents and then have them go side by side to see how they all do it.
That's a huge question, but it's a great question. All right guys, those are all the comments I can go through. Oh, let's see. "Thank you. Got to drop. Bye." Tyler, thank you so much for joining today. "Do we need to pay to access the models?" You mean for GitHub Models? Don't quote me on this; let's do a quick Google search before I wrap up here. I'm not going to say anything. Is GitHub Models free? GitHub Models billing. Okay. So there is free usage for GitHub Models, so you can go forth and explore. But for organizations, I think there is billing. So go to the docs, look up GitHub Models billing, and read all about it. "It's pretty magical." I agree. Thank you. "Good job. Thanks. Let's do a collab sometime." Send me a DM, Brian. Send me a DM on LinkedIn. It does take me a while to respond on LinkedIn, I'm not going to lie to you; I get a lot of DMs. But eventually I do comb through them all. Thank you, Chad. I'm glad you liked it. "Yes, nobody I know can understand what I do. Thanks for the session. I just learned something new again. Thank you." You're welcome. I'm happy that you learn every time you come here. That is always the goal. "It was very helpful. Thank you." All the emojis. Okay, you guys, I'm gonna go. Uh, Sirly, you know what? Sirly, the person who does it is me. The person who does it is me and my team. So, I'm taking a picture of this right now so that it's on my phone, so that when I get off, I can look at this, because you've asked me for this so many times and I'm so sorry. This is to enable the captions, right? I even think someone asked about captions on YouTube as well. So, I'm going to go to the team. All right. "So, I automagically support agentic coding, LLM-as-judge, and Rubber Duck Thursdays." Yay, Sheldon, I got you to say automagically. Go forth and explore.
Go forth and learn. All righty, guys, I'm gonna go. I'm so happy you enjoyed today's session. "Can we make whole apps like other AI on GitHub Copilot?" Abdul, yes. I literally held a workshop on Sunday with about 20 engineers, and I built a full product in two hours using Copilot CLI: you know those voice apps where you can record your voice, get a transcription, hear it back, and copy it? It was incredible. They learned so much. I had a lot of fun. So make sure you're following me on LinkedIn to see all that stuff. Okay. But no, not captions. Oh, what is it? "It allows Turbo subscribers to rewind the stream while watching." Oh, okay. I'll ask the team about it. "Oh, it would be helpful if we went over how to start and set up agentic development for someone who knows how to code but is a total beginner with agents, and explain what project it actually..." I love this. Thank you for the suggestion. I'm going to take a picture so I don't forget this either, and I'm going to send it to the team, because, as Claude would say, you are absolutely right. That would be so helpful. It's almost like agentic engineering for beginners, right? And because we're saying agentic engineering, we'd be attracting software engineers and software developers who've been coding for years but are new to this new landscape of agentic engineering. I love that. Definitely going to send that to the team. All righty. Thank you all so much for joining. I'm so happy that you all learned something and thought it was a great session. I am Kedasha. Thank you so much for being here, and I hope you have a really good day. This has been Rubber Duck Thursdays. Bye.
