How Senior Engineers Actually Build With AI in 2026 | Build a Full Stack Systems Architecture App
Chapters: 17
Describes a future where senior developers design systems and let AI implement them, introducing Ghost AI as a real-time collaborative tool that maps plain-English system descriptions to a production specification and code. It emphasizes spec-driven architecture, the six-file context system, and the goal of teaching how senior engineers think and work with AI.
Senior engineers are now designing with AI by their side, using a six-file spec system and agentic workflows to ship full-stack apps faster than ever.
Summary
JavaScript Mastery's latest deep-dive introduces Ghost AI, a real-time, AI-driven full-stack architecture builder. Chris, the creator, argues that by 2026 most seniors aren't typing all the code; they're outlining systems and letting AI implement them. The video walks through Ghost AI's stack (Next.js, React, Liveblocks, Clerk, Prisma with Postgres, Vercel Blob) and a six-file context kit that keeps AI from drifting: project overview, architecture, code standards, AI workflow rules, UI context, and a progress tracker. Crucially, the approach is spec-driven: every feature is planned with a detailed spec before coding, then executed by an AI agent reading the six context files. The tutorial demonstrates prompts and workflow, from an initial clean Next.js app to a fully interactive, real-time canvas with multiplayer AI agents, background jobs via Trigger.dev, and automated spec generation. Along the way, Chris emphasizes design docs, architecture-first thinking, and the discipline of keeping scope tight with in-scope/out-of-scope definitions. The video also showcases practical tips: how to test and review AI output (CodeRabbit and its VS Code extension), how to manage memory with context files, and how to deploy production-ready infrastructure. By the end, viewers see how one developer can build a production-grade app with AI assistance, and they're given a free six-file context template and a brief roadmap to the related spec-driven agentic development course.
Key Takeaways
- Ghost AI uses a six-file context system to align AI agents with a project’s architecture before coding begins (Project Overview, Architecture, Code Standards, AI Workflow Rules, UI Context, Progress Tracker).
- Spec-driven prompts replace vague wishes with concrete unit specs; each feature is broken into a standalone spec that the agent executes in a controlled, testable way.
- The stack combines Next.js with Liveblocks, React Flow, Clerk for auth, Trigger.dev for durable background tasks, and Vercel Blob for storage to enable real-time collaboration and long-running AI work.
- A workflow of planning-AI prompts, validating output with a progress tracker, and bookending work with PR reviews (CodeRabbit) helps prevent drift and maintain code quality when using AI.
- The video demonstrates end-to-end from a fresh Next.js app to a fully functioning Ghost AI canvas with real-time collaboration, authentication, database-backed models (Prisma/Postgres), and AI-generated design specs.
Who Is This For?
Essential viewing for senior frontend engineers, system designers, and AI-assisted architects who want to ship robust, maintainable software foundations with AI as a partner, not a replacement.
Notable Quotes
"It's 2026 and most of the senior engineers I know aren't really writing code anymore. They designed the systems and let AI handle the implementation."
—Sets up the premise that AI is shifting how senior engineers work.
"The six-file context system turns an AI agent from a guesser into a developer who already knows your codebase."
—Introduces the core framework for stabilizing AI output.
"Specdriven development keeps the thinking with you and gives the agent a system to execute against."
—Distinguishes spec-driven approach from vibe coding.
"You write the spec, you give it to the agent, and the agent ships unit by unit with a progress tracker."
—Outlines the end-to-end workflow that turns AI into the implementer.
"The clearer your understanding of what you're building, the better the AI output."
—Justifies the architecture-first mindset.
Questions This Video Answers
- How can I build production-grade apps with AI without losing control over architecture?
- What is the six-file context system and how does it prevent AI drift?
- What tools (Clerk, Liveblocks, Trigger.dev) are used to create a real-time collaborative AI-driven app?
- What does spec-driven development look like in practice for a full-stack project?
- How do you deploy a complex AI-assisted app to production with Prisma and Postgres?
Full Transcript
It's 2026 and most of the senior engineers I know aren't really writing code anymore. They design the systems and let AI handle the implementation. And the gap between developers who can do that and developers who can't is dividing the industry right now. This app is what that looks like when you do it well: real-time multiplayer SaaS, AI agents running in the background, full production code, and I didn't write a single line of it. An agent built the whole thing. And by the end of this video, you'll have built it too. The same app, the same stack, the same way I did.
And here's what makes this different from other AI build tutorials. I built this app using the exact methodology the app itself is designed to teach: specs first, architecture defined, every feature planned before we start building. I've been doing this architecture work by hand before every serious project for years. And at some point it hit me: I'm a developer. Why am I doing all of this manually when I could build the tool that does it with me? So I did. This is Ghost AI, a real-time collaborative workspace where you describe a system in plain English and an AI agent maps it onto a shared canvas live.
Your team edits the design together and when it's ready, the app generates a complete technical specification you can build from. Now, if you've tried building anything serious like this with AI, you already know the wall. The first few hours feel incredible, and then a week later, the agent has forgotten every decision you've made. One new feature breaks three others, and the codebase you were excited about just starts fighting you. That's not an AI problem. It's an architecture problem. And it's the same wall all of you are hitting in your careers where the senior dev advice online sounds great as long as you're already senior.
The developers who win in this market aren't avoiding AI, and they're not handing everything over to it either. They're learning to think like senior engineers and then using AI to build at the speed of one. And that's what this video teaches. By the end, you'll know how to design a system before writing any code, how to use the six-file context system I write before every project, and how a senior engineer actually thinks when working with AI. Oh, and that six-file context system I just mentioned: I've packaged it into a free guide you can grab and use on any of your upcoming projects, not just this one.
The link is down in the description. Next, the stack: Next.js, React 19, Liveblocks for real-time collaboration with agents, Trigger.dev for background work with AI agents, Clerk for auth and user management, Prisma and Postgres for data, and Vercel Blob for storage. All production grade and deployed by the end of the video. So, one person with the right system can now build what used to take a team. And by the end of the next few hours, that person is you. So, let's build it. Before we open up a single tool, I want to spend the next 10 minutes on the part of the video that I think is most important and that decides whether what you're building ships or falls apart in the third week.
There's going to be no syntax and no code in this section. Rather, what I'm teaching you is the way that I think before I write a single prompt, the way I plan a build before I touch the agent, and the system that I use to keep AI from drifting halfway through a project, so that by the end you'll know the conversations to have before you build, the six-file context system that turns an AI agent from a guesser into a developer who already knows your codebase, and how to break any project into units the agent can ship cleanly, one at a time.
So, if you've been frustrated by AI breaking your code, then this is the video that fixes it. The intro touched on something I want to sit with for a minute. You heard me say senior engineers a lot. And if you're early in your career, that framing probably hit a nerve. So, let me be direct about what I mean. The job market for developers right now is harder than it was two years ago. Some entry-level work has been automated. Clients who used to hire freelancers for straightforward projects are now doing it themselves. And a lot of people who learn to prompt without learning how to think are now flooding the market.
So if you're worried about that, you're not wrong to be worried. But the developers getting squeezed aren't the ones who learned deeply. They're the ones who learned just enough to execute without understanding the system behind it. That was always a fragile place to be, and AI just made that fragility visible faster. So the way through is to learn the kind of thinking that AI cannot replace, the architectural thinking, the systems-level judgment, and then use AI to build at the speed of someone with twice your experience. The clearer your understanding of what you're building, the better the AI output.
Which means that learning properly isn't a waste of time in the AI era. It's the investment that makes everything else possible. So when I say things like think like a senior engineer in this video, I don't mean you need 10 years of experience to apply this. These are learnable habits, and you can start building them today on this project. And here's something most developers outside of big tech don't really know. At Google, Amazon, and Netflix, before any serious project starts, engineers spend weeks, sometimes months, writing documents: design docs, one-pagers, RFCs. The format changes by company, but the principle is the same.
Figure out what you're building before you build it. And senior engineers at these companies sometimes go months without writing production code. They're designing systems, making architectural decisions, reviewing what other engineers ship. So software engineering has never really been about typing the most lines per day. It's always been about thinking clearly about what should exist before you build it. And AI didn't invent that discipline. It just made it the most important skill in the room. And that's the whole foundation for the spec-driven agentic development course that I've been developing for a long time now. So if you want to stay up to date on how that's going and receive an occasional email where I share my thoughts on the current state of the industry, I'll leave the link down below so you can subscribe to the newsletter.
But let me immediately make it concrete in this video. Here are two different prompts that two different developers might write to build the same feature. The first one says, "Build me a SaaS app with authentication and a real-time canvas." And the second one says, "I'm adding a Liveblocks room provider to the workspace route. Auth is already handled by Clerk middleware. The canvas uses React Flow. Room tokens should be issued only after verifying project membership. Wire the provider into the existing workspace layout without touching the sidebar or navbar." Both developers want roughly the same thing, but the second prompt reveals a developer who knows their auth layer, understands their component boundaries, and knows what should and shouldn't be touched.
They've thought about the system, so they're communicating decisions and not wishes. So, the AI isn't smarter when it reads the second prompt. The developer is. And that is the difference between vibe coding and what I'm going to teach you, which is spec-driven development, which is also the premise behind that spec-driven agentic development course that I'm actively developing. Vibe coding focuses on the outcome. You describe what you want, let the agent run, and react to whatever comes out. For a weekend prototype, that's fine, but for anything you're going to maintain, it just collapses. New features break the old ones.
The codebase starts contradicting itself, and you spend more time untangling AI mistakes than actually building. Spec-driven development keeps the thinking with you and gives the agent a system to execute against. You stay the architect, and the agent becomes the implementation engine. So how do you actually build that system? Because that's the part that nobody really teaches. It starts before you open up any AI coding tool, with a conversation. When I get an idea for something I want to build, I open up a planning AI, ChatGPT, Claude, or Gemini, whichever is on hand, and I talk through it.
What does this thing actually do? Who uses it? What are the core flows? Where are the complex patterns? And what could go wrong? I push back on the answers and let AI pressure-test my thinking until the system becomes clear in my head. This conversation is the work. It's what senior engineers do before they build, except they usually do it in their head or on a whiteboard; doing it with AI externalizes it and makes it faster. When the system is clear, you write it down. And that's where the six-file context system comes from.
Not from sitting at a blank page trying to write documentation, but from taking the output of the architectural conversation and organizing it into documents that travel with the project for its entire life. For Ghost AI, I organized everything into one folder called context, six files. And what matters is that before your AI agent writes anything, it already knows what you're building, how it fits together, what the rules are, and where things stand right now. That's what you're going to learn about in this video. But very quickly, here's what each one of them does at a glance.
The project overview covers what the product is, who it's for, the core flows, and what's deliberately out of scope. The architecture file defines the tech stack, the boundaries between layers, and the invariants the codebase must never break. The code standards keep the agent consistent across every unit of the build with shared TypeScript and Next.js conventions. The AI workflow rules keep the agent disciplined, defining how to scope work and what to do when something needs a decision. The UI context holds the design tokens and component conventions so the UI stays coherent across every page the agent ships.
And the progress tracker, which is the one most developers skip and most need, holds the current phase, what's in progress, what's complete, and the architectural decisions made along the way. It's the only file that updates constantly throughout the build, and it's how the agent picks up exactly where you left off in a single prompt. These six files are the difference between an AI agent that drifts and one that executes.
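Concretely, the folder that lands in the repo root looks something like this (file names here are illustrative; what matters is the six roles):

```
context/
├── project-overview.md    # what the product is, core flows, in/out of scope
├── architecture.md        # tech stack, layer boundaries, invariants
├── code-standards.md      # shared TypeScript and Next.js conventions
├── ai-workflow-rules.md   # how the agent scopes work and handles decisions
├── ui-context.md          # design tokens and component conventions
└── progress-tracker.md    # current phase, in progress, complete, decisions
```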
And you don't have to build them from scratch for every project. I've put together a free blank template of all six files with step-by-step instructions on how I generate them using AI for whichever project I'm working on. It's not specific to Ghost AI; it works for any application. Click the link down in the description to get it so you can apply this methodology to your own projects starting today. We'll open up each one of these and I'll walk you through what's actually inside in the next lesson when we set up the project architecture for real. But yeah, once these six files exist, the build runs in units. We're going to break down the project into specific scoped pieces before we start. Not vague phases like "build a dashboard," but concrete units small enough to build in a single focused session, with clear conditions for what done looks like.
That means that together, for Ghost AI, we'll map out the entire build. Each lesson will have its own spec file. The spec defines the goal, the design decisions, the implementation details, dependencies, and a checklist of what has to be true before the unit is complete. You write the spec the same way you write the context files: through a conversation with a planning AI. Then you give the spec to your coding agent in one prompt: read that spec, mark that unit as in progress in the progress tracker, and implement it exactly as specified without going beyond scope.
The agent will read your spec, read your context files, and build against a defined system instead of guessing. You review it against the checklist, and if it passes, you close the unit, push the code, and move to the next spec. If something's off, you write a focused corrective prompt: exactly what's wrong, exactly what you expect, fix that specific thing, and move on. That's the entire workflow. The same one I used to build Ghost AI, and the same one you're about to use to build it with me.
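For reference, the kickoff prompt barely changes from unit to unit; paraphrased from the workflow above, it reads something like this (only the spec file name varies):

```
Read context/feature-specs/NN-feature-name.md.
Mark this unit as in progress in context/progress-tracker.md.
Implement it exactly as specified. Do not go beyond the spec's scope.
When done, verify against the spec's checklist and update the progress tracker.
```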
Oh, and one last thing before we begin. These files take time to write. The conversation, the architectural decisions, the unit planning, all of it is real upfront work. And a lot of developers skip it because they want to feel productive immediately. I mean, I've done that as well. So, you open up the agent, type a prompt, and start watching code appear. But that's the trap. The time you save by skipping this is the time you'll lose in week three debugging AI output that's drifted away from anything coherent. So do this work once and you'll do it faster every project after. Okay, enough talk. It's finally time to build.
Open up your desktop and create a new folder. Call it something like ghost-ai and then simply drag and drop it into your favorite code editor. For this video, I'll be using VS Code: we have the file explorer on the left side, the code in the middle, and then the chat window, which you can open up by pressing Cmd+Shift+P and searching for "chat," or I think it's just Cmd+Shift+I to open it as well. No matter which agent you're using, be it Copilot, Claude, Codex, or whatever, the interface is more or less the same.
This is how it looks on Codex. And this is just general chat. So this video is completely agentic-tool agnostic. The thinking and the specs are what matters. The tool is completely your call. I'll personally be using Claude Code, as that's what most people are using and what we're using internally within JSM. If you want the budget option, you can go with Codex. There's the Go plan, which is super cheap, the Plus plan, which is also cheap, and I think they're also offering a month for free. And if you're already paying for something like Cursor, Windsurf, or any other AI agent, just use that.
I'll make sure you don't spend a lot of tokens, no matter what you're using, and that you'll learn a ton. Okay, so let's get started. We'll start with a fresh Next.js project. I'll do this manually with no prompts and agents. It's just a single command, it takes a couple of seconds, and it'll give us a clean and predictable foundation to build everything else on top of. So simply open up your integrated terminal and run `npx create-next-app@latest .`, which is going to create it in the current directory. It'll ask you whether you want to install the create-next-app package.
So you can just say yes and proceed. And I also want you to know that you don't necessarily have to follow along by using Next.js. Of course, the majority of the context for AI agents we'll be working with is going to be centered around Next.js. But if you want to build this in TanStack, Angular, Vue, or anything else, the concepts will still be just as valuable. So let's just say y for now. It's going to ask you whether you would like to use the defaults. So just press enter, which is going to install React, TypeScript, ESLint, Tailwind CSS, and the App Router.
Let's give it a moment until it finishes. Once the installation finishes, we need to clear out the default boilerplate. That's going to be within app, and then page.tsx. Next.js ships with a lot of placeholder content that honestly we don't really need. Now, you could go ahead and clean it up manually by opening up your globals.css and removing everything besides the Tailwind CSS directives, deleting some SVGs and other stuff within the public folder, and replacing the page.tsx with a minimal component. Or we can use this as our first interaction with the coding agent. So go ahead and open up your agent of choice.
I'll just press Cmd+Shift+I to open up my chat sessions. And what I love about VS Code is that it's not shying away from the agnostic approach to agents. You can use their agents right here, such as Copilot CLI, or if you head over to extensions and then install an extension like Claude Code, which has like 12 million downloads, or something like Codex, which has 8 million downloads (I installed both), you can just navigate over to them by selecting additional views and then choosing the extension you want. Of course, alternatively, you can also just run all of these agents within the CLI by typing `claude` and you're in.
For the longest time, I've been using the Claude CLI, but the Claude VS Code extension recently got so much better. And honestly, it works the same way as the CLI does, but with a bit of a nicer UI and a graphical interface that lets us use it in an easier way. So, that's what I'll be proceeding with. I'll expand it so we have more space to work with. And you can notice that there's a little microphone button right here. So, you can either tap it or hold Cmd+D to speak into it. With Codex and other agents, you can either type it manually, or what you can do is use a tool like Whisper Flow (not sponsored, by the way), which is what I use to write prompts when I speak with agents.
Speaking is just much faster. If you install it, you can just press the command key and start speaking into it. Like: clean up this Next.js boilerplate, strip globals.css down to just the Tailwind directives, delete all SVGs in the public folder but keep the favicon, remove page.module.css, and replace page.tsx with a minimal component that just renders a centered div saying Ghost AI. And I can stop it right here. And you can immediately see the output. Cool stuff, right? Go ahead and write something like this, and let's see how well the agent handles it. Also, just so we don't have to give it permissions every single time whenever we're doing something.
For now, I will say edit manually. Oh, and of course, we have to talk about the actual models we'll be using. In this case, we're using the default model, which is Opus 4.7 with a million-token context, which is amazing, but you're going to hit the limits very soon. Instead, you'll be able to follow along this entire build using Sonnet 4.6. It's fast, it's inexpensive, and it sometimes gets lost unless you have the context files, which I'll teach you how to build so you can guide it in a better way and make it work much more like Opus does.
So, that's what I'm going to select. If you're working with Codex, you can also go ahead and choose any model you'd like, like 5.4, 5.3, or anything else. Okay, let's give it a shot. It's going to think for a couple of seconds, read all the necessary files, and apply all four changes: globals.css stripped down to just the import, page.tsx replaced with a minimal centered Ghost AI component, and five SVGs deleted but the favicon kept. And this file that I tried to trick it with doesn't really exist, so there's nothing to delete. You can see all the changes in the diff right here.
And you can also see which files have been modified on the left side. There we go: just a simple Ghost AI heading and just the import for Tailwind, which means that these classes right here should work to center the div. Before we run it, let's actually head over to package.json and make sure that the project name is set to ghost-ai, and let's run it with `npm run dev`. It'll run on localhost:3000. So, if you open it up, you should be able to see something that looks like this. This means that our project is initialized and cleaned up, and now we are ready to move to the part that actually determines just how well everything goes.
Setting up our context files and planning the build before the agent writes a single feature. So, let's do that next. Not that long ago, I talked about how senior engineers spend weeks designing systems before anyone writes a line of code. They write design docs. They define boundaries and they make decisions early on so that everything else becomes predictable. So let me show you how we are going to approach that. We're going to create a new folder right here in the root of our application which we're going to call context. Everything inside of it is what your coding agent will read before it does anything.
This is how it knows your project and this is how it stays consistent across every session, commit, or unit of the build. So you can switch between multiple agents, or share the context folder with another developer so they can continue working with their agent from there. That's the key. We're going to have a couple of different files within the context folder. Now, I don't want you to do a lot of copy-pasting, so I'll provide you with the final zipped context folder so we can easily review everything together. So, delete the context folder you just created.
In the video kit linked down in the description, get the zipped version of the folder, unzip it, and then just drag and drop it in. And here you'll see six, well, seven different files with this agents.md. It might seem scary at first, but don't worry, as I'll walk you through every single one of these files and show you how you can create them for yourself in the future. Let's start with the project overview. And let me walk you through why it's structured the way it is, because understanding this is more valuable than just copying the file.
Currently, we're looking at the markdown version of the file. But in VS Code (I believe this is built in by default), you can also open the preview to the side by pressing Cmd+K and then V, or just pressing this icon at the top. And then you can close the actual .md and just see the formatted version. I think it's a bit easier to read this way. I typically open these project overview documents with a one-paragraph summary and then a numbered list of goals. This gives the agent the big picture immediately. So when a requirement gets ambiguous 3 weeks into the build, and it will, this is where you resolve it.
The agent uses this constantly to understand intent when a spec isn't specific enough. Notice the goals are concrete and measurable. Not "build a good canvas." Instead: "let authenticated users create and manage architecture projects" or "let AI generate an initial architecture from a natural language prompt." The agent knows exactly what success looks like. And yeah, this is useful for us while we are going through the process of building the app: just so you know what we are building, and just so everybody on the team, in this case me teaching you and you following along and building it with me, is on the same page.
Ghost AI is a real-time collaborative system design workspace. Users describe a system in plain English and an AI agent maps that system onto a shared canvas. Collaborators refine the architecture, and the app generates a technical specification document from the resulting graph. Okay, great. We have the overview, we have the goals, and then we have the core user flow. So, let me zoom in and let's go through this together, because this sequence matters. The agent can sometimes lose track of how features connect to each other. By defining a full flow from sign-in to spec generation, we make the logical sequence explicit.
So the agent won't try to build a spec generation feature on the login page, because it knows the user has to create a project and design the architecture first. In the features, we get specific about technologies. And I want to take a second on this, because the tools I chose for Ghost AI weren't random. I took a lot of time to choose what actually makes sense. Starting with authentication for user sign-in, route protection, project creation, ownership, and collaborator access: we're using Clerk. Could you have gone and created the full auth from scratch? I mean, sure, but it would take you a couple of days up to a couple of weeks to do it properly, even with AI.
And when you do it, it's never going to be as secure as something like Clerk. And nowadays, the speed of shipping matters more than anything. If you have a specific product you want to push, you want to get it in front of potential users as soon as possible. And that's why for many of these agentic builds, it's just a no-brainer to plug Clerk into it, especially considering just how well Clerk works with your agents. You can either just copy the prompt or use Clerk skills with whichever agent you're using. That way it'll immediately become a professional developer at using Clerk, and it'll be able to implement it within any project.
And there's also the MCP, which is a server that allows AI agents like Claude, Cursor, or others to access Clerk SDK snippets and implementation patterns and basically implement everything for you. Oh, and not to mention that a Clerk CLI is also being worked on. You'll just be able to ask your agent to use the Clerk CLI to add auth to your app, allowing you to not even have to leave your terminal or copy and paste any API keys. Clerk right now really is the leader in agentic development. And with completely free pricing of up to 50,000 monthly recurring users, it's a no-brainer for me to build all of my applications with it.
So, while we're here, I'll leave the link down in the description. You can click it and then sign up so that we can very soon more easily get started with building. Okay, on top of auth, the most important part of our application is the collaborative canvas, and in this case I wanted to make it real time. So for any kind of canvas, it makes sense to use React Flow. And if you want to make any part of your application live, with cursors so you can see what other people are doing, presence indicators, and node or edge editing.
I mean, it just makes sense to use Liveblocks. Liveblocks is the leader for anything multiplayer. And not only apps, agents as well. Let me show you what I mean. This is the app without Liveblocks. And then if you add it, you immediately get live avatars. You can get the AI chat to generate something. And you can also leave comments and track what people are doing within your app, which is super useful for the collaborative canvas we're building. But what's even cooler is interacting directly with AI assistants to do something within our application.
I mean, if you try to build all of this from scratch, it would definitely take some time. But with their AI assistance feature, you can just watch an AI do it for you. Oh, and this whole React Flow thing you're seeing, that got released recently, which means that in this video, we're building the latest stuff out there. Oh, and like Clerk and the many other amazing dev tools that are adapting to agentic development, Liveblocks also offers agent skills, which you just have to install, and your agent will immediately know what it has to do to make the Liveblocks integration work.
I'll teach you about all of that as we continue with the build. After the canvas, we have the starter system designs, which I decided to have because it's very difficult for people to start with a blank canvas. So, I want to have some kind of curated library of pre-built system design templates, where users can import a starter template into the canvas at any point during editing. But also, if the template doesn't do what you want it to do, we're going to allow our users to generate a system design from a prompt with AI. Then that output will be structured as canvas nodes, and we'll write it onto the canvas.
And finally, we'll take a look at everything that is on the canvas and convert it into a technical specification in markdown format, and then users will be able to view and download the generated specs. Oh, and an additional thing that I want to teach you: while we're generating stuff with AI, that'll obviously take time, as the user will likely provide a long idea of how they want to architect their app, and the AI output is going to take potentially a minute or two. Running a more-than-60-second AI generation call inside a Next.js API route will just time out in production.
So, that's why I'll also teach you how to use Trigger.dev. It'll allow us to run background tasks that can run as long as they need to, with retry logic and status tracking built in. The way in which we'll combine Liveblocks and Trigger.dev is pretty amazing. But we'll have to be very clear in letting our AI agent understand that it doesn't have to invent a custom WebSocket implementation when Liveblocks is already in the stack, and that it shouldn't try to run AI generation inside a request handler when Trigger.dev is defined as the layer for that work.
Naming the tools we're going to use for the project in advance is crucial. Oh, and Trigger.dev allows you to do so much more. Recently, they've been diving deeper into letting you build and deploy AI agents, which is something we can explore in an additional video. But in this one, we'll focus on many of its features, specifically running background tasks and reporting back to the frontend. So, while a long task is happening, we can keep the user updated on what's happening behind the scenes and let the user continue doing whatever they're doing in the application, because we're no longer blocking the frontend while handling a specific task.
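To make that boundary concrete, here's a minimal sketch of what such a job can look like with the Trigger.dev v3 SDK. The task id, payload shape, and the callModel stub are hypothetical, not the video's actual code:

```ts
// trigger/generate-architecture.ts -- a hedged sketch of a long-running AI job.
// It runs outside the request/response cycle, so it can take minutes and retry safely.
import { task } from "@trigger.dev/sdk/v3";

// Hypothetical stand-in for the real model call; this is the part that may take minutes.
async function callModel(prompt: string): Promise<{ nodes: unknown[] }> {
  return { nodes: [] };
}

export const generateArchitecture = task({
  id: "generate-architecture", // illustrative id
  retry: { maxAttempts: 3 },   // retries and status tracking come with the platform
  run: async (payload: { projectId: string; prompt: string }) => {
    const result = await callModel(payload.prompt);
    return { projectId: payload.projectId, ...result };
  },
});

// A route handler then only enqueues the run and returns immediately, e.g.:
//   const handle = await generateArchitecture.trigger({ projectId, prompt });
//   return NextResponse.json({ runId: handle.id });
```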
So I'll leave special links pointing to Trigger.dev as well as Liveblocks down in the description, so you can create your accounts and then we'll be able to immediately dive into the development. Then we have the scope part, which contains the in-scope and out-of-scope features, and it's maybe the most important section for keeping your build focused. The out-of-scope section is doing serious work right here: billing and subscription systems, enterprise permissions, versioned specification history, and so on. This is telling the agent: don't even think about them. We can add them later on, after we build the base of our application.
This is great because it keeps every session focused on what we are actually building. And finally, there's the success criteria. Here we define what actually matters: the benchmarks that you and your agent can verify against after each major feature lands. Not "does it look right," but: can a signed-in user create and open up a project? Can multiple users collaborate? Can the graph be converted into a persistent markdown specification? Very simple, yet concrete. So that's our project overview, written in a way that makes sense to agents but also to other team members working on the project.
Next, let's take a look at the AI workflow rules. This file is different from the others in that it's not about what we're building; it's about how the agent behaves when building it. And the most important rule right here is to work on one feature unit or subsystem at a time. We don't want to combine unrelated system boundaries in a single implementation step. And that single rule prevents most failures that agents cause. Basically, all of these rules right here tell the agent: stay in your lane. Focus on one part of what you're doing and then move on to the next one.
After that, we have the code standards. So, you can open that up and let's quickly take a look. This file is what keeps the codebase consistent from our first chapter all the way to the last. Without it, the agent would drift: the specific patterns it uses for API routes when implementing feature 5 might look different from feature 16, or it might change some things regarding the TypeScript types, how it uses Next.js, or how it styles things. But here we add that consistency. We tell it to have strict mode enabled and avoid using any; we tell it to add "use client" only when the component needs browser interactivity; and for styling, we tell it: no raw Tailwind color classes, reference the tokens through the Tailwind utility names.
I think you get the point. We want to stay consistent. Then we have the UI context, which, as you can guess, dives a bit deeper into theming. When I was initially coming up with the design for this application, I wanted something that is simple yet functional and modern, and I spoke with AI a bit to generate this theme. Dark mode, no light mode. All colors are already defined, and we're telling the AI to use these colors. And again, if you're wondering how exactly I created all six of these documents by speaking with different AI agents, that's something I'll cover in much more detail within the spec-driven agentic development course.
So, you can join the waitlist in the description. But I think you get the idea: I didn't just sit down and handpick every color in the file. I described the aesthetic that I wanted to AI: dark, technical, precise, something that feels like an engineering tool. And then I went back and forth on the palette and the token names, and this is what it came up with. There are also some fonts, border radii, and so on. That's exactly how you should approach your own UI. You don't need to be a designer, but you need to know the feel you're going for, and then AI can help you get there.
Oh, and while the project overview gave some information about the project, the architecture context will give it the blueprint on how to build it. Here I specified the tech stack: every technology that we want to use, alongside the role that it has within the application. For auth, we're using Clerk for user identity and route protection. For the database, it's going to be Prisma and Postgres. For the real-time collaborative canvas, it's going to be Liveblocks and React Flow. For background tasks, it's going to be Trigger.dev. And for storage, we're going to use Vercel Blob.
We also define some system boundaries, like where we're going to put the request handlers, that Trigger.dev is what we're going to use for long-running background jobs, and some other folders that we're going to use. Then we define the storage model: where and how we're going to save something. And specifically here, I decided to use a hybrid storage model, which is another senior-level decision. We aren't going to be stuffing massive JSON blobs or three-page markdown files into our database. Instead, we're going to use Postgres only for metadata and Vercel Blob for the actual files.
And this keeps our database lean. And this is important: we also need to tell it how the different tools work together. Since we're using Clerk with Liveblocks, we need to set a strict rule that project membership must be verified before a Liveblocks token is ever issued. This ensures that our Ghost AI isn't just collaborative, but secure. And finally, invariants are the rules that the system must never violate. For example: request handlers do not run long-lived AI work; that belongs in background tasks through Trigger.dev. Metadata and large artifacts are stored in separate layers. Auth and ownership are enforced at every mutation boundary.
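As a sketch of what that Clerk-plus-Liveblocks rule turns into in code, here's a minimal token route, assuming Clerk's auth() helper, @liveblocks/node, and a hypothetical Prisma projectMember model (the membership schema is an assumption, not the video's actual code):

```ts
// app/api/liveblocks-auth/route.ts -- minimal sketch of the invariant:
// no Liveblocks token unless Clerk auth passes AND project membership checks out.
import { auth } from "@clerk/nextjs/server";
import { Liveblocks } from "@liveblocks/node";
import { prisma } from "@/lib/prisma"; // assumed Prisma client export

const liveblocks = new Liveblocks({ secret: process.env.LIVEBLOCKS_SECRET_KEY! });

export async function POST(req: Request) {
  const { userId } = await auth();
  if (!userId) return new Response("Unauthorized", { status: 401 });

  const { room } = await req.json(); // assumes the room id doubles as the project id
  const member = await prisma.projectMember.findFirst({
    where: { projectId: room, userId }, // hypothetical model and fields
  });
  if (!member) return new Response("Forbidden", { status: 403 });

  const session = liveblocks.prepareSession(userId);
  session.allow(room, session.FULL_ACCESS);
  const { status, body } = await session.authorize();
  return new Response(body, { status });
}
```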
A couple more invariants: client components are only used when needed, and the canvas schema must remain consistent. Of course, we'll dive much deeper into this when we actually get into building the application, but I wanted you to have a good idea of what it is that we're building. And finally, there's the progress tracker. This is the only file in the context folder that'll look completely different by the end of this video. Right now, it is completely empty, intentionally, because it reflects the actual state of the project, and right now, nothing has been built yet. So, as we complete each lesson, we're going to update this file: the current phase, what's in progress, what's complete, and what's coming next.
And remember what I said in the beginning about agents having no memory between sessions. This file is the solution to that problem. At the start of every new session, whether that's tomorrow, next week, or 6 months from now, one prompt is all it takes to restore full context. Our agent will read the progress tracker, understand exactly where the project stands, and pick up exactly where you left off, so you don't have to re-explain yourself. Even though it's going to be a small file, it does more work than any other file in the project. Oh, and finally, there's the agents.md file.
Within it, we're going to wire everything together. When you installed Next.js, that file was already created for you automatically at the root. This is the entry-point file, and every major coding agent has one; they just name it differently. Claude calls it CLAUDE.md; Cursor, Windsurf, and others use different names, but the idea is always the same. It's the first thing that the agent reads at the start of every session. So, Next.js has already added the first section for you, and it tells the agent that this is a recent version and its training data might be outdated.
So: read the installed documentation before writing any code. A good default, and we can leave it. But now we can add our own part below it. So copy your agents.md, delete it from the context folder, and instead move it right here. It's not long, so let's see what we have within it. We're basically instructing the agent to read all six context files in order before implementing anything, and then to update the progress tracker after each change. And you might think: isn't this going to waste context and tokens? Well, sure, it has to read through the files, but that's not even one-tenth of how many tokens you're going to save, because there's going to be less back and forth, fewer mistakes, and less bug fixing and correcting while you're implementing the features in the first place.
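Paraphrased, the section we append to agents.md reads something like this (wording is mine; the downloadable kit has the exact file):

```markdown
## Project context (read before any implementation)

Before implementing anything, read all six files in `context/`, in order:
project-overview.md, architecture.md, code-standards.md,
ai-workflow-rules.md, ui-context.md, progress-tracker.md.

After each change, update `context/progress-tracker.md` with what was
completed and any architectural decisions made along the way.
```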
So, that's the system, and now that you understand it, let's start building with it. Now that the context files are ready, before we build a single feature, there's one more thing we need to set up. And skipping this one is the most common mistake I see in AI-assisted projects. And that is the globals.css file, this one right here. We've defined all of our color tokens within our UI context right here. But if we don't translate those tokens into actual CSS custom properties within the globals.css file, well, it'll just write inline colors. So, instead of saying something like text-faint, it'll just write #505060.
And the moment you want to change it, you have to change it across all places. So, let's set it up correctly. And this is also the first real demonstration of the spec-driven workflow in action. So, let's do it properly. Create a new folder inside of the context folder and call it feature-specs, like this. And then inside of it, create the first spec file, 01-design-system.md. And then within it, we want to tell it something like: read the agents file before starting. Then we need to tell it what we're doing here, like adding the design system and UI components.
Then we are telling it to install and configure shadcn/ui. And then we want to add the following components. I figured we're going to use these across the rest of the application. We never want to modify these files after the installation, so we can specify that. We can also ask it to install lucide-react for icons and create a lib/utils.ts with a reusable class-names helper for merging Tailwind class names. Although I think it would do this by default, it's good to mention it. Finally, we want to ensure that all of the components match the existing dark theme within globals.css.
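For reference, that helper is conventionally just a few lines; this is the standard shadcn-style utility the install generates:

```ts
// lib/utils.ts -- the conventional shadcn-style class-name helper.
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";

// Merges conditional class names, letting later Tailwind classes win conflicts.
export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs));
}
```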
And then we want to apply some checks when it is done. For example, we want to make sure that all components import without errors, that cn works properly, and that no default light styling appears. This is what we call a feature specification, or a feature spec. Every unit we build from here on has one. The spec tells the agent exactly what to do, exactly what not to touch, and how to verify when it's done. No guessing. So let's open up our agent by pressing Cmd+Shift+I. As I said, you can use any of them; I'll use Claude Code.
I'll start a new chat. And you can even open up that file, which will automatically put it within its context. And then you can tell it something like: read 01-design-system.md, update the progress-tracker.md file to mark this as in progress, and then implement exactly as specified. This is the same template we'll use often; the only thing that's going to change is the spec of the feature we're developing. So let's go ahead and run it. And notice what will happen before the agent writes a single line of code: it'll read the specification and then update the progress tracker.
You can see how it's going through all the files we prepared for it. And only once it has enough context will it write and save the plan and then execute it. And only when it has a full plan will it update the progress-tracker.md and then start doing it. So we can already take a look at the progress tracker. And I'll open it up so we can see what it is doing. And you can see that the current phase is feature 01, design system. The current goal: to install and configure shadcn with the dark theme. And that is currently in progress, with feature 02 to be done.
And it's also adding the architectural decisions needed for every step, as well as the session notes. It's going to ask us whether we want to run this command to install shadcn. So I'll say yes, go ahead. And after it installs everything, it'll verify it with TypeScript. It looks like it completed with zero errors. And finally, it needs to update the progress tracker to completed. So we can already open it up right here under progress tracker. And it should move it from current phase over to completed. So now, at any point, if we come back two months later, it'll know that it's using shadcn and Tailwind v4 with the following components, only using the dark theme, and all the additional helpers as well as the architectural decisions.
So once it finishes, the agent will have configured shadcn, installed the components, and verified it all works. So, what do you say we test it out? Back on localhost:3000, the first good sign should be that we are now officially in dark mode. And if you head over to the homepage, that's going to be within app/page.tsx, we can try to use a shadcn Button component coming from components/ui/button. And we can try to make it say something like "Click me." You can see it's coming from there. And it follows our UI theme.
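If you'd rather type that check yourself than prompt for it, the homepage ends up roughly like this (my reconstruction of what's on screen, not copied code):

```tsx
// app/page.tsx -- quick smoke test: shadcn Button plus the dark theme tokens.
import { Button } from "@/components/ui/button";

export default function Home() {
  return (
    <div className="flex min-h-screen items-center justify-center">
      <Button>Click me</Button>
    </div>
  );
}
```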
That's it. Our first feature, the theming and shadcn setup, is done. And that's the spec-driven workflow I was telling you about: we define the specification, we run the prompt, we verify the output, and then we move on. But just before we move on to our second feature, I want to add one little extra step to this whole workflow that's going to make our codebase even more scalable, predictable, and less error-prone, and that is: whenever we implement a specific feature, let's also review it with CodeRabbit. There's been a report that says that AI code creates 1.7 times more problems, which means that even though we're faster, we're producing more issues per PR.
Also, the code naturally becomes less readable, and the error handling isn't being done properly. Security also suffers. So, let's add that one additional line of defense. Let's review every single feature that we add to our project, in the same way that the biggest teams, such as the developers over at NVIDIA, are doing it. I'll leave the link down in the description so you can create your free account and follow along. You can log in with GitHub. And once you're in, you'll be able to see that I already gave it access to many of my repos. But first, we've got to push our project over to GitHub.
So, head over to github.com/new and create a new repo. You can call it ghostai, and just create it. Next, you can copy these commands one by one, or you can just copy all of them together, open up your agent, and paste them. Then press enter. It's going to ask us whether we're going to allow it to stage all the changes. So, I'll say yeah, go ahead. And I'll also allow all future commits, as that's going to help us speed up the workflow, as well as adding a remote. And finally, push. You can see that the changes have been pushed successfully.
So if you come back and reload, you'll be able to see the current list of changes over on the repo. It's always good to make your project descriptive. So you can remove the releases, deployments, and packages, and add a short description such as: Ghost AI is an interactive systems architecture builder. We can point it to the deployed website; for now, I'll just point it to jsmastery.com. And here we can add the different topics, such as Next.js, React, Liveblocks, Clerk, Trigger.dev, and even CodeRabbit, which we'll be using for reviewing your code. And I like how AI always adds nice commit messages and not the random gibberish that I'm used to.
But yeah, now back within CodeRabbit, you can head over to add repositories, sign in, and give it access to all your repos. And then when you're back, you can just find your project right here, which means that it's automatically being tracked. So as soon as we start adding real features to the app, we'll be able to open up a PR for every new feature, as if we're developing within a large organization, then get it reviewed, and only when CodeRabbit gives it a green light, merge it over to main. Now that our colors are in and the initial shadcn components are in as well, we are ready to start building the actual application. And the first thing that we need is the editor, including the top navbar and the left sidebar, because they're the foundation of every feature we're going to build from here on.
We're not yet touching auth, and we're not building project creation yet. We just need to establish the layout so when these features come, they have somewhere to go. So inside of the feature-specs folder, create a new file called 02-editor.md. And within it, we're going to follow a similar structure to the one we followed before. We want to start by explaining what we need, such as the base chrome components that frame every editor screen: the top navbar and the left sidebar shell. These will be reused and extended in every chapter that follows. Then we need to focus on what we are actually creating.
That's going to be the editor navbar. So we want to create a new component within an editor/editor-navbar file. And then we want to specify some requirements that follow. We want this navbar to be of a fixed height at the top. We want it to have left, center, and right sections, which you can see right here. On the left, we open up the sidebar. In the middle, we have the name. And on the right side, we have additional actions. That's exactly what we explained right here. And of course, it's going to have a dark background with a subtle bottom border.
Next, we want to explain what we want to do with the project sidebar. So, right below, we can say: create a project sidebar component. It should float above the editor canvas. Opening it should not push the page content. So, you can see it remains where it is and slides in from the left. It has a header with a Projects title and a close button right here. It can use the shadcn Tabs component, and both tabs show empty placeholder states, with a full-width New Project button at the bottom with a plus icon. So, something like this.
Oh, and finally, when we click create new project, we need this kind of dialog, a new popup that shows. So for that, we can say something along the lines of: dialog pattern, use the existing color tokens from globals for dialog styling. It supports a title, description, and footer action, but don't build any dialogs yet; we just want to create the component for it. Finally, to check whether it is done, we can check whether the new components compile without TypeScript errors, there are no lint errors, and the dialog pattern is ready for future use. So hopefully you were typing this out with me, or maybe you paused the screen and typed it in your own words.
But this course isn't at all about typing. It is about understanding what we're doing right here. So if you don't really feel like typing, nor should you: in the video kit, I'll provide you with all the prompts that we're going to use throughout the rest of this course. So, if at any point I'm going too fast when explaining them and you just want to have it on your end and listen along, you can totally do that. But yeah, let me quickly walk you through the structure, because this is the template that every spec file in this project will follow.
We start with a goal, one or two sentences: what does this unit produce when it is done? Then we go over some specific design decisions, whether visual or structural or something specific to the component, like layout, behavior, or responsiveness. And this is where we can refer to that UI context file so that the agent isn't guessing the colors. Then we move into implementation, and in this case, I decided to separate it into sections: the editor navbar, the project sidebar, and the dialog pattern. And finally, the verification checklist.
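Boiled down, every spec file in feature-specs/ follows this skeleton (section names from the walkthrough above; the details vary per unit):

```markdown
# NN-feature-name

## Goal
One or two sentences: what this unit produces when it is done.

## Design decisions
Visual and structural choices; reference context/ui-context.md for tokens.

## Implementation
One section per component: files to create, props, behavior,
and what must NOT be touched.

## Verification checklist
- [ ] Compiles with no TypeScript errors
- [ ] No lint errors
- [ ] Feature-specific checks
```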
So let's open up our agent. Whenever you're building a new feature, always open up a new chat and don't use one of the older chats. That's because we don't want some stale context lingering around. Only remain within the same chat when what you're about to do next is related to what you've done before, such as when you want to fix specific issues. You can see that it already has access to the editor file. So I'll say: read this file and update the tasks on the progress tracker, and then implement it exactly as specified, and press enter. And in about a minute, it's all done.
It developed the editor navbar, the sidebar, and also the dialog pattern. TypeScript and ESLint both pass clean. The progress tracker also got updated. So let's check the progress tracker first. It'll be right here within the progress tracker. So if you check feature 02: completed, editor navbar and project sidebar both implemented. You can see them right here under editor: editor-navbar and project-sidebar. We've just created them so far, but they're not yet being used within the layout. And even though this wasn't part of the checks, I want to actually be able to see them. I want to tell it to use the navbar and the sidebar right now within the project.
So I'll open up the chat in the same context window and tell it to use these two components within a layout. And within a minute, it put them to use. It even created a placeholder page. So we can navigate over to the editor and check it out. So head over to localhost:3000/editor, and you can see a "canvas coming soon," but there is a top bar and a left sidebar which opens up the projects. So you can see that what we requested indeed got implemented. Since we're not yet at the point where we need that, because we don't even have the editor yet,
I actually want to show you how you can undo the changes, at least right here in Claude Code. The only thing you have to do is press this back arrow on the message that you used to create these components and then say "rewind code to here," which is going to bring us back and only give us what the specification wanted, and that is the components that we can then use and that compile, but that we're not using quite yet. Perfect. So now that we have the navbar and a toggleable sidebar, let's quickly check them out.
The editor navbar is pretty straightforward. And the sidebar accepts some props, such as isOpen and onClose, and uses the shadcn Tabs to modify what's being shown. But when you're writing code yourself, you have a natural understanding of every decision you made. Heck, you wrote it. You know why a function is structured in a specific way or why you used specific props. But AI-generated code doesn't come with that context. The agent made decisions that were reasonable most of the time, but you didn't make them. And that means that reviewing AI output isn't optional.
It's the step that keeps you in control of your own codebase. So, that's the perfect use case to test out the CodeRabbit addition from the last lesson. I'll open up the chat and tell it to push all the current changes to a new branch called development. Typically, in real production codebases, you often have multiple branches, such as dev, staging, and only then main. So for now, we want to push this over to the development branch, get it reviewed, and only if it's good, merge it over to main. So by giving it this quick message, it'll run a couple of git commands, figure out that we're currently on main, and that we need to switch over and push to that new branch.
And that's a little pro tip. I mean, sure, you could run these commands through the terminal, but I find it super easy to just stay in flow and tell the agent to push the code for me, which you can see it just did. So if you head back over to your repo, you'll see that the development branch had recent pushes 3 seconds ago. Let's go ahead and compare the branches and open up the pull request. We have 333 new lines of code across five different files. CodeRabbit immediately hooked itself onto the PR, and we'll see whether it can pull some bugs out of the hat.
Let's give it a minute and then I'll be right back. And we got back the walkthrough, which says exactly what we introduced in this project: two new React components for the editor UI chrome, an editor navbar and a project sidebar. These are super simple, as this was a simple review; later on they're going to get much more detailed. But let's check whether we have some potential issues even on a simple PR such as this one. There's one major issue that says to hide the offscreen sidebar from the focus order and assistive tech when closed.
Specifically, this is an accessibility fix, which is definitely a good catch. Since we haven't yet used this component in our app, I'll leave it so we can add it later. There's also a minor issue where the spec lists only the isOpen prop for the sidebar, but the implementation also requires onClose. So it's actually suggesting a change to the specification. This is interesting, because sometimes you're going to miss some stuff in the spec. It's okay if AI tries to fix it or add things, but it's equally important for CodeRabbit to flag it, because the whole reason we're writing specs in the first place is so we can have predictable output.
So you can just copy this part right here, go back to the spec where it says the sidebar accepts an isOpen prop, and change it to say it accepts both isOpen and onClose props. A little change, I know, but now our codebase is consistent with the spec. And that's it for this simple PR. As we continue developing more components, the reviews are going to get significantly more detailed. So for now, I'm going to go ahead and merge it, which means we're ready to continue developing the next component. For this type of application, we don't really need a traditional homepage.
Most tools like this drop you straight into the editor: you sign in from a simple sign-in page and you manage everything from the canvas. And that's exactly what we'll do within our app. But we need auth: sign-in, sign-up, and similar pages where you explain how your application works and then allow users to sign back in. So to get that set up, click the Clerk link down in the description and sign in. You can sign in with Google or GitHub. Once you're in, head over to applications and create a new app on the dashboard.
You can give it a name such as Ghost AI and choose the sign-in options, such as email, Google, and, since this is a developer-facing app, GitHub. Then click create application. We're building on Next.js, so you can just install @clerk/nextjs by copying this command and pasting it straight into the terminal. While that's installing, you can also set up your Clerk API keys by copying them from here, creating a new file called .env.local, and pasting them in. Now, before we hand anything over to the agent, there are two things you need to know about Clerk and Next.js 16 specifically, because if you skip this, the agent will get it wrong.
And it says it right here: if you're using Next.js 15 or lower, name your file middleware.ts instead of proxy.ts. The code itself remains the same; only the file name changes. But because most agents were trained on Next.js 14 and 15 codebases, they'll almost certainly create a middleware.ts file by default. So we'll have to specify proxy.ts explicitly in our spec so the agent doesn't have to guess. Oh, and another important thing: just adding the middleware doesn't automatically protect all routes. As you can see right here, by default it leaves all routes public.
And this catches a lot of developers off guard. You have to explicitly define which routes are protected and which are public, which means we have to configure everything intentionally. This is exactly why reading the updated documentation before building with AI matters: the agent knows Clerk, but not necessarily the version of Clerk you just installed or the version of Next.js you're running on. The spec bridges that gap.
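To make those two gotchas concrete, here's a minimal sketch of the proxy file using Clerk's documented clerkMiddleware and createRouteMatcher helpers; the exact route patterns are illustrative, not the ones from the project:

```ts
// proxy.ts at the project root (it would be middleware.ts on Next.js 15 or lower).
import { clerkMiddleware, createRouteMatcher } from "@clerk/nextjs/server";

// Clerk leaves every route public by default, so we explicitly list
// the public ones and protect everything else.
const isPublicRoute = createRouteMatcher(["/sign-in(.*)", "/sign-up(.*)"]);

export default clerkMiddleware(async (auth, req) => {
  if (!isPublicRoute(req)) await auth.protect();
});

export const config = {
  // Run on everything except Next.js internals and static files.
  matcher: ["/((?!_next|.*\\..*).*)", "/(api|trpc)(.*)"],
};
```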
Oh, but we can also use agent skills. As I told you at the start, most major frameworks and libraries now publish official skill packages specifically for this problem: they give your agent up-to-date knowledge, current APIs and patterns, and the best practices of the library you're using. So if you search Google or the docs for "clerk agent skills", you'll be redirected to this page. Then simply copy the installation command, head back over to your codebase, paste it into your terminal (an npx skills add command), and press enter. You might need to install the skills package by typing Y and pressing enter. Then, using the up and down arrow keys and the space bar, you can select specific packages, such as core Clerk, plus additional Clerk features or frameworks; in this case, I'll go with Clerk Next.js patterns and press enter.
Then you can select additional agents to add it to. By default, it works with Codex, Cursor, and Antigravity, but if you want to add Claude Code, you have to select it here and press enter. And we can install it in project scope via symlink, which is the recommended way, so just proceed with the installation. Perfect. Our skill is installed, and we'll be able to invoke it later on once we focus on implementing the auth functionality. So now we're ready to write the spec. Open up your context feature specs folder and create a new file called 03-auth.md.
And once again, the full feature spec files are linked in the description in case you want to just copy them and follow along, or you can slowly type them out with me. Let's start by telling it that Clerk is already installed and connected, so we just need to wire it into the Next.js app: the provider, auth pages, redirects, route protection, and the user menu. When it comes to the design, we just want to use Clerk's dark theme from @clerk/themes as the base, and then override the Clerk appearance variables using the app's existing CSS variables, with no hard-coded colors on the sign-up and sign-in pages.
Here's what we want on large screens: a simple two-panel layout with a logo, a tagline, and some text about features on the left side, and a centered Clerk form on the right side. On small screens, the form only. We don't want any kind of gradients, as that's going to seem AI-ish: no oversized hero sections, cards, or scroll-heavy layouts. Keep the layout minimal and professional. Then we can dive into a bit more detail on the full implementation. So we can say: wrap the root layout with the ClerkProvider using Clerk's dark theme.
Create sign-in and sign-up pages using Clerk components. And as I told you before, we have to be specific, telling it to use the proxy.ts file name at the project root instead of middleware.ts. Then we define public routes using the existing sign-in and sign-up environment variables and protect everything else by default, which means we have to update our homepage: when authenticated users visit it, we redirect them to the editor, and when unauthenticated users visit it, we redirect them to the sign-in page. We also want to add Clerk's built-in user button to the right section of the editor navbar for profile settings and logout.
We want to keep Clerk's default user menu and profile flows intact rather than rebuilding heavily customized Clerk internals, and we want to use the existing Clerk environment variables without renaming them or inventing new ones. Finally, we can specify additional dependencies, such as @clerk/themes if we need to install it. And when it's done, we want to check that the proxy file is there, that all routes are protected except the public auth routes, that all pages use CSS variables with no hard-coded colors, that the ClerkProvider wraps the layout, and that the build passes. That's our complete authentication spec.
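To give you a feel for what the provider part of this spec translates to, here's a minimal sketch; the file path and structure are assumed, and the real version would also map Clerk's appearance variables onto the app's CSS variables:

```tsx
// app/layout.tsx: wrap the whole app in ClerkProvider with the dark base theme.
import type { ReactNode } from "react";
import { ClerkProvider } from "@clerk/nextjs";
import { dark } from "@clerk/themes";

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <ClerkProvider appearance={{ baseTheme: dark }}>
      <html lang="en">
        <body>{children}</body>
      </html>
    </ClerkProvider>
  );
}
```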
Now you know the drill: go ahead and open up your agent, give it access to this file, and tell it to read it, update the progress tracker file accordingly, and then implement the auth feature exactly as specified in the auth spec file. Okay, you can see that the built-in microphone feature right here still isn't perfect, so let me try dictating the same thing with Wispr Flow. There we go, that's a bit better. And specifically, the file is 03-auth.md. Perfect, let's go ahead and run it. First, it's asking me to run some bash commands to check whether Clerk has been properly installed.
And we'll say: yeah, go ahead, you can run it. It took it about two minutes to go through all the context we shared, and only then did it start implementing everything in parallel. This is much better than if it had started right away and ended up with a bunch of mistakes. The list of updates it put together is: update .env.local with the Clerk sign-in and sign-up URL vars; create proxy.ts with Clerk middleware route protection; update the app layout with the ClerkProvider and dark theme; create sign-in and sign-up pages with a two-panel layout.
Update the app page to redirect based on the auth state; add the user button to the editor navbar; create the editor page; then update the progress tracker and run npm run build to verify. It's going to ask whether it can create some directories, and I'll tell it: yeah, go ahead. In the future, we can give it more permissions so it can work a bit more freely. And after it came up with the initial plan, it actually built everything out pretty quickly, maybe in even less time than it spent thinking about how to approach it in the first place, which shows you just how important the initial context and a proper task are.
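As an illustration of the redirect item on that list, the homepage could end up looking roughly like this (paths assumed):

```tsx
// app/page.tsx: send signed-in users to the editor, everyone else to sign-in.
import { auth } from "@clerk/nextjs/server";
import { redirect } from "next/navigation";

export default async function Home() {
  const { userId } = await auth(); // null when there is no active session
  redirect(userId ? "/editor" : "/sign-in");
}
```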
And there we go, the build passes. Here's the summary of everything that was implemented. None of it should come as a surprise, because we specified it all in the spec, it updated the to-dos, and then it just did it. And it's pretty interesting that it even notes right here that it would have gotten confused about the middleware/proxy naming if we hadn't specified it properly. Thankfully, it did it the right way thanks to the research we did at the start. So, what do you say we take it for a spin?
The application is running on localhost:3000, but whenever I make bigger changes, I like to rerun it. Initially, if you head over to localhost, you might get redirected to a Clerk handshake URL, which leads to a broken page, but after you reload, it should properly redirect you back to the homepage, and after that it should just work. This is something we can polish and fix up later. But yeah, this is looking interesting; definitely not quite as nice as the original deployed application I showed you. So, what are some of the things we can do to make it look more similar to that one?
Well, step one is to search design websites like Dribbble or galleries of award-winning sites, take a screenshot, and try to get it to match that more closely. Or, in this case, you can head over to the deployed version of this application, take a screenshot of the whole UI you see right here, and feed it into our chat. So go ahead and open up the chat. Keep in mind that we can still stay within the authentication chat we worked in, because now we're fixing parts of that specific implementation.
What you can do is click this plus right here and upload from your computer, or just drag and drop the screenshot, then select it. Then we can point out a few things that could be improved beyond the layout: it seems like it didn't properly apply the fonts, and the right side of the screen is taking up a larger portion, so maybe we can split them 50/50, same as it is here, and make the font size a bit larger. So it's not just the layout: we ask it to review the screenshot and update the UI of our current application to look more like the one on the screenshot.
That means a 50/50 left/right layout, with some kind of color on the left side to differentiate it from the dark background, and we need to fix the fonts so they use the ones outlined in our UI guidelines. I think for now this is going to be enough to get closer to the design we want. So press enter and let's see how it handles it. And there we go, the build passes. It updated the globals, replacing a circular font reference with the one that should be correct.
The body was pointing at itself. So, this font was never actually applied to anything. Same fix for the heading. And it also implemented some other fixes. Let's check it out. The design now looks much closer to the finished product. The fonts are being properly applied and that makes a big difference. So does the increase in font size and this shift between the two different background colors. Of course, later on we can come up with a unique logo that we can put right here. But for now, this is looking great. And believe it or not, we have a fully functional authentication system built in right here.
So, what do you say we go ahead and test it out? Head over to sign-up to see whether that works, and it does. Later on, if you want, you can modify the contents on the left side depending on whether you're on the sign-in or sign-up page. Then let's use something like GitHub to sign in. I'll authorize it, and the redirect takes us to a page that currently breaks. So I'll head back over to localhost:3000 one more time, and now it actually redirects to the editor properly.
You can see that we have the sidebar, a space for the canvas that's coming in the future, and, on the top right, all the information about our currently logged-in account, which is beautiful. I mean, we get complete user management from a single prompt. Let's also try to sign out. And we do get one issue when we click that button, saying there's an unexpected response received from the server. So if we head back over and open up the terminal, we can see an unhandled rejection.
"Unexpected response was received from the server." In the terminal, we see something like this, which doesn't really tell us much. Now, what we could do is copy this, open up the chat window one more time, paste in this error, and tell it to fix it. But I'm actually glad this error happened, because I can teach you a better and more precise way to handle these issues, so that when you fix them, you don't cause new ones. Instead of pasting this right into the chat, we're going to open up a new file right here within context and call it current-issues.md.
You can even do it within feature specs; either way works. Then you can paste in whatever errors you have and explain what's happening. So I'll explain that when I click the log out button, the following error appears, and paste in the error message. We can also specify that sometimes when we log in, we get redirected to a weird long URL that doesn't show anything on the page, and paste that URL right here. Again, the errors on your end might be a bit different from what I'm seeing right here.
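Either way, the file might end up looking something like this; the wording is mine, and you'd paste your actual output in place of the placeholders:

```md
## Current issues

1. Clicking the log out button throws:
   "Unexpected response received from the server"
   <paste the full error and stack trace from the terminal here>

2. After logging in, we are sometimes redirected to a long Clerk handshake URL
   that renders a blank page:
   <paste the URL here>
```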
Whatever they are, I don't want to teach you how to copy and paste; I want to teach you how to solve problems for yourself. So we tell the agent: explore the current issues file and deeply analyze the problem. Only when you have the analysis, give it back to me with your idea of how you're planning to solve it, and then wait for me to give you the green light to execute. Writing something like this makes sense because that way it doesn't go into a spiral of trying to fix its own bugs while breaking ten other things.
You provided the error; it comes back with the analysis; and you're the one deciding whether that analysis makes sense and whether it can actually execute it. So let's run it and see what it comes back with. And after some thinking, it's back with the analysis, and this is so much better and so much more detailed than if we had just told it to fix it immediately. Now it has actually tried things out and has a solid idea of what's happening. The server log is the key clue: the proxy took 373 milliseconds out of a roughly half-second total request.
Almost all the time is spent in the proxy, so that already points it in the right direction. As a human, I would never have figured this out on my own, at least not from such a vague error message. Here it figures out that the button needs an after-sign-out URL, so it's just going to add it. A similar thing is happening with the handshake URL: it basically needs to configure the after-sign-up URL so it routes properly. So it provided a two-fix plan, and this looks plausible to me.
Looks good. So let's just tell it to execute the plan and fix the issues. There we go. The fix now seems very apparent: we just needed this after-sign-out URL. And if you want to learn more tips and tricks just like this one about analyzing and fixing errors, and how I approach building these production-level applications, definitely check out the spec-driven agentic development course. It's not out yet, as it's going to be super detailed and follow the best practices of the strongest developer teams out there. But it'll be out soon, and I'm still super glad that while I'm developing it, you can learn how I build these applications in this agentic way with this new video you're watching right now.
And you can let me know down in the comments how you like this new type of video. I get that it's completely different from manual coding, but I still think there's so much to learn, and we can have a predictable development workflow even with AI doing the writing for us. So let's see whether this actually fixes it. And the build passes. The afterSignOutUrl prop was removed from the user button, as it belongs in Clerk's options and is set via environment variables. So we'll soon be able to verify whether that actually fixes it.
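For reference, the relevant .env.local entries would look roughly like this, assuming Clerk's documented Next.js variable names (values elided):

```
# .env.local: Clerk keys plus the route/redirect configuration.
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_...
CLERK_SECRET_KEY=sk_test_...
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up
NEXT_PUBLIC_CLERK_AFTER_SIGN_OUT_URL=/sign-in
```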
But before that, I want to open up my terminal, stop the dev server, and rerun it on localhost:3000, because we changed the env file and want to make sure the new values are picked up. Now, heading back over to localhost:3000, maybe you're signed in already, which is fine: you can simply sign out now. And that brings us to another error, the same one we had before. Another thing you can do is open inspect element, switch over to the Application tab, and clear all the cookies: find the cookies for localhost:3000, clear them, and reload the page.
You'll be redirected back to the homepage, and now we can retry with a clean slate. So I'll sign in using GitHub the same way I did before. That works; I automatically got redirected over to the editor, which is great. And now I'll sign out, and that worked too. So now head back over to your terminal and run git add ., git commit -m "implement auth", and then git push. This pushes all the changes to origin development, allowing us to open up a pull request over to the main branch. Then you can open up a PR, and let's wait for CodeRabbit to review it.
This time we had many changes, but not many of them are directly related to what we did in the code; we just added these agent skills so that everyone else's agents working on this codebase also become smart in the technologies we're using for the project. And then, of course, we implemented a couple of different files that CodeRabbit will verify. But primarily, within the app's sign-in and sign-up pages, we added the feature text that displays on the left side, and on the right side of the sign-in page we just render the sign-in component coming from Clerk.
A similar thing happens on the sign-up page, where we just render the sign-up UI. Then, over in the layout, we're wrapping everything with the ClerkProvider, setting the theme to dark and some custom variables. That's it; everything else remains the same. Within the editor page, we simply show the navbar and the sidebar as well as the rest of the content, and in the navbar we display the Clerk user button, which lets us see more info about the user and log out. Pretty straightforward so far. As our components and features get more detailed, we'll do deeper dives into the codebase.
But so far so good; let's wait for the review. And quickly, we're back with a full walkthrough: this pull request integrates Clerk auth into a Next.js application and establishes comprehensive agent skills for Clerk integration across multiple frameworks. It also adds the auth middleware, sign-in and sign-up pages, and the ClerkProvider setup, and introduces five new skill definitions with supporting scripts, documentation, and so on. Obviously, a lot of the checks from CodeRabbit here are going to be about the skills we set up, but we can skip those for now and focus on the ones about the files we actually implemented.
In this case, it looks like there's one critical issue, and it's within context/current-issues.md. Well, you never want to publish current issues to GitHub anyway, because you want people to see your code, not your mistakes. But what we're doing here is even worse: we're exposing the handshake JWT token in the pasted stack trace. This is a real security issue, so what we need to do is delete this file. Obviously, this won't delete it from the git history, but that's fine for us because this JWT is no longer in use.
So, over in .gitignore, I'm going to add /context/current-issues.md, and I'll also remove everything inside the file itself. Not that it matters right now, because we're adding it to .gitignore anyway. Let's go ahead and push those changes by running git add ., git commit -m "update .gitignore", and git push. The changes are immediately recognized, which means we can merge this over to the main branch. And while we're here, we can also head into context on the main branch and remove the current issues file from GitHub by simply deleting the file and committing the changes.
We can do the same thing on the development branch by heading over to context, then to current issues, and removing the file so it's no longer here; it won't be pushed to GitHub any longer because we added it to .gitignore. Perfect. With that, we've successfully implemented the full UI and functionality for authentication within our application. Now that our authentication is done and we can actually sign into the app, let's make the sidebar do something, since right now it's just static. Before we hook up any real data, we need the UI in place:
the create, rename, and delete dialogues, plus the editor home state. So we're keeping this prompt focused on UI only, with no API calls yet. Head over into context feature specs and add a new file called 04-project-dialogues.md. Within it, we specify that the goal is to build the editor home screen and add the project dialogues and sidebar actions, with no API calls yet. So what does this mean specifically? Well, on the homepage we want to reuse the existing editor layout without modifying the navbar or sidebar behavior, but in the center of the page we want to add a heading ("Create a project or open up an existing one") and a description
("Start a new architecture workspace or choose a project from the sidebar"), plus a New Project button with a plus icon, keeping the layout minimal without wrapping this content in cards. Then clicking New Project should open up the create project dialogue. Let me actually show you what I mean by heading over to the finished version of the application and quickly signing in: notice how in the middle we have some text greeting us even though the canvas is empty, letting us create a new project, which then opens up this dialogue.
That's exactly what we want to achieve. So we'll have a dialogue for creating a new project, as well as for renaming one and deleting it. Let's specify these three dialogues below. There's one for creating a project, with a project name input and a live slug preview based on the name that updates as the user types. Then there's one for rename, with a prefilled project name input, and one for delete. Finally, on the sidebar, we want to add the following project item actions: rename and delete. Show actions only for the projects we own, and hide them for shared projects or projects belonging to collaborators.
On mobile, tapping outside the sidebar closes it, and we want to add some kind of backdrop scrim. Finally, let's specify the implementation by telling it to create a dedicated hook to manage the dialogue state, the form state, and the loading state; that way, we can reuse the functionality. Then we want to wire the editor home's New Project button to the create dialogue, the sidebar's create button to the create dialogue as well, sidebar rename to the rename dialogue, and sidebar delete to the delete dialogue. For now, we only want to use mock project data, with no API calls.
And for the checks when it's done, we verify that the sidebar actions are wired, that the slug preview works, and that there are no TypeScript or linting errors. So let's open up Claude Code, or your AI agent of choice, and tell it to read this file, update the progress tracker to mark this as in progress, and then implement it exactly as specified. And let's run it. In about a minute or so, feature 4 is done. Here's what was built: a new hooks file, use-project-dialogues, which centralizes the dialogue type, the form state (including the name and the slug), the loading state, and the mock data, and exposes functions for dealing with the dialogues.
There's also the dialogue component itself, and the project sidebar now calls the dialogues when needed. So let's quickly check the hooks file it implemented. You can see that it's strictly typed, with a Project interface and a dialogue type. There's a to-slug helper that takes in the name and turns it into a human-readable ID, and it even created three mock projects that we can verify with. It came up with a lot of different useState hooks, keeping track of all the projects, the dialogue type, the selected project, the name, the slug, and the loading state, all of which are used within the dialogues themselves.
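Boiled down, the generated hook follows a shape like this; it's a trimmed sketch with approximated names, not the exact generated file:

```ts
// hooks/use-project-dialogs.ts: centralizes dialogue, form, and loading state.
import { useState } from "react";

export interface Project {
  id: string;
  name: string;
  slug: string;
}

export type DialogType = "create" | "rename" | "delete" | null;

// Turn a display name into a URL-friendly slug for the live preview.
export function toSlug(name: string): string {
  return name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/(^-|-$)/g, "");
}

export function useProjectDialogs(initialProjects: Project[] = []) {
  const [projects, setProjects] = useState(initialProjects);
  const [dialogType, setDialogType] = useState<DialogType>(null);
  const [selected, setSelected] = useState<Project | null>(null);
  const [name, setName] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  // Openers seed the form state; close resets the dialogue.
  const openCreate = () => { setName(""); setDialogType("create"); };
  const openRename = (p: Project) => { setSelected(p); setName(p.name); setDialogType("rename"); };
  const openDelete = (p: Project) => { setSelected(p); setDialogType("delete"); };
  const close = () => { setDialogType(null); setSelected(null); };

  return {
    projects, setProjects,
    dialogType, selected,
    name, setName, slug: toSlug(name),
    isLoading, setIsLoading,
    openCreate, openRename, openDelete, close,
  };
}
```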
Then it returns all of that so we can actually use it. Let's check where the dialogues are used. Mostly it's within project dialogues, where we have the actual code for how each dialogue looks: if the dialogue type is create, we show this one; if it's rename, we show this one; and if it's delete, we show the one below. Before we test it out, let's check the progress tracker, which says the current phase is feature 4, the dialogues, along with its goal.
And it has indeed completed the project dialogues with all of these different components. So now, if you come back to the editor, it looks much better. It's no longer just a blank screen; in the middle it says "Create a project or open up an existing one. Start a new architecture workspace or choose a project from the sidebar." And if you click create project, it actually opens up a new project dialogue where you can type something like "my project", and it automatically creates a slug at the bottom as well. And if you click create, you can see the creating loading state.
Of course, the data is currently static, but you can see how it's going to look once it actually pulls the data from the database. So we have Ghost AI Core, which you can select to edit its name as well as to delete it. This means the UI for all of the dialogue functionality has now been implemented, alongside this centerpiece of the application. And now that the majority of the UI is done, in the next lesson we can start focusing on implementing real data with a real database to make our app come to life.
But before we dive into the database, let's make sure our current code is good. I want to show you another CodeRabbit feature that lets you review your code directly within VS Code, which means you don't even have to create a PR and you can move that much faster. You can install the CodeRabbit extension and authenticate to your account; it'll notice the changes you have right now. So you can…
Transcript truncated.