So You Want to Build an AI Agent?

Laracasts | 00:09:32 | Apr 17, 2026
Chapters: 7
Demonstrates starting with a fresh Laravel install and planning to prompt AI via an artisan command.

Build a basic AI agent in Laravel by making a CLI command that calls the OpenAI API via HTTP with a prompt, then iterate with Laravel Prompts for dynamic input.

Summary

Laracasts' episode walks through a hands-on approach to creating a simple AI agent in a fresh Laravel install. The host demonstrates wiring up an artisan command (php artisan make:command) to query OpenAI via HTTP, and shows how to store the API key in config/services for easy access. He uses the OpenAI Responses API and even name-drops a speculative future model, GPT-5.4 Nano, while explaining how system instructions and the input payload shape the conversation. The video emphasizes that the output is a multi-message structure (which could include tool calls) and shows how to extract the actual text response from the nested JSON.

With Laravel Prompts, he makes the prompt input dynamic, pulling user input from the CLI and displaying a spinner while waiting for the API response. Small gotchas are noted, like the first output item sometimes being a tool call, so you shouldn't depend on always grabbing content from output[0]. The result is a clear, practical approach to building an AI-driven CLI workflow that's easy to extend later with more agent-like features. Finally, he riffs on small UX niceties, such as importing the Laravel Prompts helper functions and presenting a live prompt example ("What is on your mind?"), then testing with a query like "In one sentence, how long until AI completely captures my programming job?" followed by a quick "What is 2 + 2?" demo that returns a concise "four". The episode promises to dive deeper in future installments while keeping the current example approachable for Laravel developers.

Key Takeaways

  • Create a CLI command in Laravel (php artisan make:command) to encapsulate an OpenAI API call and return the JSON response.
  • Store the API key under config/services (e.g., services.openai.key) and pass it to OpenAI via Laravel's HTTP client.
  • Use the OpenAI Responses API with a model like GPT-5.4 Nano, and include system instructions and user messages to shape the reply.
  • For multi-message outputs, treat output[0] as the default text source, but remember it could be a tool or function call instead.
  • Leverage Laravel Prompts to collect dynamic user input from the CLI, including labels, placeholders, and required flags.
  • Add spinner feedback while awaiting API responses to improve perceived responsiveness during long requests.
  • Illustrate a practical prompt flow by prompting the user with "What is on your mind?" and handling the response in the CLI.
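Putting the first takeaways together, the command scaffold might look roughly like this. This is a sketch: the class name ChatCommand and the description string are assumptions, not necessarily the episode's exact code.

```php
<?php

// app/Console/Commands/ChatCommand.php
// Generated via: php artisan make:command ChatCommand

namespace App\Console\Commands;

use Illuminate\Console\Command;

class ChatCommand extends Command
{
    // Invoked as: php artisan chat
    protected $signature = 'chat';

    protected $description = 'Receive an AI response';

    public function handle(): void
    {
        // The OpenAI HTTP call and prompt handling go here,
        // as walked through in the transcript below.
    }
}
```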

Who Is This For?

Essential viewing for Laravel developers new to building AI-enabled CLI tools or agents. It shows a concrete path from a fresh Laravel install to a working OpenAI call, with practical tips that reduce setup friction.

Notable Quotes

"All right, here's the game plan. I'm going to take this step by step, but with the understanding that each step is going to move pretty quick, so watch me for the changes and try to keep up, right?"
Sets the pace and expectation for a fast-moving tutorial.
"you just do it via an HTTP request and then you include the API token that you would generate on uh the respective sites."
Explains the core interaction pattern with the AI API.
"the model, why don't we use GPT uh 5.4 Nano. That's fairly advanced, but also very cheap to use."
Speculates about a future, cost-efficient model for the setup.
"The main thing for now is output. Now, output is not a simple string and it's not even an array. It's an array of arrays."
Highlights the structure of the API response and a common pitfall.
"What is on your mind and if I click through here you'll see a bunch of things we can provide… the placeholder will be thinking about that."
Demonstrates using Laravel prompts to capture dynamic input from the CLI.

Questions This Video Answers

  • How do I set up an OpenAI API call in Laravel using the HTTP client?
  • What is the OpenAI Responses API and how does its payload differ from older endpoints?
  • How can I implement a spinner in a Laravel CLI prompt while waiting for an API response?
  • What is GPT-5.4 Nano and should I use it for production AI prompts in Laravel?
  • How do I extract text from a nested OpenAI JSON response that contains tool calls?
Tags: Laravel, Laravel Prompts, OpenAI API, HTTP Client, CLI Tooling, Artisan Commands, OpenAI Responses API, GPT-5.4 Nano, system prompts, multi-message payloads
Full Transcript
All right, here's the game plan. I'm going to take this step by step, but with the understanding that each step is going to move pretty quick, so watch me for the changes and try to keep up, right? Okay, so here I have a fresh install of Laravel. I haven't done anything else. So before we even talk about agents, let's first figure out, well, how do we prompt AI and receive a response, right? Let's make sure we're all on the same page and then we'll continue. All right, why don't we do that via an artisan command like this? php artisan, make me a command. I'm going to call it, how about, um, chat command or something like that. All right, let's open this up. Why don't we give it a signature of chat? Um, receive an AI response. All right, so yeah, how do we interact with something like OpenAI or Claude? Well, you just do it via an HTTP request, and then you include the API token that you would generate on, uh, the respective sites. So, I already have an AI key, or an OpenAI key. Here's what I can do. I'll go to my environment file. At the bottom, I will say OpenAI API key. And behind the scenes, I'm going to paste in my token so you don't steal it. Next, in the sidebar, let's go into config. Services is a fine place to put that. Uh, and at the very bottom, yeah, we could do something like AI or OpenAI. And let's add our key here. Looks great. So now, anywhere, we can access it via services.openai.key. And in fact, I'm just going to copy that because I'm going to use it in just a couple seconds. Cool. So, let's return to our command, and we'll use Laravel's HTTP facade to include our token. And I'll paste that in here. And then we're going to make a post request. So, a post request to where? Well, just check the documentation, right? Or have AI do it for you, which is what you're going to do anyways. Uh, anyways, here is the URL. We're going to use the OpenAI Responses API, which is relatively new. It might be a little different from how you did it a couple years ago.
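A sketch of the environment and config wiring described here; the env variable name is an assumption (the host pastes his actual key off-screen):

```php
// .env (token redacted):
// OPENAI_API_KEY=sk-...

// config/services.php: add an entry at the bottom so the key
// is reachable anywhere via config('services.openai.key')
return [
    // ...existing services...

    'openai' => [
        'key' => env('OPENAI_API_KEY'),
    ],
];
```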
Anyways, as part of the parameters, of course, we need to say, well, what is the input? What is the prompt, right? What model are we going to use? Right, stuff like that. So, this is 2026 as I record this. For the model, why don't we use GPT, uh, 5.4 Nano. That's fairly advanced, but also very cheap to use. All right. Next, the system instructions. Yeah, you've seen this a hundred times, right? "You are a helpful assistant" is the most generic one. But yeah, you can imagine if you're building an agent for, like, helping first graders or something, the system instructions could specify that all of its responses should be optimized, uh, so that a first grader can clearly understand it. An example should be provided that would benefit a first grader. Stuff like that. All right. Finally, our input. What is the prompt, right? Uh, now our input could be an array of arrays, right? It's not just a single "how are you today?" It could have multiple messages. It could have responses. It could have tool calls and stuff like that. For now though, I'm going to say we have a single message. Uh, and the role is user. Role is, what role are we playing here? Is it the user? Is it the assistant providing a response? Is it a function call? Uh, we're going to be explicit about that. And then finally, the content is going to be something hardcoded for now, like "how are you today?" All right. Finally, if there are any, um, exceptions or something, I just want it to throw; I don't want to swallow those. And then I want to get the JSON response. Save it here. And then at the bottom, we will dump the response, uh, to the console. All right. So, are we on the same page? We created a new artisan command. When we run the command, we will use the HTTP facade to make a request to the OpenAI Responses API. We're going to send through a simple prompt: how are you today? That will receive a big, weighty, uh, JSON response that we dump to the console. Let's go. PHP artisan chat. And there we go. Here's our response.
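The request assembled in this section, sketched as Laravel code. The payload shape and model name follow the video; verify the field names against the current OpenAI Responses API documentation before relying on them.

```php
use Illuminate\Support\Facades\Http;

$response = Http::withToken(config('services.openai.key'))
    ->post('https://api.openai.com/v1/responses', [
        'model' => 'gpt-5.4-nano', // the (speculative) model used in the episode
        'instructions' => 'You are a helpful assistant.', // system instructions
        'input' => [
            [
                'role' => 'user',                  // who is speaking: user, assistant, etc.
                'content' => 'How are you today?', // hardcoded for now
            ],
        ],
    ])
    ->throw()  // surface HTTP errors instead of swallowing them
    ->json();  // decode the JSON body into an array

dump($response);
```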
Okay, a bunch of stuff here. Don't be overwhelmed. The main thing for now is output. Now, output is not a simple string, and it's not even an array. It's an array of arrays. So in this case we can see, all right, the first item in the output is our message, and then the content itself is an array of arrays, uh, that includes the text here. Now the reason for this, and we're going to talk more about this in the future, is because the output, again, could be multiple things. The output could be, hey, I want to call this tool, and then I want to call that tool, and then I'm going to provide a response, right? So it could be a collection of outputs. For now, we're going to be a little bit naive and just assume the first item is always the text response, when it actually won't be. But it's fine for now. Okay. So, if I wanted to grab "I'm doing well, thanks," I would go output, first item, content, first item, text. Let's do that now. Response, um, output, first item, content, first item, and then, uh, text. Yeah, just remember you can't depend upon this, uh, because sometimes the first item might be a tool call or a function call. Let's give it another shot. PHP artisan chat, and now we get it. Cool. So we could pass this to info. And we have our initial interaction with AI. And what I want you to notice is it's just pretty simple if you think about it. It's a simple HTTP request: include your token, make a post request to the proper endpoint, include the parameters that you'll get straight from the documentation, and, uh, you're all set to go here. So, of course, the next step would be, let's make it dynamic, right? So, why don't we introduce a variable, and we'll call it prompt. And here's what we're going to do. I'm a big fan of a first-party Laravel package called Laravel Prompts. Um, it's for working on the CLI, and it includes lots of, um, helpers and elegant little tools and such. So let's pull that in. Composer require Laravel Prompts. And now we can do things like this.
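The naive extraction described here, plus a slightly safer variant that skips non-message output items. The 'type' => 'message' check is an assumption about the payload shape, not something confirmed in the episode.

```php
// Naive: assume the first output item is the text message,
// as done in the video for now.
$text = $response['output'][0]['content'][0]['text'];

// Safer: the output array can also contain tool/function calls,
// so scan for the first item that is actually a message.
$text = null;
foreach ($response['output'] as $item) {
    if (($item['type'] ?? null) === 'message') {
        $text = $item['content'][0]['text'];
        break;
    }
}
```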
I could say text, give me a text prompt. And I want you to notice how we import that at the top. So you'll see we use the function Laravel\Prompts\text. So I can say, um, what is on your mind? And if I click through here, you'll see a bunch of things we can provide: uh, the label, the placeholder, the default, whether it's required or not. In this case it is required; I need a prompt in order to make this request. So I will use named parameters to be explicit, uh, that required is true in this case. Okay, so now we're making this dynamic, right? We get a prompt, and then we include that prompt with our request. Let's go. PHP artisan chat. Um, what is on your mind? Um, in one sentence, how long until AI completely captures my programming job? Be honest. Give it a second. No one can give an exact timeline, but in a best-case scenario, routine parts of programming will be largely automated within the next 5 to 10 years. I think it maybe is sooner than that, but we'll see. Uh, nonetheless, this is pretty cool, right? And it just doesn't require much effort at all. Now, two things I want to show you, and then we're done with episode one. First of all, of course, maybe we extract this into a method, something like run model. All right, so now this encapsulates the, uh, HTTP request. We receive the response. Um, what's the issue here? Unhandled. Yeah, that's fine. We're going to ignore that. Uh, next, because we're using Laravel Prompts, we can provide a little more feedback when the AI is, or when the HTTP request is, um, waiting for the response. We can use a spin function for that, which is cool. This is one of my favorites. So, we give it a closure, and then we give it a placeholder message, effectively. So, I could say right here, this is what we're running; the, uh, placeholder will be "thinking about that," and then that will return the result of this, uh, function call here. All right, exact same thing. So we prompt the user, we display a spinner while we pass that prompt, uh, to our model call.
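The dynamic prompt and spinner from this section, sketched below. runModel is the extracted helper the host mentions; its exact signature is an assumption.

```php
use function Laravel\Prompts\text;
use function Laravel\Prompts\spin;

// Collect the prompt dynamically; named parameters make
// the required flag explicit.
$prompt = text(
    label: 'What is on your mind?',
    required: true,
);

// Show a spinner while the HTTP request is in flight.
$response = spin(
    fn () => $this->runModel($prompt),
    'Thinking about that...'
);

// Naively echo the first output item's text, as in the episode.
$this->info($response['output'][0]['content'][0]['text']);
```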
Once it's done, we have response and then we echo the text content. So, one more time. I'm sorry, PHP artisan chat. What is 2 + 2? Thinking about that, we get four. And we're done with episode one. Let's keep going.
