Laravel AI SDK Full Review: Agents, Images, Audio, Tools & More
Taylor Otwell’s Laravel AI SDK, demoed here by Nuno Maduro, makes building AI-powered agents with images, audio, attachments, and web search effortless and configurable in Laravel.
Summary
Nuno Maduro walks through the Laravel AI SDK and shows how quickly you can spin up an AI-powered agent inside a Laravel app. He starts by installing the package with Composer and enabling the AI service provider, then creates a personal assistant agent with a few keystrokes: php artisan make:agent PersonalAssistant. The demo highlights that AI-related work lives under an AI folder and demonstrates basic prompting, including setting a user name and preserving conversations with a built-in remember-conversation feature. He tests JSON output by specifying a schema, showcases sending image attachments for analysis, and uses the image tool to generate a new image (e.g., a Laravel developer in a Lambo) and store it in storage. The talk then explores streaming responses in the UI and reveals built-in tools like web search, with domain-scoped queries (laravel.com). He explains model selection options (use cheapest model, use smartest model) and notes the default (GPT-5.2), plus advanced capabilities like image generation, audio generation (with MP3 storage), transcription, and translation, including Portuguese output. A key point is the platform's failover: if OpenAI is down, Anthropic can act as a fallback. Throughout, Nuno emphasizes practical testing strategies (fake the agent during HTTP-layer tests) and ends with a strong personal verdict on the SDK's potential. If you're building AI-enabled features in Laravel, this session is a hands-on blueprint for moving fast with real results.
Key Takeaways
- Run composer require laravel/ai and publish the Laravel AI service provider to enable AI features in a Laravel app; the install automatically wires up the service provider.
- Create a personal assistant agent with php artisan make:agent PersonalAssistant, then configure it under the app's AI folder for quick, project-scoped access.
- Use the remember-conversation trait to persist dialogue with an agent, tying user IDs and conversation IDs together for truly ongoing chats.
- Specify output formats with a JSON schema so the SDK returns strict, structured data (define a schema and implement the interface to get clean JSON outputs).
- Leverage built-in tools like web search to constrain internet lookups (e.g., querying laravel.com) and optimize API usage by choosing between cheapest or smartest models by default or on-demand.
- Generate media assets directly: create images (e.g., a Laravel developer driving a Lambo) and store them in storage, and generate audio with MP3 storage and transcription features for seamless multimodal interactions.
- Enable robust failover by routing AI calls through alternative providers (e.g., Anthropic) if OpenAI is unavailable, ensuring higher reliability for AI-powered apps.
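Pulling the takeaways above together, the install-and-prompt flow might look roughly like this. The SDK's API is only glimpsed on screen, so the base class, the instructions() method, and the prompt() call are assumptions, not confirmed API:

```php
<?php

// Shell steps from the demo:
//   composer require laravel/ai
//   php artisan make:agent PersonalAssistant

namespace App\Ai\Agents;

use Laravel\Ai\Agent; // base class and namespace assumed

class PersonalAssistant extends Agent
{
    // System instructions, as typed in the demo.
    public function instructions(): string
    {
        return 'You are Nuno Maduro. Be polite and concise.';
    }
}

// Usage (e.g. in an Artisan command or Tinker):
// $response = (new PersonalAssistant)->prompt('What is my name?');
```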
Who Is This For?
Essential viewing for Laravel developers who want to drop a ready-to-use AI agent into their apps, especially those curious about image/audio generation, streaming responses, and robust failover. Great for teams evaluating end-to-end AI capabilities inside Laravel with real-world examples.
Notable Quotes
"We have attachments as well. We can generate images. Oh my god. It's done, chat."
—Shows how easy it is to handle attachments and image generation right from the SDK.
"This works stupidly simple. That was three commands. Prompt done."
—Demonstrates how quickly you can scaffold an agent and set a prompt.
"If OpenAI is down, it will rely on Anthropic after that."
—Highlights the built-in failover capability to keep AI functionality resilient.
"We just type composer require laravel/ai. That just worked."
—Mentions the smooth dependency install and auto-wiring of the provider.
"This is how streaming works. By the way, on the UI, you just return this. This is how easy it is."
—Demonstrates real-time streaming of AI responses to the frontend.
Questions This Video Answers
- How do I create and persist a Laravel AI agent using the Laravel AI SDK?
- What are the best strategies for JSON structured output with Laravel AI SDK?
- Can the Laravel AI SDK generate and store images and audio files automatically?
- How does the Laravel AI SDK implement failover to other providers if OpenAI is down?
- What is the difference between the cheapest model and the smartest model in Laravel AI SDK?
Tags: Laravel AI SDK, Laravel 10+, Taylor Otwell, Laravel AI agents, image generation, audio generation, JSON output, web search tool, model selection, failover/fallback strategies
Full Transcript
We have attachments as well. We can generate images. Oh my god. It's done, chat. It's done. I'm going to open it. So good. OMG. You guys heard it. Wait, what? I don't know if you guys are ready for what is about to come on this live stream. Laravel AI SDK built by Taylor. Okay, so we just typed composer require laravel/ai. That just worked. Uh, kind of expected. We vendor-published the Laravel AI service provider. Let's just do that. We have the configuration file, the agent stuff. You can customize your stuff and then we have a chat migration.
I want everyone going down and clicking like on this video and subscribing to this channel. Okay, we are going to build our own personal assistant. php artisan make:agent PersonalAssistant. We just type enter and bada boom bada boom we have it here. Okay, just right there. All the AI stuff will be within the AI folder which is very interesting. Okay, very interesting. Let's type here: you are Nuno Maduro. Be polite, concise, blah blah blah blah blah blah blah. So this will be new personal assistant. And let's prompt what is my name? The artisan example. Let's type enter.
Work work. Here we go. Your name is Nuno. This works stupidly simple. That was three commands. Prompt done. Okay. Good job. Good job, Taylor. I want to see how I can preserve a conversation with the agent. So if you want to remember conversations you just have to use this trait, remember conversation. I'm going to import this, which is within the AI concerns namespace. So we have a user and we are going to continue and persist the conversation with the ID first conversation using this given user. Meaning that in theory, and I mean in theory, if I do while true I should be able to literally have a non-stop conversation with the agent.
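The trait-based conversation persistence he describes could be sketched like this; the trait name, its namespace, and the way a user and conversation ID are attached are all guesses reconstructed from the narration:

```php
use App\Models\User;
use Laravel\Ai\Concerns\RemembersConversations; // name and namespace assumed

class PersonalAssistant extends Agent
{
    use RemembersConversations;
}

// Persist a conversation with ID "first-conversation" for a given user,
// so later prompts see the earlier exchange:
// $agent = PersonalAssistant::for(User::first())
//     ->conversation('first-conversation');
// $agent->prompt('I am 35.');
// $agent->prompt('How old am I?'); // the agent recalls "35"
```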
Okay, are you guys ready? Everyone type W ready if you're ready. I am so ready for this example. This will prompt for something. Here we go. We just see a prompt right here. Okay, I'm going to type Okay, so I I just give him some information which is I am 35. How old I am and you are 35. Here we go. Here we go. Chat, we have this beautiful remebrance conversation trait which is 100% a [ __ ] beast man. W Taylor everyone in the chat. If you need JSON output, you ask for something specific and you want the response back to be just JSON.
Again, the only thing you have to do is specify a schema and then you go all the way up and you implement this interface as structured output and that just works. Okay, then you have beautiful JSON on the console really just works. Okay, we have attachments as well. So we need to import image from Laravel AI image and then is from path. Yes, storage path profile full.png. Now, what is the contents of the given image? And the response is the image is a close-up studio portrait of the person facing the camera. Serious expression. And by the way, this is the image.
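The structured-output and attachment steps might look like the sketch below; the interface name, the schema shape, and the Image::fromPath() helper are assumptions pieced together from what he types on stream:

```php
use Laravel\Ai\Image; // "import image from Laravel AI" per the demo

class PersonalAssistant extends Agent implements StructuredOutput // interface name assumed
{
    // A JSON schema constraining the response shape.
    public function schema(): array
    {
        return [
            'name' => ['type' => 'string'],
            'age'  => ['type' => 'integer'],
        ];
    }
}

// Sending an image attachment for analysis (file name illustrative):
// $response = (new PersonalAssistant)->prompt(
//     'What is the contents of the given image?',
//     attachments: [Image::fromPath(storage_path('profile-full.png'))],
// );
```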
Not bad, Chat. Not bad. So, this is how easy it is to send attachments to AI. Boom. This is how streaming works. By the way, on the UI, you just return this. This is how easy it is. Okay? Just type personal assistant in our case. And then you just return the stream back. Bada boom, bada boom. It just works. Okay, what else? Let's go all the way down. I want to see tools. Let's start by using a built-in tool. Okay, do we have web search? Okay, web search is a tool that allows agents to go to the internet and it's supported by OpenAI, meaning that this client or this model we are using supports this stuff.
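Streaming to the UI, as he describes it, is essentially a one-liner from a route; the stream() method name is an assumption:

```php
use Illuminate\Support\Facades\Route;

Route::get('/chat', function () {
    // Returning the streamed response directly lets Laravel flush
    // tokens to the browser as they arrive. Method name assumed.
    return (new PersonalAssistant)->stream('Tell me about Laravel.');
});
```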
So let's just import web search all the way top. We place this beautiful new web search just like that. All right. So I want to kind of scope the web search to a specific set of domains. Let's actually just copy this stuff and do laravel.com to begin with. And in theory, my personal assistant now can go to the laravel.com website. Okay, Chat. Just type bada boom bada boom bada boom. What is the contents of laravel.com/ai? Chat. Oo, what we have here? Here we go. It's able to go to the internet. Laravel.com/ai is a product landing page for Laravel.
This is web search which is a built-in tooling. Okay, really just works. So W tooling on AI SDK really just works. Okay, let's see. Use cheapest model and use smartest model attributes allow you to automatically select the most cost-effective or most capable model for a given provider without specifying a model name. This is useful if you want to optimize. Oh, if you don't specify any of these attributes, the Laravel SDK by default will use GPT-5.2. However, if you specify use cheapest model, the Laravel SDK will rely automatically on GPT-5 Nano. However, if you use smartest model, the Laravel SDK will use GPT-5.2 Pro.
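The web-search tool and the model-selection attributes he reads out could combine like this; the attribute and class names follow the wording in the demo but are not confirmed:

```php
use Laravel\Ai\Attributes\UseCheapestModel; // attribute name as read on stream
use Laravel\Ai\Tools\WebSearch;             // built-in tool shown in the demo

#[UseCheapestModel] // e.g. picks GPT-5 Nano instead of the default GPT-5.2
class PersonalAssistant extends Agent
{
    public function tools(): array
    {
        // Scope internet lookups to laravel.com, as in the demo.
        return [new WebSearch(domains: ['laravel.com'])];
    }
}
```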
Very interesting, man. This is dope. [ __ ] We can generate images. Oh my god. Generate image. Type a function. Bam. Bam. Boom. Okay. Easy peasy. And then you just type image store as. [ __ ] We just have to type image store as. And that will just work. Okay. Laravel developer driving a Lambo. Okay. So that's the image we're about to generate. I'm going to store that image and then I'm going to just type this info. Done. Okay. Okay, we are generating an image of a Laravel developer driving a Lambo and it is processing as you guys can see here.
Okay, it's done chat, it's done. Okay, it's done. So, we now go to storage and then we hopefully see the image. Here we go. Chat, here we go. The image is right here. I just opened it. I kind of want to try something real quick. Okay, I want to try if I can send an attachment. I'm going to ask for an image of a guy on the attachment driving a Lambo. The attachment will be Nuno's picture, which is this one. This beautiful dude. Okay, right here. And then I'm going to ask OpenAI to generate an image with that.
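Image generation plus storage, per the demo, might reduce to something like the sketch below; generateImage(), store(), and the storage path are assumed names, not the confirmed API:

```php
// Generate an image from a prompt and persist it to local storage.
$image = (new PersonalAssistant)->generateImage(
    'A Laravel developer driving a Lambo'
);

// Store under storage/app — the demo then opens the file from disk.
$path = $image->store('images');
```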
We store the image and it is done. Okay, let's go all the way up. Bam, bam, boom. Type enter and see how it goes. Okay, it's done. Chat, it's done. I look like this. Come on. Come on. I look a little bit better than this. No, don't you think? Come on. I look a little bit better than this. Let's go. It is working. We have audio. Oh my god. This is getting [ __ ] ridiculous, man. Honestly, can I store it? Yes, I can store it as MP3. Oh my. So, we ask for the generation of audio.
That was too fast. Almost too fast that I don't believe it. Okay. Private. And I have the audio. Indeed. I'm going to open it. So good. OMG. Did you guys hear it? Wait, what? So good. OMG, this is getting ridiculous at this point. Jesus [ __ ] what the actual [ __ ] What is voice? I don't even know. Like, this is getting ridiculous. All right, let's use this now. So good. OMG. Okay, this is the female version. So good. OMG. Wait, what? Did that work? Well, that is like the female version of that audio. Okay. Uh, so we have female, we have male.
What is the default though? The default is like you just don't specify it. Maybe perhaps. Try the voice method. What do you want me to put on it though? Cuz the voice method, Charlie, is asking for like a string. Wait, what is alloy? So good. OMG. All right, that was the result. So, type here something in Portuguese like nooad. Okay, this is Portuguese. Okay, chat, and uh say it with a Portuguese accent. Let's see. That was not bad. Although that doesn't look like Portuguese. That looks like Portuguese from Brazil. You can also do transcriptions which is literally taking that audio MP3 and it's audio to text basically.
So you can in theory transcribe a given MP3 to actual text as well. That's something you guys can do too. So in theory if I do audio.mp3, so if I do this, so again I'm reading from the given audio and just outputting that information. Okay. Oh yeah, so good. So the audio was still Nuno's, meaning that if I read the audio, here we go, the output of the transcript was just top-notch. That was in Portuguese by the way. This is insane. This is absolutely insane. What do you guys think?
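The audio generation, voice selection, and transcription steps could be sketched as follows; the method names, the voice option, and the sample Portuguese line are all assumptions (the transcript's exact phrase is garbled):

```php
// Text-to-speech: generate audio and store it as an MP3.
$audio = (new PersonalAssistant)->generateAudio(
    'Olá, tudo bem?',  // hypothetical Portuguese line
    voice: 'female',   // the demo switches voices; option name assumed
);
$audio->storeAs('speech.mp3');

// Speech-to-text: transcribe the stored MP3 back into text.
$text = (new PersonalAssistant)->transcribe(
    storage_path('app/speech.mp3')
);
```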
W AI SDK hardcore. What do you think about this failover? Wow. This is mind-blowing. By the way, this is also insanely important. If you are developing an application that relies on AI, you cannot simply stop working when a provider is down. You want a way for AI to fall back to other providers if something is down. So, if OpenAI is down, it will rely on Anthropic after that. All right. Nice. Nice. Don't forget, go all the way down, click like on this video and subscribe to this channel if you want to see this number going up.
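A failover list of providers, as he describes it, might be configured like this; the config file name and key are assumptions about the published configuration:

```php
// config/ai.php (hypothetical shape)
return [
    // Tried in order: if OpenAI is down, calls fall back to Anthropic.
    'providers' => [
        'openai',
        'anthropic',
    ],
];
```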
Testing [ __ ] This is [ __ ] important. Okay, if you have a job or potentially an HTTP layer that uses a personal assistant, an agent, you can continue to test that HTTP layer, but you need to fake that agent so it doesn't actually prompt AI, and then you just assert that it got prompted. So, it really just works. So, this is the Laravel AI SDK. So, I want you guys to give me a classification from zero to 100. What is your ranking of this AI SDK? Mine is 100. I think this is absolutely mind-blowing. Super happy about this. I cannot wait to actually build something with this.
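The testing approach he outlines (fake the agent, exercise the HTTP layer, assert it was prompted) might look like this; fake() and assertPrompted() are assumed names modeled on Laravel's facade fakes, not the confirmed API:

```php
public function test_chat_endpoint_prompts_the_agent(): void
{
    // Swap the real agent for a fake so the test never hits an AI provider.
    PersonalAssistant::fake();

    $this->post('/chat', ['message' => 'Hello'])->assertOk();

    // Verify the HTTP layer handed the message to the agent.
    PersonalAssistant::assertPrompted('Hello');
}
```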
Okay, chat. Love you all. See you tomorrow. Peace out.