A Practical Guide to Enhancing Applications with AI
This chapter argues for small, focused AI features in a regular Laravel app rather than a giant chatbot, showing how to sprinkle useful AI across a few practical use cases to improve user and developer experience.
Summary
The video demonstrates a no-fuss approach to adding AI in Laravel: use targeted features that improve real-world product interaction rather than building a flashy AI showroom. Kristoff walks through turning a normal Laravel app into an AI-assisted support channel with three concrete use cases. He highlights the Laravel AI SDK, an OpenAI model, and the importance of clean, non-hyped AI outputs. The walkthrough covers recording customer audio, transcribing it on the fly, and displaying the transcription in the admin UI. He also shows drafting replies with AI, first using a simple agent and then scaling to a knowledge-base-driven approach. When the volume of data grows, he introduces a vector search tool with embeddings to power the knowledge base, ensuring accurate and safe responses. Throughout, the emphasis is on accurate, helpful responses and avoiding invented behavior. The session wraps with a reminder: these are small, practical AI features that enhance normal Laravel applications, not a single massive chatbot.
Key Takeaways
- Install and use the Laravel AI SDK with an OpenAI model and an API key to add AI capabilities to a Laravel app.
- Implement audio transcription for support messages by using the AI package's transcription helper and store the result in the database.
- Generate draft customer replies by building an AI agent, writing a concise instruction prompt, and feeding it the current message and knowledge base information.
- Use a knowledge base with FAQs to guide AI-generated responses, and constrain outputs to avoid fabrications about product behavior or promises.
- Scale knowledge retrieval from simple string matching to vector search with embeddings for larger datasets to improve accuracy.
- Demonstrate practical UI enhancements like showing real-time transcriptions and AI-generated drafts to improve user and agent efficiency.
- Emphasize safety and accuracy by instructing AI to rely only on provided data and to politely escalate when information is uncertain.
Who Is This For?
Laravel developers who want to lightly sprinkle AI into existing apps—especially for support workflows—without building a full-scale AI assistant. Ideal for teams aiming to improve UX with transcription and auto-drafted replies.
Notable Quotes
""The goal is not to show off fancy AI. The goal is to make normal product features a little more useful for your users.""
—Sets the guiding principle for the video’s approach to AI in Laravel.
""All that you need is to make sure you have the Laravel AI SDK installed.""
—Introduces the practical prerequisite for integrating AI features.
""We can draft here a reply, which is already really good with our agent here through the Level AI SDK.""
—Shows the effectiveness of AI-generated drafts in customer responses.
""Vector search is very, very useful.""
—Highlights the shift to embeddings-based retrieval for larger knowledge bases.
""Do not invent product behavior, shipping promises, or discount.""
—Underscores safety constraints when prompting AI.
Questions This Video Answers
- How do I add audio transcription to my Laravel app using the Laravel AI SDK?
- What is vector search and how do embeddings improve a knowledge base in Laravel apps?
- How can I generate draft customer replies with AI without risking fabrications?
- What are practical AI enhancements for Laravel dashboards besides a chatbot?
- How do I implement a knowledge-base-driven AI assistant in Laravel using guards and prompts?
Tags: Laravel AI SDK, OpenAI, audio transcription, support automation, knowledge base, vector search, embeddings, agent pattern, Laravel
Full Transcript
AI in apps doesn't always have to mean a giant assistant. Sometimes the best AI features are focused, small, and simple. We're not building a giant AI chatbot today. Instead, we'll take a regular Laravel application and add a little bit of AI around three simple use cases. The goal is not to show off fancy AI. The goal is to make normal product features a little more useful for your users. We're going to take a look today at this wonderful The Artisan Supply Shop, which has some wonderful products for you, like The Artisan One for commands that deserve a little drama, or the Migration Time Machine, a Queue Worker Lunchbox, or the Rubber Duck Debugger Pro, which listens patiently and judges silently.
So, there are some really good products for you. And yet, today we're trying to sprinkle a little bit of AI here and there. Nothing fancy, but still enough to give you and your users a better user and developer experience. Okay, so how are we going to do this? We have here a support page with some information. And what you can also do is record an audio message for the support team. So, let's do this here. [email protected]. Let's start recording. Okay, I just bought The Artisan One and it's not working as expected. I wish there was a little bit more drama.
What can I do to adjust this? Okay, once the message is received, we're going now to the back end, where we can manage the shop a little bit, and on the support replies, we see now this recorded message. Yeah, I hope you could hear that this is working. We can draft a reply here. But what we don't have yet is a transcription of the audio. So, what can we do about this? We can go to the Blade file. Here it is. I've already prepared a paragraph here for the transcription of the audio.
And yes, so we have a little bubble here where we could show the transcription, but we don't show it yet. So, what can we do? We can just write it ourselves. We can use some external tools to generate it and then paste it in here. But wouldn't it be cooler if we just created it on the fly when we get a new support request, a new audio file support request? Okay, and of course, yes, we can do this. All that you need is to make sure you have the Laravel AI SDK installed. I think I already got it here.
Yeah, it's just being upgraded here, but I already have it inside my application. This is our AI tool set with a bunch of very cool tools. We have a couple of videos here on the channel, but this one is about some small features that can help your UI and your user experience. Okay, so we have it in there. By default, it's using the OpenAI model. I have already added an OpenAI key to my environment file. So, if you haven't done this yet, you should add it now. Of course, you can use other AI models as well for this.
And then, how are we going to do this? Let's go to the support message controller. So, we are validating the request, the audio message. We are storing the audio message, and then we create a new support message here. And here we already had a field for the transcription, and now we're just using the transcription helper of the Laravel AI package. We use fromPath, and from the audio data, which is an array, we just get the path. Then we're going to generate the transcription, and we get an object back, and from that we use the text, which we're storing inside the database.
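Since the controller change is only described verbally, here is a minimal sketch of what it might look like. The `Audio::fromPath(...)->transcribe()` call, the namespace, and the field names are assumptions approximating what's shown on screen, not the SDK's confirmed API; check the Laravel AI SDK docs for the exact helper.

```php
<?php

namespace App\Http\Controllers;

use App\Models\SupportMessage;
use Illuminate\Http\Request;
// Hypothetical import: the Laravel AI SDK's transcription helper.
// The real class/facade name may differ in your SDK version.
use Laravel\Ai\Files\Audio;

class SupportMessageController
{
    public function store(Request $request)
    {
        // Validate the incoming audio support message.
        $validated = $request->validate([
            'email' => ['required', 'email'],
            'audio' => ['required', 'file'],
        ]);

        // Store the uploaded audio file first.
        $path = $request->file('audio')->store('support-audio');

        // Generate the transcription on the fly from the stored file.
        // The response object exposes the transcribed text.
        $transcription = Audio::fromPath(storage_path('app/'.$path))->transcribe();

        // Persist the text alongside the message so the admin UI can show it.
        SupportMessage::create([
            'email'         => $validated['email'],
            'audio_path'    => $path,
            'transcription' => $transcription->text,
        ]);

        return back()->with('status', 'Message received');
    }
}
```

In the Blade view, the stored text can then simply be echoed, e.g. `{{ $message->transcription }}`, in the bubble the video prepared.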
Okay, and then inside our Blade file, what we're going to do here is, instead of this, we're going to just get the transcription of the message, which should be this. Okay, so far so good. We don't see anything here yet for the given one, but if we create a new one, let's try this here quickly together. Hey, I just ordered the USB Rubber Duck Pro and I'm not feeling like it's really the pro version. So, am I doing something wrong, or what can we do about this? Sending voice message.
Okay, this is looking good. Let's go back here to our dashboard. We should see the new message here, and yeah, we already have the transcription. So, it's not rocket science, but it's very useful to have a transcription if you're working with audio, because maybe someone doesn't want to hear it or can't listen to it right now. You just want to read. You maybe want to copy something. It's always good to have a transcription in place, and with the Laravel AI SDK, it's really easy to add it to your Laravel applications. Okay, now let's reset this demo application here.
So, I'm getting rid of all the entries. Then, let's go here to the support page again and let's try this again. Kristoff, [email protected]. Hey dear support team, I just got my Artisan Wand and I'm really wondering why it's not really running real commands. I've tried my best, but it's not working yet. Please help me out here. Okay, so I just sent a new support request. Let's go back in here. Let's log in. Okay, and we have this new support message here. We already have our transcription, which is nice. And now we can draft a reply about what we want to send, store it for later, and then send it back to the user.
But wouldn't it also be cool if AI could help us with responding? Because what we also have in this application is a knowledge base, where we have questions like "does the Artisan One run real commands?" and then we have the replies to those questions. And wouldn't it be nice if we could feed this to an AI agent, to a model, and have it help us create a new draft message for this support message? And that's exactly what we're now trying to do. Okay, I'm here in the support replies Blade file, and I have a button here already prepared.
Let's take a look at the generate draft. Okay, beautiful, but if we click it, nothing is happening yet. So, we still have to do a little bit of work ourselves. We also have a generate support reply controller already in place, which is connected to the button, to the form, and this is empty now. So, let's think about what we want to do here. First, we want to generate a reply message. That's what we're going to use the Laravel AI SDK for, and then we need to update the support message to include a draft reply, which is a field in the database of the support messages.
Okay, so how can we do this? In order to talk to an AI model with the Laravel AI SDK, we need an agent. So, let's create one: php artisan make:agent. We're going to give it a name, support reply agent. We don't need to generate structured output, because we are asking the agent a question and we get just text back. Okay, under AI agents, I have the support reply agent, which was just created by us. And it's pretty empty so far. So, again, think of an agent as a PHP class inside a Laravel application, which is kind of an AI conversation with an AI model for some very specific use cases.
And in our case, what we want to do is talk about the support message that we received, and we want to create a reply from our knowledge base. And the way we can do this is we can use this nice syntax here. What is it called? A heredoc? Something like this. And here we're closing it. All right. And what this allows us to do is write a multi-line string, where we can also embed some variables. Okay, so what we're going to do is I'm going to paste something in here.
Okay. And I've provided here some good instructions: you write concise customer support email drafts for The Artisan Supply Shop. Use only the supplied customer message transcript and the knowledge base below, which we're going to provide in just a bit. And if the knowledge base does not answer the question, say that the team will check and follow up. So, we have to be very precise here, because, yeah, I think we also have it here somewhere. Yeah: do not invent product behavior, shipping promises, or discounts. Because let's be fair, this is what's happening if you use an AI model, because those AI models are trained to make you happy, to make the users happy.
And if they are grasping for a response, they're going to make something up. This just can happen, so that's why we have to be very specific about what we want. Okay. And then, what I also want to attach here is the knowledge base information from the database. This works for this simple example, because it's not much. If it were much, then we would do something else, which I'm going to show you later. But for now wait, I have a new method prepared here. Okay, here we go.
This method, knowledgeBaseInstructions, just gets all the FAQs, which is what the model is called. We get the questions and the answers, and then we're going to map this into a string where we have the question and the answer, which is very easy to read for those AI models. Okay, then I'm going to load this in the instructions: knowledge base instructions, and then we can just provide this here, like this. Okay. And again, these instructions are things that, every time you start a new conversation, are going to be provided to the AI model, to the agent you would say.
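Putting the pieces from the video together, the agent class might look roughly like this. The base class is omitted and the method names (`instructions`, `knowledgeBaseInstructions`) and the `Faq` model are assumptions based on what's said aloud; the heredoc with embedded variables is the syntax the video points at.

```php
<?php

namespace App\Ai\Agents;

use App\Models\Faq;

// Sketch of the class generated by `php artisan make:agent`.
// The real agent extends an SDK base class; adjust to your version.
class SupportReplyAgent
{
    /**
     * System instructions provided at the start of every conversation.
     */
    public function instructions(): string
    {
        $knowledgeBase = $this->knowledgeBaseInstructions();

        // A heredoc lets us write a multi-line string with embedded variables.
        return <<<INSTRUCTIONS
        You write concise customer support email drafts for The Artisan Supply Shop.
        Use only the supplied customer message transcript and the knowledge base below.
        If the knowledge base does not answer the question, say that the team will
        check and follow up.
        Do not invent product behavior, shipping promises, or discounts.

        Knowledge base:
        {$knowledgeBase}
        INSTRUCTIONS;
    }

    /**
     * Flatten all FAQs into question/answer pairs that are easy
     * for the model to read.
     */
    protected function knowledgeBaseInstructions(): string
    {
        return Faq::all()
            ->map(fn (Faq $faq) => "Q: {$faq->question}\nA: {$faq->answer}")
            ->implode("\n\n");
    }
}
```

Note the trade-off the video mentions: these instructions are re-sent with every conversation, so this inline approach only makes sense while the FAQ table stays small.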
So, you have to be careful here, because if it's too much, then this can be pretty bad for the behavior of the models. So, try to make this as small as possible. But we're going to take a look at a better, or a different, approach later as well. Okay, so we have the agent, we have some instructions, and we don't need any messages or tools here. And this means that back inside our controller, we can now create a response by making use of our support reply agent. We're going to create a new one, and then we are going to provide a prompt, which is a string where we're going to explain what we want to do.
And for this, I've also prepared a little prompt here, similar to before. We're getting the support message: the text, or, if the transcription is given, we use the transcription. Then we're providing some information about the customer: the name, the email, everything that we already have, and then we provide the message. And this I'm going to add in here: prompt for, and then we're going to provide the support message. Okay. So, again, we're creating a new support reply agent here. Inside the agent, we have already defined some instructions, which are some information that will always be loaded with this agent.
But now we're asking the agent a question, or rather, we have our prompt here, and we provide it through this method, where we're just providing some information about the user. We already told the agent how we want it to reply, but the agent needs this information about the current message, this current support request, and that's what we're providing here. Okay. And then, after this, we need our support message to be updated. And for the draft reply, we're going to get the text from the response, which should be the draft message to this user.
Okay. And then, we're going to return back here, providing a status, and the status is "generated draft message". I think that's fine for now. Okay. Again: we're clicking the button here. We are getting the support message. We are creating a new agent. We are prompting with our request. We get a response back. We're updating the support message with the text. I think this should be it. Let's give it a try. So, we are clicking this button here. Oh, we can see something is spinning here. So, that's what I already prepared inside our Blade file, and we got a response back.
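The controller steps just recapped could be sketched like this. The `prompt()` call, the `promptFor`-style prompt building, and the column names are assumptions standing in for however your SDK version sends a message to an agent; only the overall flow (prompt, respond, store draft, redirect with status) comes from the video.

```php
<?php

namespace App\Http\Controllers;

use App\Ai\Agents\SupportReplyAgent;
use App\Models\SupportMessage;

// Sketch of the generate support reply controller wired to the
// "generate draft" button. Method names on the agent are hypothetical.
class GenerateSupportReplyController
{
    public function __invoke(SupportMessage $supportMessage)
    {
        // Per-message prompt: customer details plus the transcription
        // (the agent's instructions already define how to reply).
        $prompt = <<<PROMPT
        Draft a reply to this customer support request.
        Customer: {$supportMessage->name} <{$supportMessage->email}>
        Message: {$supportMessage->transcription}
        PROMPT;

        // Ask the agent and keep the generated text as a draft only,
        // so a human reviews it before anything is sent to the user.
        $response = (new SupportReplyAgent())->prompt($prompt);

        $supportMessage->update(['draft_reply' => $response->text]);

        return back()->with('status', 'Generated draft message');
    }
}
```

Storing the output in `draft_reply` rather than sending it directly is the key design choice here: the AI proposes, the support agent decides.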
Hi Chris. So, thanks for reaching out. The Artisan One is a decorative desk prop, so it does not run real commands, automate deployments, or connect to an API. It's designed as a novelty item rather than a functional tool. If there's anything else you'd like to check about it, feel free to reply. The Artisan support team. So, yeah, it's a bummer that The Artisan One is not working as we would want it to. But, yeah, it's pretty cool that we could draft the reply here, which is already really good, with our agent through the Laravel AI SDK.
And now we only need to send it back to the user if we are satisfied with this auto-generated reply. And as I said before, this does work with our current solution, because if we go to the knowledge base, you can see we only have, I don't know, 10 different FAQs here, so that's not that much. We can provide this in the instructions to the AI agent, which we are doing right here. We're loading all of those inside here. But there's also a good chance that your knowledge base, or wherever you're getting this information from in the database, is way bigger than just those 10 entries.
You maybe have a couple of hundred, and then this solution is not working out anymore. So, for this, I have already prepared something else which I'm going to show you. Let me just update this. Inside the tools, I have a new knowledge base tool. And this tool now provides the AI model access to our database through vector search, which is way more efficient than just looking for specific strings, which we did before. So, now we're using vector search here in order to get the correct results, and for this we also need embeddings, which I have also already prepared.
We already have other videos about this as well, but you need to create those embeddings, which you're storing inside your database, and which, again, work way better than just searching for strings. So, for this, vector search is much better. Okay, so we have already provided these tools, so now the AI agent knows that it has these tools and it can use them. And we also need to make sure that we tell it in the instructions that it should only use the knowledge base which it gets from the knowledge base tool.
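The knowledge base tool itself isn't shown line by line, so here is a hedged sketch of the idea: embed the incoming question, rank stored FAQ embeddings by cosine similarity, and hand back only the best matches. The `Embeddings::for(...)` helper, the tool's shape, and the `embedding` array column are all hypothetical; a real setup would typically let a vector-capable database index do the ranking instead of PHP.

```php
<?php

namespace App\Ai\Tools;

use App\Models\Faq;
// Hypothetical embedding helper; swap in whatever your SDK provides.
use Laravel\Ai\Embeddings;

// A tool the agent can call instead of receiving every FAQ up front:
// it returns only the FAQs semantically closest to the question.
class KnowledgeBaseTool
{
    public function __invoke(string $question): string
    {
        // Embed the question with the same model used for the stored FAQs.
        $vector = Embeddings::for($question);

        // Rank FAQs by cosine similarity and keep the top matches.
        return Faq::all()
            ->sortByDesc(fn (Faq $faq) => $this->cosine($vector, $faq->embedding))
            ->take(3)
            ->map(fn (Faq $faq) => "Q: {$faq->question}\nA: {$faq->answer}")
            ->implode("\n\n");
    }

    /**
     * Cosine similarity between two equal-length vectors:
     * dot(a, b) / (|a| * |b|).
     */
    protected function cosine(array $a, array $b): float
    {
        $dot = $normA = $normB = 0.0;

        foreach ($a as $i => $value) {
            $dot   += $value * $b[$i];
            $normA += $value ** 2;
            $normB += $b[$i] ** 2;
        }

        return $dot / (sqrt($normA) * sqrt($normB));
    }
}
```

This is why the retrieval scales: the instructions stay tiny, and only a handful of relevant Q/A pairs reach the model per request, however large the FAQ table grows.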
I think this should already do the trick. Let's give it a try. We're going back to the shop. New support message: Kristoff, [email protected]. By the way, let's check again: can I return the migration machine? So, this is something that we have inside our knowledge base. It is something that I can ask. Hey, I just got the Migration Time Machine and I want to return it. Is this possible? Okay, back to the support replies. This is a new one. Here: just got the Migration Time Machine, I want to return it, is this possible?
Let's generate another draft here through our agent. And now it uses vector search, and it should still work. Hi Kristoff, yes, you can return the Migration Time Machine, but only before you bought it. If you need help with anything else, just let us know. The Artisan support team. And again, I think this is exactly what we have here: yes, but only before you bought it, which is pretty cool, because now the AI is not making anything up. It only uses information that we already have in the database. And through vector search, it's very easy for the AI to find the right things, even if the questions are not as clear as they could be, which is always the case with customer support messages.
So, that's why vector search is very, very useful here. And that's it: practical ways to bring AI into a normal Laravel application, all supported by our lovely Laravel AI SDK. Not as one giant chatbot, but as small features that help users with things that they already want. If you want more Laravel AI examples, let me know in the comments. Thank you for watching, and see you next time. Bye.