Laravel AI SDK: Q&A with Internal Docs (Stores / File Search / Embeddings)

Laravel Daily | 00:08:21 | Feb 18, 2026

Laravel Daily walks through building a document-aware chatbot with Laravel AI SDK, using OpenAI embeddings and a vector store, plus a peek at cost and future PostgreSQL-based storage.

Summary

In this follow-up to the Laravel AI SDK series, the presenter from Laravel Daily demonstrates how to build a chatbot that answers questions from uploaded documents. He uses a Markdown doc from Laravel Boost as the knowledge base, uploads it locally, then pushes it to OpenAI via the SDK's vector store and embeddings support. The demo showcases two core concepts, embeddings generation and vector search, and how they power the Document QA agent through tools like file search. He also walks through the code path: a DocumentController stores the file, dispatches a processing job, and a Livewire component handles the interactive Q&A UI. The agent is configured with a provider (OpenAI's smartest model) and tools that fetch files or search within the vector store. He highlights practical considerations, especially cost: embeddings and API calls add up quickly (roughly $0.50–$0.70 for several interactions), and he previews how local PostgreSQL vector search could lower these costs in a future video. By the end, viewers should understand how to wire up a document-backed chatbot with the Laravel AI SDK, what to budget for, and what a more economical, self-hosted PostgreSQL option will look like.

Key Takeaways

  • Uploading a document locally and then indexing it into a remote vector store (OpenAI) creates a searchable knowledge base for the chatbot.
  • The Document QA agent uses embeddings to turn a user question into a vector, which is then matched against the document vectors for an answer.
  • The Laravel AI SDK exposes tools like file search and vector stores, which can be plugged into an agent to perform tasks during chat.
  • Costs matter: embeddings and model calls (e.g., GPT-5.2 Pro) can add up quickly, often tens of cents per question in the shown setup.
  • The example uses a Markdown document (Laravel Boost docs) and demonstrates a realistic workflow from upload to a human-language answer.
  • Local PostgreSQL vector search is presented as a future alternative to lower costs by keeping more processing on the user’s side.
  • The workflow includes an Eloquent document record, a processing job, and a Livewire UI to simulate a chat-like interaction.
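
The embedding-and-match idea in the takeaways can be sketched in plain PHP. An embedding is just an array of floats, and "matching" usually means ranking stored vectors by cosine similarity to the question vector. The tiny 3-dimensional vectors and the helper name below are purely illustrative (real embedding models return vectors with 1,500+ dimensions) and are not part of the Laravel AI SDK:

```php
<?php

// Cosine similarity between two embedding vectors (arrays of floats).
function cosineSimilarity(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;
    foreach ($a as $i => $v) {
        $dot   += $v * $b[$i];
        $normA += $v * $v;
        $normB += $b[$i] * $b[$i];
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Toy "embeddings": the question and two stored document chunks.
$question = [0.9, 0.1, 0.0];
$chunks = [
    'codec docs'   => [0.8, 0.2, 0.1],
    'install docs' => [0.1, 0.9, 0.3],
];

// Rank stored chunks by similarity to the question vector, best first.
uasort($chunks, fn ($x, $y) => cosineSimilarity($question, $y) <=> cosineSimilarity($question, $x));

$best = array_key_first($chunks); // 'codec docs' ranks first here
```

The best-matching chunk is what gets fed to the model as context, which is essentially what a remote vector store does for you behind the file search tool.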

Who Is This For?

Essential viewing for Laravel developers who want to add document-backed Q&A to their apps, especially those considering OpenAI-powered embeddings or exploring cost considerations. It also serves as a practical walkthrough for anyone curious about Laravel AI SDK’s agent architecture.

Notable Quotes

"So this is kind of step number one. Now we can ask questions. For example, the question: does Boost support codecs? We ask that, it is thinking, and now it is not in the queue; it is actually asking OpenAI."
Shows the end-to-end flow from uploading to querying via OpenAI and how the system processes the prompt.
"Documentation states that codec support is typically enabled automatically."
Represents the kind of factual output the doc is expected to provide and the confidence in the embedding answer.
"You should be prepared to spend quite a lot of money, especially if you're using the smartest model like I'm doing here."
Highlights cost considerations when using embeddings and advanced models.

Questions This Video Answers

  • How do I set up a document-backed chatbot with Laravel AI SDK and OpenAI embeddings?
  • What are vector stores and embeddings, and why are they needed for document QA in Laravel?
  • Can I use PostgreSQL for vector search to reduce OpenAI usage costs in Laravel AI SDK?
  • What is the role of the Document QA agent and how do tools like file search integrate with it?
  • What are the cost implications of running AI-powered Q&A on large knowledge bases?
Topics: Laravel AI SDK, OpenAI embeddings, Vector store, Document QA, File search, Livewire, PostgreSQL vector search, Costs of AI models, Laravel Boost docs, Agents in Laravel AI SDK
Full Transcript
Hello guys, this will be the second video in the series about the newly launched Laravel AI SDK. Yesterday we talked about generating images and YouTube thumbnails, and this comes from my new course with six practical examples of the Laravel AI SDK. Today, in this video, we'll talk about a chatbot with documents: you upload the document and then you ask questions. So I will show you how this form works under the hood with the Laravel AI SDK, and these are other examples if you want the full course. The link for premium Laravel Daily members will be in the description below. Now let's see how to work with the Laravel AI SDK and chat with your uploaded document. It may be a help file, it may be some documentation, it may be an internal knowledge base, something like that. This can be done in a lot of various ways with a lot of various tools, but the topics you need to learn are vector stores and embeddings. For both of them, you may use external providers, or you may use local PostgreSQL. Not MySQL: only PostgreSQL currently supports vector search. And I will show you both in action. Let's first demonstrate how it works and then I'll show you the code. First we will use the OpenAI provider to store the document and then search the document for answers. As the document, I've chosen the official documentation of Laravel Boost, a Markdown file, and then we will ask questions around that document. So the first step is to upload that document, which will upload the file locally and then upload it to the external provider. I think I haven't launched the queue, which I will do now. Now it is running. So 20 seconds was just the preparation, and in 4 seconds it is done. So now the document is uploaded as a file locally, but then, with embeddings and the vector store, it is uploaded to OpenAI. So this is the file in our local MySQL database, with a file path to local storage.
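
The upload step narrated above (store the file locally, create a record, push processing onto the queue) could look roughly like the controller below. The class, route, and column names are guesses for illustration, not the actual course code:

```php
<?php

use App\Jobs\ProcessDocument;
use App\Models\Document;
use Illuminate\Http\Request;

class DocumentController extends Controller
{
    public function store(Request $request)
    {
        // Save the upload to local storage first.
        $path = $request->file('document')->store('documents');

        // Eloquent record; store_id / file_id stay null until the job runs.
        $document = Document::create(['file_path' => $path]);

        // Embedding and indexing happen asynchronously on the queue,
        // which is why the demo needed a running queue worker.
        ProcessDocument::dispatch($document);

        return redirect()->route('documents.show', $document);
    }
}
```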
But then we have two things, store ID and file ID, and those are the identifiers of the vector store and the file in OpenAI's remote store. So this is kind of step number one. Now we can ask questions. For example, the question: does Boost support codecs? We ask that, it is thinking, and now it is not in the queue; it is actually asking OpenAI. So it transforms that question into so-called embeddings (I will explain that in a minute, after showing the code), and then it transforms the answer into human language, which, as you can see, takes a lot of time, like 10 seconds or so. But the answer is correct: the documentation states that codec support is typically enabled automatically. We have the chatbot. Now let me show you the code. First we have the DocumentController to, well, store the document, and after we store that in local storage and add it as an Eloquent record, we dispatch the job to process that document. What does that process do? Here we see two things from the Laravel AI SDK: stores create and AI document. By "stores" it means a vector store with an external provider, in this case OpenAI, and the AI document is a specific class; it's not a storage document. It creates the document from local storage, and that document is added to the store from above, so at this point both of those are in the OpenAI database. There's nothing much to see in the DocumentController; we just redirect to show, and then we have a Livewire component to ask questions. So in that show Blade view we have the Livewire document chat, and this is where the question-and-answer chatbot actually happens. We have wire:submit ask question. It could actually be a Laravel controller, but for dynamic behavior I did wire:model here, and there is also a "thinking" state while it is in process. On top, if we have messages, they are shown like in a typical chatbot: for each message we have either the user or the chatbot reply. In that Livewire component, document chat, we have the Document QA agent.
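
The processing job described above can be sketched as follows. The Store and AiDocument calls are assumptions reconstructed from the narration ("stores create", "AI document"), not verified Laravel AI SDK API, so treat this as a shape, not a copy-paste implementation:

```php
<?php

use App\Models\Document;
use Illuminate\Contracts\Queue\ShouldQueue;

// Hypothetical job sketch; the SDK class and method names are assumed.
class ProcessDocument implements ShouldQueue
{
    public function __construct(public Document $document) {}

    public function handle(): void
    {
        // 1. Create a vector store on the external provider (OpenAI here).
        $store = Store::create('documents');

        // 2. Wrap the locally stored file and add it to the remote store;
        //    the provider computes the embeddings on its side.
        $file = AiDocument::fromPath(
            storage_path('app/'.$this->document->file_path)
        );
        $store->add($file);

        // 3. Persist the remote identifiers for later searches.
        $this->document->update([
            'store_id' => $store->id,
            'file_id'  => $file->id,
        ]);
    }
}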
So this is what actually happens when you ask a question: you prompt the agent with the store ID as a parameter. Basically we're calling the store from OpenAI and prompting our question, and then we get the response into the response text and add it to the array of messages. Now, what's that agent? Document QA was created with php artisan make:agent. It implements Agent, but it also has tools, and we'll get to that in a minute. Also, you're probably familiar by now with these two attributes: we can choose the provider, and in this case I use OpenAI's smartest model. Then we have instructions for what it should do: basically, "you're a document assistant, you're answering questions", and stuff like that. Then we have tools. So you may add tools to your agents. Some of the tools are from the providers; for example, file search is a tool that exists inside the Laravel AI SDK to, well, search for files within those stores. You can think about stores kind of like a file system with files. And if you're using the AI SDK like this, you don't really even care how those embeddings work or how it works under the hood on the OpenAI side. You just call the agents through the Laravel AI SDK wrapper, which is really convenient to read even if you don't have any idea how it works on the OpenAI side. But the problem with this approach, or the reality of this approach, is the time and the price. So here I am on the OpenAI dashboard, and I've made a few experiments before this video. Today, on February 9th, it's already 55 cents for a few documents and questions.
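
The agent class described above might be shaped like this. The attribute and tool names mirror the narration ("provider", "smartest model", "file search") but are illustrative, not the verified Laravel AI SDK API:

```php
<?php

// Hypothetical agent sketch; attribute, interface, and tool names
// are assumptions based on the walkthrough.
#[Provider('openai')]
#[Model('gpt-5.2-pro')]
class DocumentQa implements Agent
{
    public function instructions(): string
    {
        return 'You are a document assistant. Answer the user\'s questions '
             . 'using only the uploaded documentation.';
    }

    public function tools(): array
    {
        // FileSearch searches files inside the vector store
        // identified by the store ID passed at prompt time.
        return [new FileSearch()];
    }
}
```

The Livewire component would then prompt this agent with the document's stored store ID and append the returned text to its messages array.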
And if we go to spend categories, somewhere at the bottom you will see, yep, this one: GPT-5.2 Pro, 45 cents for basically asking those questions. Also related to that is a call for embeddings, this one, but it's a very small thing, and I will explain embeddings in the next video, where I will talk about the PostgreSQL version of this same chatbot feature. But basically, for such an operation you should be prepared to spend quite a lot of money, especially if you're using the smartest model like I'm doing here. Of course, you need to experiment with various providers and models. But the thing is that it's not really cheap. And keep in mind the Boost docs are not that huge, to be honest. It's a regular Markdown file with, I don't know how many lines, like a thousand max. And if you have more documents or a bigger knowledge base of documents, it will get more expensive. I'm actually going to refresh this dashboard and see if it updates the numbers with our recent experiments. Sometimes it does that in minutes, but it depends. So, $1.29. Good. So, we have some new numbers. And here, as you can see, we have file searches used, but it doesn't show the price. And also, if we go to spend categories and scroll down, do we have embeddings? Yes. So, we do have more embeddings used. And probably GPT-5.2 Pro should be even higher. Yes, 67 cents. So you can imagine that even one question may cost you like 20 cents. And that was a simple question on a simple document. Imagine someone may abuse your system by uploading something really big and then asking a very big or even nonsense question. But generally, this is how you work with the Laravel AI SDK, with agents that work with file search, the vector store, and documents, basically. So by now you should get the idea of the functionality and possibilities.
And now, in the next video, I want to talk about the same thing, but with local PostgreSQL vector search, which will help us lower the cost somewhat, and we'll see how it works under the hood.
