Building AI Applications with the Laravel AI SDK

Laravel | 02:05:19 | Feb 18, 2026
Chapters: 6
Taylor Otwell joins us to discuss the Laravel AI SDK, its goals, and practical questions about how it fits into Laravel, including how it can be built with AI and used in real apps.

The Laravel AI SDK unlocks a first‑party, unified way to build AI features inside Laravel, with agents, tools, streaming, and multi-provider support.

Summary

Taylor Otwell unveils the Laravel AI SDK as a cohesive toolkit that sits atop Laravel’s philosophy of batteries‑included design. He explains that the SDK provides a unified interface to multiple AI providers (OpenAI, Gemini, Anthropic, and more) and integrates with Laravel concepts like queues, file storage, and broadcasting. The talk compares the SDK to an Eloquent-style API for AI: you can create reusable agents, attach system prompts, and reuse the same patterns across text, image, audio, and embeddings. Highlights include built‑in tools like web search, file search, and streaming with server‑sent events, plus structured output via JSON schemas. Taylor also covers practical use cases (issue summaries in Nightwatch, embeddings and vector search, image/audio generation, and knowledge assistants), and demonstrates how to scaffold a small project with Claude Code to show how the pieces fit together. The conversation also touches testing AI interactions, failover between providers, and upcoming features like agent chats, human tool approval, and sub‑agents. Overall, the SDK is pitched as a pragmatic, Laravel‑centric way to experiment with AI while keeping code testable, maintainable, and aligned with Laravel’s conventions.

Key Takeaways

  • A single Laravel package (`composer require laravel/ai`) provides a complete toolkit for AI work, covering prompts, context, tools, streaming, and attachments.
  • Agents in the SDK encapsulate prompts, system instructions, and tooling into reusable, easily testable classes that can be instantiated in controllers or Artisan commands.
  • Multi‑provider support with automatic failover lets the SDK transparently switch providers (e.g., OpenAI to Anthropic) on rate limits or errors.
  • Structured output via JSON schemas enables reliable parsing of AI results, reducing ambiguity compared to free‑form text.
  • Built‑in tools (web search, web fetch, file search) augment LLMs with real‑world capabilities like data retrieval and document analysis.
  • Streaming (SSE) and async queuing let you deliver AI responses progressively or run heavy prompts in the background, improving UX and scalability.
  • Attachments and embeddings enable practical use cases such as document analysis, vector storage, and semantic search, all within Laravel’s data stores and file system abstractions.
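The multi-provider failover behavior described above can be sketched without the SDK at all. The class and function names below are stand-ins, not the Laravel AI SDK's real API; they only illustrate the try-next-provider pattern the takeaways mention:

```php
<?php
// Illustrative sketch only: these names are hypothetical stand-ins, not
// the Laravel AI SDK's real API. It models the failover behavior
// described above: try providers in order, moving on when one fails.

class RateLimitedException extends RuntimeException {}

interface Provider {
    public function generateText(string $prompt): string;
}

class FlakyProvider implements Provider {
    public function __construct(private string $name, private bool $up) {}

    public function generateText(string $prompt): string {
        if (!$this->up) {
            throw new RateLimitedException("{$this->name} is rate limited");
        }
        return "[{$this->name}] response to: {$prompt}";
    }
}

// Try each provider in order; rethrow only if every one fails.
function generateWithFailover(array $providers, string $prompt): string {
    $last = null;
    foreach ($providers as $provider) {
        try {
            return $provider->generateText($prompt);
        } catch (RateLimitedException $e) {
            $last = $e;
        }
    }
    throw $last ?? new RuntimeException('No providers configured');
}

$result = generateWithFailover([
    new FlakyProvider('openai', up: false),
    new FlakyProvider('anthropic', up: true),
], 'Summarize this issue');

echo $result, "\n"; // falls through to the second provider
```

In the real SDK this fallback chain is configured rather than hand-coded, but the control flow it automates is essentially the loop above.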

Who Is This For?

Laravel developers who want to add AI features to their apps without leaving Laravel’s ecosystem, plus engineers exploring multi‑provider AI and embeddings. It’s ideal for teams building chat assistants, AI‑augmented search, or knowledge bases inside Laravel.

Notable Quotes

"This is a streamlined unified interface for working with different AI providers like OpenAI, Gemini, Anthropic."
Taylor explains the core promise of the AI SDK: a single API surface across providers.
"The agent class encapsulates the system instructions, the message context, the tools, and the schema of what you're doing."
Describes how agents organize AI tasks in a reusable, testable way.
"Prompts are the basic way to ask the LLM to generate text in response to text you give it."
Definition of prompts within the SDK’s framework.
"Structured output lets you respond in a defined JSON structure so you know what data to parse."
Explains the advantage of JSON schemas for AI results.
"Tools let you supercharge LLMs with things they couldn't do otherwise, like cryptographically secure randomness or web searches."
Highlights the power of tools in agent interactions.
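The structured-output quote boils down to: ask the model for a fixed JSON shape, then parse and validate that shape instead of scraping free-form text. A minimal sketch of the parsing side (the "model reply" is hard-coded here for illustration):

```php
<?php
// Sketch of why structured output helps: if the model is asked to reply
// in a fixed JSON shape, parsing becomes a schema check instead of
// guesswork. The model reply below is hard-coded for illustration.

$modelReply = '{"sentiment": "positive", "score": 0.92, "topics": ["pricing", "support"]}';

// JSON_THROW_ON_ERROR turns malformed model output into an exception
// instead of a silent null.
$data = json_decode($modelReply, true, flags: JSON_THROW_ON_ERROR);

// Validate the shape we asked the model for before trusting it.
foreach (['sentiment', 'score', 'topics'] as $key) {
    if (!array_key_exists($key, $data)) {
        throw new UnexpectedValueException("Missing key: {$key}");
    }
}

echo "{$data['sentiment']} ({$data['score']})\n";
```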

Questions This Video Answers

  • How does the Laravel AI SDK unify multiple AI providers in one API surface?
  • What are agents in the Laravel AI SDK, and how do I reuse them across controllers?
  • Can I use vector search and embeddings with the Laravel AI SDK for semantic search?
  • What is the role of Boost in the Laravel AI SDK, and how do I enable it?
  • How does the AI SDK handle failover between providers like OpenAI and Anthropic?

Full Transcript

And we are live. Hello everyone. Good to see you all. Uh we have a very special guest today. We have the Taylor Otwell. Uh Taylor, how are you doing this morning? I'm doing good. I'm rocking and rolling. How are you? I'm doing pretty good. It was a it was a good weekend for me. Hopefully everyone in the chat uh I know we have already had quite a bit of people um in the chat already. Uh so a lot of people waiting. So if you are here and you are uh ready to see what the AI SDK has in store for us and also just to hear Taylor's thoughts on the AI SDK, why he built it. I'm excited for that. Personally, I haven't gotten a chance to talk to Taylor yet about the AI SDK specifically. But if you're here, let us know where you're watching from. And um I guess Taylor, what's a good what's a good question? What's something that's been on your mind like outside of work? Like maybe TV show, what something like that? Oh man, I don't even know. Everything's been a blur since I did [clears throat] the uh AICK and then I was in um India in Dubai for Laracon India. So like I feel like Yeah. I feel like this is like maybe the first week I feel like I'm kind of getting back in the groove. Um there you go. Yeah. So I don't know. I don't even know what's going on in the world. [laughter] I guess what was your whenever you travel somewhere, what is like the the food or like thing that you try to make sure you you do? I try to just like have especially if I'm going somewhere like India, um try to have like whatever food that the people there really like. So, okay, for example, like the first time I went to India, I was the at the hotel and like the room service guy, I got back kind of late to the hotel and he was like, you know, do you want any food? Like what do you It was like almost midnight and he was like I was like just bring me like whatever you would eat at this time. Like whatever your favorite thing to have would be just like bring that and I'll just try it. 
Uh so I just try to do stuff like that. Um just so I get a feel for like what people are like, you know, what they're really doing in these places. I love it. I guess then for everyone watching, I already see a bunch of uh all all over the place. So, we got um Evan from Switzerland, Bakersfield, California, glad to see you here. Albania, Kuwait, UK. I love just like the uh how how global everyone is. India, um good day from Pakistan, I guess. Everyone, what time is it for you right now? I always love to see that. And then also kind of similar to Taylor, what is uh what is the thing that you always tell people to try whenever they are visiting your region? So food that you always tell people to try? Oh, that's a great question. Actually, I think like in Arkansas and maybe other states in the south, a lot of people like to try the barbecue, you know. And when I say barbecue, I don't mean um like, you know, like Yeah. Yeah. like barbecue with like barbecue sauce, like pulled pork, brisket with barbecue sauce, stuff like that. Around here it's a lot of like pulled pork, but um yeah, in Texas it's more like the brisket, you know. Yeah, that makes sense. I I have not been to Arkansas. In in Arizona, I guess I would tell people to try any hole-in-the-wall Mexican restaurant. Like that would be my go to try unless We have a lot of good Mexican places here, too. It's It's so good. Yeah, it's it's great. Unless you're like in Scottsdale, then there's some like good burger places that I would would recommend. Uh, good old Greg sausage roll. I have not had one, unfortunately. I've heard good things. Uh, 900 p.m. Dubai time. Well, thanks so much for staying up staying up late. Uh, we got 14 o'clock here in Brazil. Uh, 10:03 p.m. in Pakistan. I think that might be Oh, no, never mind. 10:03 a.m. right now in PH. All awesome. I love it. Macedonia 163. Glad or 1803. Glad to have you all here. Any Ghanian developers in the room? Says Ovac for you. Um Joshua, what is going on, gents? 
I'm excited for this. Well, we're excited to have you here, Joshua. Glad to have you here. Um and Liam says, "I'll buy you a sausage roll next time you're in the UK, Josh." Okay, sounds good to me. I am always down for some food. Uh I did have Taco Bell when I was in the UK, so and it was actually pretty good, man. I've got a Taco Bell. It's like right outside my neighborhood. I mean, I'm talking it's like across across the street basically. I didn't know I didn't know this about you. If you if you love Taco Bell, then that's awesome. I love it. It's so quick. I could like run over there and get like lunch and be back home legit like 10 minutes. Even if I eat at the restaurant, you know? [laughter] What's your What's your go-to current Taco Bell menu item right now? Oh man. Like if I just want to be like really efficient and in-n-out, I'll do just like two bean burritos and a drink and Okay. crush them and then dip out of there. Um, it's like 5 minutes. What do you get? I I get um like either the cheesy bean and rice burrito. Like that's kind of like one of my go-to or I always love like anything chicken from them. So like the there's the the cheesy chicken chipotle flatbread thing. I like that. Um but yeah, like cheesy bean rice burritos I've always had since I was like in high school. So those are kind of like my, you know, uh go-to for nostalgia sake. Yeah, for sure. And then a Baja Blast. I used to do Full Sugar. Now I'm Baja Blast Zero. My daughter does the Baja Blast Zero, too. Yeah, I I I tried it full sugar again like for the first time in like I don't know 10 years and I was like, "Wow, this is this is a lot. I can feel it." Awesome. Well, glad to have everyone here. It's It's crazy to see how many people are here. Mostly I to see Taylor and also to hear about this AI SDK. Um, so if you do have questions in the chat, be sure to drop them. 
I'll try my best to be watching, but mostly the goal of this live stream is one to get Taylor's thoughts on the AI SDK, what it is, how you would build with it within Laravel, maybe like why it was built, and then two hopefully like the towards the latter half of this stream being able to to build something. There's there's some projects that I've I've built um that we can kind of show off the AI SDK and just like what's capable of, but I think like the the perfect part is being able to say, "Okay, what actually is capable when you're building things?" Um and then the why. So again, if you do have questions, again, everyone's kind of shouting out where they're from. I love it. Keep doing that. Um it's awesome to see people from all over the world being able to come together and say, "Hey, this is some cool things that we're building with this cool software that we love." Um, so Taylor powered by Taco Bell. Uh, what was the reason like why did I I I think like the first thing I saw you start talking about the AI SDK? It was probably I don't know probably in October, November of last year. I could be wrong on the timeline, but why did that come into [snorts] fruition? That was one of your projects I think that you kind of picked up and you're like, I'm just gonna I'm gonna dedicate my life to this. Yeah, I mean obviously everyone was just kind of tinkering around with AI stuff as they have been for the last few years and um you know this is like such a important part of the dev workflow at this point. It felt like we needed some sort of first-party opinion on interacting with AI providers. I mean, just like we have like opinions on sending email or like queuing jobs and just like this [clears throat] is becoming such a common part of like what people do when they build apps. Um, that made a lot of sense to bring something first party and like have like an offering here.
I usually try to think of like if I'm going to build something into the framework, I kind of think to myself like is this applicable to like 70 80% of devs, you know, in in the Laravel world. like if I'm debating whether something needs to be in the framework. Yeah. Um and then like for a package like the AI SDK is maybe a little bit more lenient like is this applicable to maybe half of Laravel devs like in the world like probably so right with AI uh and probably increasing dramatically over time. So made sense to have something and yeah I did build it myself and it was the first package I've written at Laravel in a couple years. The last stuff I wrote was uh Livewire Volt and Laravel Folio. And then as we were like building Cloud, building Nightwatch, I didn't really write a lot of packages at the time. And now I'm kind of like back in the driver's seat with the AI SDK. I just kind of like one [clears throat] most other people here were busy with other stuff, right? So it's [laughter] like so someone has to build it, but everyone's busy. So I was like, well, I I it'd be good for me to like get back in the code and like building something uh fresh and it sounds fun. And so like I'll tackle it and plus I had some ideas for what I wanted to do with it. Um there was already some cool stuff being done in the community around AI stuff with like TJ Miller's work on Prism which we kind of built upon in the AI SDK. I kind of see like that as like almost like a query builder and Eloquent type of relationship. Like, you know, I think the AI SDK I tried to put like a little bit more opinions on top than people might typically build in a in like kind of a generic AI SDK. Yeah. Um Yeah, that's that's awesome. I guess like for for those in in the chat, some some quick questions um as well. So uh um uh will be paid or free. It is already out so and it is it is free. It's a free package free package to use for the AI SDK. I guess for people who are just joining us as well.
Thank you so much. We're going to kind of take the first little bit talking a little bit to Taylor about his thoughts about why we built it, what's it for? Um and then the second half of this stream we'll be able to jump into some code and see okay what what does this look like practically building um in today's day and age. Uh but uh for those joining, what is your favorite outside of the AI SDK? Because that's a little bit of a curve or or home run type of question. Uh what is your favorite Laravel package? Either first party or third party? Love to hear it in the in the chat. Um Taylor, what's your favorite package that you've built outside of the AI SDK? Oh man, if it's like something that's in Laravel, it'd probably be Eloquent. Um it's also the hardest thing I've pretty much ever built. Um, if it's like a package, I'm a pretty big fan of like, um, I think Reverb is cool. Um, I think Echo is cool. Like kind of the real time stuff in Laravel. What What else do we even have? Um, we let me let me look at the docs. We've accumulated quite a few packages. There's quite there's quite a bit out there. I mean, I think Octane is pretty cool to some extent. Like the bulk of Octane is sort of handled by other people, right? like the FrankenPHP team and Swoole and stuff like that and but uh just the ability to like boot the Laravel app and kind of keep feeding your requests really quickly. Um yeah, Pete Bishop says my favorite package is Pennant, cannot live without Fortify. It's crazy how many packages there there are that the Laravel ecosystem has especially when on our Laravel YouTube channel during the advent season of last year. So, for 24 days and then a special guest message from Taylor on the 25th day, uh we had one video for every package and there were some packages that we we didn't make a video for. So, enough to have at least 24 videos is pretty crazy and some of those could be split up as well. Uh yeah, I I think my favorite personal favorite is Cashier.
And I didn't really get into Cashier too much because when I started my Laravel journey, I started with um Spark and so I didn't really have to touch Cashier too much. It was kind of already done for me. Uh but over the last year using Cashier I was like, "Oh, this is actually incredibly easy, man." Yeah, we're working on some other some new Cashier stuff and some new Fortify stuff actually. Um that hopefully we can show pretty soon. But like man, Cashier is like that is a I don't think like back when um I first wrote Cashier like integrating with Stripe and all that it is a lot of work to like catch all the webhooks, store all that. So yeah, I think a lot of people find that package pretty valuable. Yeah. And especially if you look outside of the Laravel ecosystem, most people, you know, have their own set way of okay, hey, here's how to not make Stripe incredibly difficult to work with and everything like that. So, you know, it's, you know, it's valuable. Uh, we got some people saying, uh, Reverb, uh, queues and jobs. I, we'll call that as part of a package. Working with queues and jobs is incredibly easy. Yes, love for Octane. Uh, Prism is insane. Uh, I agree. We got Inertia. Love Laravel Nova. Uh, favorite. Love. Uh, Octane. Um, Sail, Octane, Reverb, Fortify. I love it. I love it. Reverb is also cool. I I do love I do love Reverb. makes everything makes everything easy. Uh so Taylor, when it comes to the AI SDK, what is how is this different than maybe like working with AI in Laravel? So how is this different than like a Boost for example or even MCP? Where does this uh SDK kind of fall into? Yeah, so like right now we have kind of three AI related packages at Laravel. Um um I I think MCP was actually the first one. So MCP is uh MCP stands for model context protocol. MCP right model context protocol I think. Um and it is basically a way to like I think of it as a way to like expose a standardized API for LLMs and AI things to talk to.
So like I have an app on the internet and I want to allow like ChatGPT to do things or some other tool to do things with that then I can use MCP to expose functionality and OpenAI started coming out with this new kind of like I don't even know if it's out of beta yet but eventually like sort of an app directory of things you can like interact with from ChatGPT and things like that but that's all kind of powered by MCP. Um so then there's uh Boost which is actually a little local MCP server that runs on your machine um that Claude Code and Cursor and other like agentic dev tools like open code can plug into and what it does is it just like makes available some tools to those agents so that they can query the Laravel docs, they can run tinker commands, they can inspect your database schema and so the idea is it improves like the quality of the Laravel code that things like Claude Code and Cursor can write because they have access to the latest docs. So like if I release a feature, you know, tomorrow in Laravel because we release every Tuesday. LLMs have like no idea that that feature exists, right? Because they've been trained on old data, but Boost actually allows us to feed them like the latest info. So um they always have access to the most up-to-date docs. Um, if you're writing Laravel and you're using Claude Code or Cursor, which at this point you probably should be using some sort of like AI assistance, um, I think installing Boost is just like a no-brainer because it's like a free package that is probably only going to make the quality of your code better. Um, so I would definitely install that. So that's the second AI package we released. And now the um AI SDK is the third which is sort of like a streamlined unified interface for working with different AI providers like OpenAI, Gemini, Anthropic. So I want to generate some text or I want to stream some text uh or I want to generate an image or audio.
I can do that with the AI SDK um through a unified interface and kind of try different models, try different providers without going and like researching that what what are the you know what's the HTTP endpoint for generating image on Gemini and now what is it on OpenAI? Uh it's just a really streamlined way to work with AI. Yeah. Yeah. I love it. Um yeah, someone someone asked the question uh for someone that does not know what Laravel AI SDK is, how would you describe it in a few words? I think Taylor just did that. Uh, but I think if I could put it into another words and maybe Taylor you can correct me. It's like an eloquent way of interacting with all these things that now in this world of AI you would expect you have to interact with at some point in time. Yeah, I think so too. And then which I'm sure we'll get into a little bit later. It sort of is um integrated with a lot of Laravel's the rest of its stack. So like if I generate an image, I have a method to like store that image using the file system stuff in Laravel. If I want to like queue the generation of an image, it's actually integrated with the queue system in Laravel. So, um it's an AI SDK that really leans into like the full stackness of Laravel. I love it. I love it. Uh when you think of um and some people are saying uh your your video quality isn't doing too good. I don't know if there's a way for you to bump it up, but if that means I don't know you start stuttering, I don't know. Um, but we for everyone who's saying that uh Oh, I'm on standard definition. So, I can go up to Do I go up to full high definition? Is that recommended? Full HD maybe. I How do I do I look good now? Do I look better? Let's see. Um, I think I think it look crisper. Should look should look video enhance says someone. Uh, exactly. Taylor will get Yeah. Um, sweet.
But yeah, uh let's uh let's take a turn into okay, all of this AI stuff is fun to work with, but I'm curious of maybe what you're seeing within um you know, current applications, enterprise applications, etc. Or just why would people start adding AI into their apps? What are the things that people are generally going to be using it for? Um what are some practical applications that you can now build with the AI SDK? Yeah, I mean I I think some of like the most common things that you see people build with it are sort of like almost text summarization type features. So like on Nightwatch for us which we've already launched was like you have an issue in Nightwatch right like an exception has occurred or you have like some slow query you can actually click a button in Nightwatch to let the AI write like a summary of that. So, and it will tell you like, okay, here's kind of what happened. Here's some suggested solutions. Here's where you might look next or whatever. I think that sort of like category of AI usage is super common. Like, I have a I have a bunch of text that I need to basically like classify or summarize. It's super easy to do that with AI. Um, and pretty fast and affordable. Um, you know, I think another like chunk of AI is around like audio and transcription and sort of real time applications. That's another like popular um, use case for AI. I mean this is kind of I don't know if this is like people consider this like full-on AI in the normal sense of the word phrase but I mean I think it kind of is related is around like embeddings and vector search which is part of the AI SDK where we can generate embeddings from a given like string of text vector embeddings store them in a database and then do semantic search querying and then AI powered ranking of those results for like the most semantically similar [snorts] results. So um you know I think those are like some of the more common use cases.
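The embeddings use case Taylor describes reduces to ranking vectors by cosine similarity. Here is a toy sketch with hand-made three-dimensional vectors; real embeddings would come from a provider and have hundreds or thousands of dimensions, but the ranking math is the same:

```php
<?php
// Toy semantic search: real embeddings come from an AI provider, but
// ranking is just cosine similarity, sketched here with tiny vectors.

function cosine(array $a, array $b): float {
    $dot = $na = $nb = 0.0;
    foreach ($a as $i => $v) {
        $dot += $v * $b[$i];
        $na  += $v * $v;
        $nb  += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($na) * sqrt($nb));
}

// Pretend embeddings for three stored documents.
$documents = [
    'refund policy'   => [0.9, 0.1, 0.0],
    'api rate limits' => [0.1, 0.9, 0.2],
    'billing cycle'   => [0.8, 0.2, 0.1],
];

// Pretend embedding of the query "how do refunds work?"
$query = [0.85, 0.15, 0.05];

$scores = array_map(fn (array $vec): float => cosine($query, $vec), $documents);
arsort($scores); // most semantically similar first

print_r(array_keys($scores));
```

In production these vectors would live in a database with a vector index; the `arsort` over all documents is only workable for small collections.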
So I think that then there's like image generation which I think is a little bit more like you know I don't know if it's a little bit more on like the consumer side of like um the use cases for AI where you're generating sort of fun images and things like that but at Laravel we've been using it mainly I would say for like text summarization issue summarization things like that. Yeah, definitely. And and the neat part I think with the AI SDK kind of like I said is now you have this one central point. I know in the past within things you know there there are great packages like Prism that I I know you said that the SDK is is built on top of and expands in a lot of ways. Um, but it's great that you can kind of have this one central place to say, okay, when I'm working with images or when I'm working with audio, the infrastructure behind it, the the code behind is exactly the same. I'm curious your thoughts and I and when we dive into building something within the code, I I I've already used this within things like Claude Code and Boost and everything like that. It's it's crazy how much better it gets because it's just one package for the LLMs that you're working with to say, "Okay, hey, this audio is going to be generated the same way that I generated this this chat." Do you think that's intentional that you built like how what are your thoughts on that when it comes to this new world of coding in this in that sense in terms of like how much effort do I put into sort of like the elegance of the API? Either either that or is that elegance of the API matter now? Yeah, I know. It's something I thought about myself. Um, I I think it does sort of matter in the sense of like the LLMs still do well with things that are like easy to parse and understand and summarize and read about. I don't know if it's like, you know, obviously the LLM doesn't derive the same joy from it that like the human consumer might.
You know, I used to put like a lot of effort into these sort of like whimsical fun APIs because as humans writing the code, it's kind of fun to like use these tools that are sort of like charming in a way. Um, and maybe it still is, but yeah, I I mean it's still important, I think, to have good API design just for the discoverability of that LLM to be able to figure out what the heck's going on, you know? Yeah. Um, and I think like we we've tweeted about this a little bit, but like frameworks that sort of lean into conventions and structure I think do well with LLMs. I think we've seen that with like Laravel and things like Rails where like the LLMs are doing a pretty good job because there's lots of training data. There's these very conventional structure to the projects where it's like there's a models directory, there's a controllers directory. So they kind of like it's very discoverable, you know. Yeah. Yeah. I I I completely agree and it's it's it's neat to see the you know I I feel like the joy from those clean APIs comes now of of seeing it uh when you're when you're looking at your maybe your code being generated or you're just looking back and it's very easy to know okay this is exactly what's happening rather than having to jump through 50 different files to say okay what actually is happening it feels like you're even if you're not handwriting all that code it still feels like you understand what's happening in that in that sense and I think the LLMs do a great job at uh also saying um you know we're using this one package so we're going to be doing it the exact same way another another iteration of the application. Yeah, totally. Uh for those just joining, thank you so much for for being here. Uh I am joined my name is Josh. I'm joined by Taylor. probably know Taylor uh just from everything the creator of Laravel and being and now the creator of Laravel AI SDK which is what we're kind of diving into here on the stream.
So if you're just joining us let us know where you're joining from uh what time it is there for you and then what's your favorite Laravel package. Uh but if you haven't gotten the chance feel free to like and subscribe this video if you're on our YouTube platform. There's a whole bunch of awesome videos that we try to put out as consistently as possible. But we're going to continue to take a look at the Laravel AI SDK and then hopefully be able to have a chance to to build with it as well. Um, Taylor, what is your favorite part before we jump into like each aspect of the AI SDK? Uh, just from an overall standpoint, what is your favorite part of the AI SDK? One that you're like when you built it, you're like, "Yeah, this is really cool. I like I like this." Yeah. Yeah. Yeah. It's a good question. Um I really like the kind of like agent class concept which was a big part of like um my initial sort of like thinking behind the AI SDK and you know for those who haven't seen it basically when you use the AI SDK you make agent classes using like a make agent artisan command and it's basically a class that sort of encapsulates the system instructions the message context the tools maybe the schema of um sort of like what you're doing um with the AI provider. So you might have for example like I don't know I think in in the docs we use a lot this sort of like sales coach or like lead extractor agent that's like reading transcripts of sales calls and giving advice and things but I I think it's like this very Laravelesque way of working with AI where everything feels sort of like nice and tucked in this little class that I can reuse throughout my application. So maybe I use it in an artisan command in one place and I use it in an HTTP controller in another place and it's just sort of all like self-contained. also very easy to like test um where you know it had you can fake agents, you can fake other AI interactions.
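The agent-class idea Taylor describes can be approximated outside the framework. This is not the SDK's actual API (its agents are generated via an Artisan command and carry tools, context, and schemas); it is a minimal stand-in showing why bundling system instructions with a swappable client makes an agent reusable from a controller or command, and fakeable in tests:

```php
<?php
// Hypothetical stand-in for the "agent class" concept -- not the
// Laravel AI SDK's real API. It bundles system instructions with a
// swappable chat client so the same agent is reusable and testable.

interface ChatClient {
    public function send(string $system, string $prompt): string;
}

// A fake client makes the agent trivially testable: no network calls,
// no non-deterministic model output.
class FakeChatClient implements ChatClient {
    public function send(string $system, string $prompt): string {
        return "FAKE({$system}): {$prompt}";
    }
}

class SalesCoachAgent {
    private const INSTRUCTIONS =
        'You are a sales coach. Review transcripts and give concise advice.';

    public function __construct(private ChatClient $client) {}

    public function prompt(string $text): string {
        // The system prompt lives in exactly one place; every caller
        // (controller, command, job) gets the same instructions.
        return $this->client->send(self::INSTRUCTIONS, $text);
    }
}

$agent = new SalesCoachAgent(new FakeChatClient());
echo $agent->prompt('Customer objected on price.'), "\n";
```

Swapping `FakeChatClient` for a real provider-backed client is the whole testing story in miniature: the agent's behavior is asserted against deterministic output.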
I think the testing story was something I tried to like make the sense. I think testing with AI is something that's often like overlooked one because testing AI can be hard. It's not deterministic. Um you know there's all kind of quirky things that can happen. So I tried to make that pretty streamlined. Yeah. No, I love that. And I I think that's an interesting standpoint that I' I've personally loved as I've worked with it just in like the past couple of weeks before launch and then also even just after launch. I've I've built probably a couple two or three different applications just familiarizing myself. And one of the things that I really love about the agent SDK specifically is that is that aspect where it feels like I don't know similar to a job where you would or an action I guess would be a better uh use case but where where now I'm uh abstracting that from every single job or controller that I'm using from that specific logic and it lives in one place. I can change the as we all know for if you've built anything within AI, you have a prompt and sometimes you might want to change that prompt. It sucks to have to change it in 10 different places that you're using that exact text prompt. So being able to have it in one place is really awesome. Awesome. I wanted to jump into the the landing page real quick and just kind of talk through some of the some of the aspects before uh getting into um the the rest of the actual code. I'm going to plop this open right here. Uh perfect. Um so, oh lost you for a second. There we go. Okay. So, for those who don't know that the Laravel AI SDK is out, it is out. It is laravel.com/ai. So laravel.com/ai. Uh and the it's the AI toolkit with batteries included. It's as simple as running this composer require command. Composer require laravel/ai.
I guess one question for you Taylor is what was the you kind of mentioned it but what was the thought behind putting this in in one single package versus like hey here's a package for image here's a package for voice. what what's your kind of thoughts or idea behind that? Um, you know, I kind of just wanted people to have a complete toolkit for working with AI and not have to pull in a bunch of different stuff and kind of like overthink it. Um, so it's for me it's just kind of nice to be like composer require Laravel AI and boom, I'd sort of like have everything I need to do AI SDK related things, whether that's audio or images or whatever. um it doesn't really like you know significantly increase the size of the package to add these things. All the bulk of the work is handled by people like Gemini and Open AI you know to actually uh do this stuff. So it's just kind of nice to have this like one self-contained package to have a kind of a complete toolkit. I love it. Um so yeah just like Taylor said one SDK for every capability. Uh I wanted to walk through some of like the elements that we have here on the bottom of the page. Hey you that's us today. Uh but there's these kind of like uh you know there's seven different uh or eight different aspects and this is only again a small part of it but there's prompts context tools provider tools structured output streaming async and attachments. I kind of wanted to get your thoughts and walk through of like when you might use each one because I feel like especially in today's age, one of my thoughts is that as you are learning how to build with with Laravel, of course, maybe your LLM as you're building this within Laravel or adding AI SDK so that you can build and so for those who are installing the AI SDK, if you run and you are using AI agents, which like Taylor said, it's it's a good thing to be using them. 
I think now if you run php artisan boost:install after you install the Laravel AI SDK, it will ask whether you want to install the AI SDK skill, which helps the LLM learn how to use this better. But Taylor, it's one of those things where the more we understand what's possible behind the scenes, the better we can prompt our LLMs to do it, and the better we know what's actually happening. Maybe in a couple of sentences each, we can go through these, and I just want to hear your thoughts: when you would use prompts, what context is and when to use it, tools, and so on. Yeah. Prompts are the basic way to ask the LLM to generate some text in response to text you give it. If you've ever used ChatGPT, asked it a question, and gotten text back, that's prompting the LLM. It's very similar in the AI SDK; this is how you do that in code. We send the LLM some text, a question, maybe a command. In this case we're saying analyze this transcript, passing along this file, and it gives us back some text. It's a super common use case for AI, right? Yeah, similar to the summarization example you mentioned before. Yeah, exactly. And it doesn't have to be chat, so to speak. We can just give it a task as a prompt with no intention of ever revisiting the conversation. It's not an ongoing conversation; we're prompting the LLM with a specific task, getting a response, and storing it or sending it to the user or whatever.
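As a rough sketch of the one-off prompt being described here: the exact facade, argument, and response shape below are assumptions based on the conversation, not verified SDK API.

```php
use Laravel\Ai\Facades\Ai; // assumed facade name

// A one-off task prompt: no conversation state, just send text
// and get text back, as described in the talk.
$response = Ai::prompt('Analyze this transcript and summarize the key decisions.');

// Assumed response shape: the generated text on a property.
echo $response->text;
```

The point being made is that this is the same mental model as a ChatGPT question, just expressed in code.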
And to your question about the context tab, well, actually, this is good, go back for a second: this sales coach agent is a good example of how you can encapsulate the system instructions within a class and then reuse it throughout your application. We can just new up the SalesCoachAgent and give it a prompt, and we can do that in a variety of places throughout the app without re-specifying those system instructions. Yeah, that's super convenient. One question I had, and I think this is a good place to talk about it: you mentioned it's one central place to interact with everything. We have all these tools available to us up here, but we're not saying send a prompt to OpenAI, or even to a specific model, or to Gemini. How does all that work behind the scenes? Do you have different provider sections? What happens? Yeah. A lot of that is driven by your config file, and that's where you specify it. I don't even know if that's on this page, but we definitely talk about multi-provider and failover and such. Similar to a regular Laravel app, where, if you've used Laravel, you probably know you have a config/database.php file or a config/filesystems.php file, when you install the AI SDK you get a config/ai.php file, and that's where you configure, okay, I want my default AI provider to be OpenAI, or I want it to be Anthropic. You can actually customize that based on the types of things you're doing. You can have a default provider for text, like how we're prompting here, but a different default provider for images. Maybe for images I want to use Gemini and Nano Banana, but for text I want to use Anthropic.
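Pulling those two ideas together, an agent class like the SalesCoachAgent being shown might look roughly like this. The base class, method names, and namespace are assumptions drawn from the discussion, not documented API; only the concept (system instructions encapsulated in a reusable class) comes from the talk.

```php
use Laravel\Ai\Agent; // assumed base class

class SalesCoachAgent extends Agent
{
    // System instructions live in one place instead of being
    // re-specified at every call site.
    public function instructions(): string
    {
        return 'You are an experienced sales coach. Review the rep\'s '
             . 'conversation and give concrete, actionable feedback.';
    }
}

// Anywhere in the app: a controller, a queued job, an Artisan command.
$feedback = (new SalesCoachAgent)->prompt('Here is my latest sales call transcript...');
```

Because no provider or model is named here, the agent would fall back to whatever defaults are set in config/ai.php, as described next.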
And if you specify that in your config file, you don't have to specify it here when you actually prompt; it just uses whatever defaults you have. Of course we can override that in the code, but it's nice. You don't have to remember, okay, what's the code name for Claude Haiku 4, or whatever it is. Five. Yeah. Yeah, exactly. You can just use the features. Yeah, and it seems like that's the Eloquent way in a lot of respects, like building database queries where I could be using SQLite in my local instance, and when I go to production none of my code changes, just my config. Yeah, it's basically the same concept, almost an ORM-type concept for AI providers. I love it. What about context? What is that? Yeah. Context is how you'd actually build something more like a chat. What's interesting about dealing with LLMs is that when you ask an LLM for some text, ChatGPT and Claude give the illusion that there's ongoing state, as if the LLM is aware of all the previous messages. But that's not actually the case by default. You have to feed it the whole string of messages that came before your prompt so that it knows, and you actually have to do that every time you call the LLM. It has no concept of historical interactions with you. So this is where you do that. If you're building more of a chat-like interface where the user is having an ongoing conversation with an agent, you need to store those messages in a database, and this is where you pull them back out and give them to the LLM.
So in this example we're pulling all the recent messages for a given user, maybe the most recent 50 in this case, and passing them in chronological order to the LLM. That's an example of how to do it manually, but the tab you're on now shows this is actually built into the AI SDK. The previous tab showed the manual approach, but if you use this RemembersConversations trait like you have here, the AI SDK will just do it for you. And the AI SDK is in beta; I'd say this is one of the areas I'm working on most between now and the stable release, adding more features around this. But the bare-bones skeleton is here: if I use this RemembersConversations trait, it's going to automatically store conversations in a conversations table, put the messages in an agent messages table, and automatically pull them back out when you continue the conversation. It's super nice because it's a lot of boilerplate code to store those messages, get them back out, make sure they're sorted in the right order, and pass them to the LLM. To be able to use this trait and get conversation state in one line of code is great. Yeah, not having to do any of that and store it yourself; it's just abstracted away for you. Yeah. I saw a question in the chat, though: what was the most challenging part of building the Laravel AI SDK? Is it this, or a different part? No, I wouldn't say there was any one area that was just uber hard. It was a lot of sitting and thinking about the right APIs, how I wanted to design it,
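The trait name comes from the talk, but the namespace and the agent shape below are assumptions; this is only a sketch of the one-line conversation state being described.

```php
use Laravel\Ai\Agent;                            // assumed base class
use Laravel\Ai\Concerns\RemembersConversations;  // trait name from the talk; namespace assumed

class SupportAgent extends Agent
{
    // Instead of manually fetching the last ~50 messages, sorting them
    // chronologically, and passing them along on every call, the trait
    // stores messages in conversations / agent-messages tables and
    // replays them automatically when the conversation continues.
    use RemembersConversations;

    public function instructions(): string
    {
        return 'You are a friendly support assistant for our product.';
    }
}
```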
kind of what we were talking about earlier: I really like to design things that feel thoughtful, well-designed, easy to use, and intuitive, and the process of arriving there is usually a lot of sitting and staring at the screen, or writing pseudocode, asking what would feel the best. Sometimes I'll actually go into an empty file and just start typing code as if I'm using the AI SDK, even code that doesn't exist yet, to try to discover, oh, this feels nice, or this doesn't feel nice. So yeah, to get back to the answer, I don't think it was any one particular thing. It was figuring out the most delightful, considered APIs to give people. Yeah, I love that. And to the question you posed before, does it matter? I think that means it does matter, in a lot of senses, because it becomes this unified platform where you know what's available to you. Even if you might not be handwriting every single piece of the code, you know what's available, and it feels unified, which I think is the big thing. Jason, you mentioned, Jason Torres, okay, fine, I'll go install it. You had your Santa app that was hosted on Laravel Cloud; I'm curious, it'd be cool to see if you swap in the AI SDK. All right, Taylor, what are tools? I know we talked about this a little on previous streams, around MCP and the difference between tools and so on. What are tools in the context of the AI SDK? Yeah, tools are one of the coolest parts of building agents, I think, and they give you the most interesting possibilities for the kinds of things you can build, because they let you basically supercharge LLMs with things they couldn't do otherwise.
This example is a very basic one, and I'll share some cooler ones in a second, but maybe you want the LLM to be able to generate a cryptographically secure random number. I don't think that's actually built into an LLM; if I ask ChatGPT, hey, give me a number between 1 and 100, that's probably not a random number in the true mathematical sense of the word. But using the AI SDK, I can define a tool so that when I prompt the LLM, I give it a list of tools it has available. It's going to see, hey, I have this tool that can generate cryptographically secure random numbers, and it can invoke that tool, which then invokes this code on your end. This handle method right in front of us will actually be called, but the LLM decides to call it. It's pretty wild, actually. A lot of times, if you look at model evals when OpenAI or Anthropic comes out with new models, there's an eval for tool use, how good the model is at using tools, and LLMs these days are pretty good at this. So this lets us augment the LLM with cool stuff. A real-world use case for tools that a lot of us are probably familiar with: if you've ever used Claude Code or OpenCode or Cursor, you'll see it inspects files, reads files, writes files, runs bash commands. Those are actually tools that the builders of Claude Code and OpenCode have built. They have a run-bash-command tool that they've given to the LLM. And the first time I demonstrated the AI SDK out in San Francisco a few weeks ago, I actually wrote a tiny nano version of Claude Code, I think I called it nano code, where I gave it about seven tools: a read-file tool, a write-file tool, a bash tool, and it actually worked.
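The random-number tool being described might be sketched like this. The base class, description method, and registration shape are assumptions; the only parts confirmed by the talk are the handle method the LLM ends up invoking and the idea of handing the agent a list of tools.

```php
use Laravel\Ai\Tool; // assumed base class

class SecureRandomNumber extends Tool
{
    // The LLM reads this description to decide when the tool applies.
    public function description(): string
    {
        return 'Generates a cryptographically secure random integer within a range.';
    }

    // Invoked by the SDK when the LLM chooses to call the tool.
    public function handle(int $min, int $max): int
    {
        // random_int() is PHP's CSPRNG-backed generator, so this really is
        // secure randomness, unlike a number the model "makes up".
        return random_int($min, $max);
    }
}
```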
You could chat with it. It would update files. You could say, "Hey, make a new Eloquent model and update the routes to return all the data using the new model," and it actually works. So tools let you do all sorts of stuff, and as I think we'll get into in a second, we actually give you some tools built in. This is how you can start to integrate the LLM with your system. Yeah, I love that. And that's essentially what Boost is doing in a lot of ways too. You mentioned Claude Code, but Boost has a specific set of tools. Would you say then, Taylor, that tools are a great place to keep the things your app, or your LLM through your app, does over and over again? Yeah, I think so. Some of the tools we give you, or that people commonly expose, are things like search. So, go to provider tools. What do we have here? Yeah, this is web search, web fetch, and then file search. People use tools a lot for this kind of thing. If I throw the web search tool or the web fetch tool onto this agent, and these are two tools built into the AI SDK, you don't need to write them yourself, it allows the LLM to actually search the web for the latest data or fetch a web page from a given URL. And with file search, it can search vectorized PDFs or other data to find things it otherwise wouldn't have access to. So if we've uploaded, I don't know, all of our company policy PDFs to a file store, we can now search those and have a chat agent that can search over that data, data the LLM otherwise wouldn't have access to. I love it. A question along these lines: how should we organize our tools? When you're building an application with this, how many tools can or should we use?
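Attaching the built-in provider tools to an agent might look roughly like this; the tool class names and namespaces are assumptions, and the file-store name is purely hypothetical. Only the three capabilities (web search, web fetch, file search over vectorized files) come from the talk.

```php
use Laravel\Ai\Agent;             // assumed base class
use Laravel\Ai\Tools\WebSearch;   // assumed class names for the
use Laravel\Ai\Tools\WebFetch;    // built-in provider tools
use Laravel\Ai\Tools\FileSearch;

class PolicyAssistant extends Agent
{
    public function tools(): array
    {
        return [
            new WebSearch,   // let the LLM search the web for fresh data
            new WebFetch,    // let it fetch a page from a given URL
            // Search previously uploaded, vectorized PDFs.
            // 'company-policies' is a hypothetical store name.
            new FileSearch('company-policies'),
        ];
    }
}
```

Because tools() is an ordinary method, its return value can be computed at runtime, which is exactly the dynamic, per-user tool registration discussed below.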
And how should we organize our tools? The SDK gives you a way to organize them within this Tools namespace, but what are your thoughts? Yeah, it's been a pretty hot topic of discussion, the number of tools you should expose to an LLM. I'd say you shouldn't have 50, 60, 70 tools exposed to the LLM, because you get what's called context bloat. Like I was saying, you have to send all of the messages to the LLM to provide historical context, and you also have to send all of the tool definitions, what they do, on every message as well. So if you have too many tools, you can consume a lot of tokens and a lot of context. I think that really comes into play with some of these local coding agents, where you start to plug in different MCP providers, each with maybe a dozen tools, and with five of those you've got a hundred tools, and it can get pretty out of control pretty fast. If you're using the AI SDK to build something in your own application, you probably aren't running into a hundred tools; I think that's pretty rare. So, I don't know what the exact number is, but it is a thing you have to watch out for, let's put it that way. Yeah. I'm curious, and I actually don't know the answer to this, whether there's going to be a way, or already is a way, to conditionally load tools. That would be very interesting. I know there are probably people, and what's cool with the AI SDK is you can do that a little bit, in the sense that you have the agent class, so whatever you return from that tools method can be determined at runtime. Okay.
So maybe you pass a user into the constructor of the agent, and then you use that user to say, okay, this user has access to this list of tools, and this other user doesn't. You can do a bit of dynamic tool registration that way. I know people are starting to work on other things around tool discovery, so it's a pretty ongoing, evolving topic in the AI world. Yeah. That makes sense. A great use case for a per-user tools array would be paid users having access to specific tools that free users don't, for example. Exactly. Yeah. Awesome. For those just joining, thanks so much for being here. We have Taylor talking through the Laravel AI SDK. We'll get to actually building things and what that looks like practically in a little bit, but I want to give a good bare-bones overview for those watching on demand of what it is and how we think through the different pieces, because there are so many cool options you can build with this. Even one app would probably only touch two or three of these; it's really hard to find a single app that uses all of them. If you have questions, feel free to drop them in the comments. I'm checking as much as possible; I can't promise we'll get to all the questions, but we're glad to have all of you watching. Feel free to like and subscribe to help us show more content to you in the future as well. So, Taylor, what is structured output? We talked about getting text back from prompts when you say, "Hey, give me this." What's the difference between that and structured output? Yeah. When you prompt an agent by default, you just get free-form text back in the response.
Very much like you'd get from ChatGPT: it doesn't really have a defined structure or schema to it; you're just getting back a paragraph or two of text. What structured output lets you do is say, okay, when you respond to me, I want you to respond in this structure. If you scroll down a little, you can see in this case we're saying give me back a score, and that needs to be an integer, and it's a required field. So when the LLM responds, it gives you JSON that matches that schema, which is super convenient because then you can parse that JSON, you know what data to expect, and you can reliably store it or do whatever you want. If you're just getting free-form text from an LLM, how do you parse that? I don't know. It's just arbitrary text. But if you're getting back JSON, you know what to expect. This is actually powered by a new Laravel component, the JSON schema component, which you can see there in the type signature. We actually used this in the MCP package to define JSON schemas for your MCP tools, and it's used here. We wrote this component because we needed it in the AI SDK and the MCP package, but it's a really slick little component for defining JSON schemas. Yeah, I remember the first Laravel application I ever made was an AI application, and we didn't have structured output at the time. This was about three years ago; I think it was GPT-2 or something like that. I had to hope and pray that the model responded the way I asked: okay, here's the response, put equals-equals-equals between the parts, then separate it, and then I had to parse it out from that. Structured output makes that so much easier. Yeah.
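The score example being described might be sketched like this. The schema-builder API, parameter name, and result shape are all assumptions; only the idea (declare an integer, required "score" field and receive JSON matching it) comes from the talk.

```php
use Laravel\Ai\Agent; // assumed base class

// Hypothetical agent for the review-scoring example from the talk.
$result = (new ReviewScorerAgent)->prompt(
    'Score this product review from 1 to 10: "Arrived late, but works great."',
    // Assumed schema-builder shape using the new JSON schema component:
    // the LLM is constrained to return JSON matching this structure.
    schema: fn ($schema) => [
        'score' => $schema->integer()->required(),
    ],
);

// Instead of parsing free-form text, we read a known key.
$score = $result['score'];
```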
I mean, even on Anthropic's models up until recently, to get structured output you were literally telling it in the prompt, please respond with this JSON structure. It's pretty wild. Yeah, it's crazy times we live in, where we can have it be exactly the way we want, and the way the AI SDK does it makes it extremely easy. A couple of questions I wanted to get to. Someone asked: to use the SDK, do we need to purchase multiple model subscriptions, like Gemini and Claude Code? Claude Code would probably be your local coding agent, and Gemini would be used within the AI SDK, whether that's returning text responses or image generation. Multiple models would have to be added as API keys in your config, locally and in production. But am I right, Taylor, that if you're building something locally and just want to test it, you can use local models through Ollama? Yeah, that's correct. We released that pretty quickly after launch. Another thing you can do, if you don't want to sign up for multiple model subscriptions, is use something like OpenRouter, where you sign up, get one key, and can then access Anthropic models, OpenAI models, all the models through one API, one paid service. But yeah, you can also just use local models. I actually did that myself the other day, just testing stuff. You can download Ollama, pull in whatever local model you want, and run AI stuff locally. Yep. So that's Ollama: O-L-L-A-M-A.
And now that you mention OpenRouter, I think in addition to the paid tier where you pay them and they route to different providers, they also have a free tier that routes to free providers, which is really interesting. I saw another question. Okay, you mentioned embeddings and vector search. Can it replace LangChain, for example, and can the tools be a RAG replacement? Yeah, a lot of the tools can be a RAG replacement. As far as replacing LangChain, it probably depends on what you're doing with LangChain, but a lot of the ideas behind some of the built-in tools are around RAG and similar concepts: vectorizing files, searching them, doing similarity search across data using tools, things like that. I think that's a super common use case for tools in general. So yeah, there are opinions around that built into the AI SDK because it's so common. Yeah. And we'll get to that other question in a little bit. I'll jump back in: what is streaming? Yeah. Streaming is just a really easy way, instead of calling the prompt method on an agent, if I call stream, to send a stream of text back to the front end. In the same way that when you interact with chats online the text streams in line by line as it's available, the stream method in the AI SDK makes that super easy. It uses HTTP server-sent events, called SSE, for streaming text back as it becomes available. And in this example you can see we're using this then method: we can actually do something once it's all been streamed, if there's some follow-up action or arbitrary code we want to run after the stream is complete, we can do that here.
Then on the front end, in our JavaScript or using Livewire or something, we can consume that stream to show text as it becomes available, so you don't have to wait for the entire response to be generated before you see anything. Yeah, that's a really nice part of it being so succinctly tied in with the front end. I like that. For example, within Vue and React we have the useStream hook, and Livewire has wire:stream to take all of this and make it look like you'd expect in ChatGPT, streaming in line by line. Totally. What about async? This is interesting to me, because I've heard a lot of cool things about it, but I'm not quite sure how it actually works behind the scenes. Yeah. This is another example of how the AI SDK is leaning into a lot of Laravel's full-stack, batteries-included opinions. Instead of calling prompt or stream, in this case we're calling queue, where we're basically queueing a prompt. We're saying, hey, we want to call the LLM with this prompt, but we want to do it in a queued job in the background because we don't necessarily need or want to wait for the response right now. We just need to queue it to happen in the background and then do something with the response later. So when you call this queue method, it actually creates a background job on your Laravel queue to respond to the prompt, and the then and catch callbacks are invoked when the response is ready or if something goes wrong, respectively. Once we get the response, we can store it in a database, send an email, whatever we want to do with it, and if something goes wrong we can handle that in the catch callback as well.
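A sketch of the streaming shape being described: the route, model, and chaining style here are assumptions layered on top of the stream and then methods mentioned in the talk.

```php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/chat', function (Request $request) {
    // Assumed shape: stream() returns a server-sent-events (SSE)
    // response the front end can consume token by token, and then()
    // runs once the full response has finished streaming.
    return (new SupportAgent) // hypothetical agent class
        ->stream($request->input('message'))
        ->then(function ($response) {
            // e.g. persist the completed reply after streaming ends.
            // Message is a hypothetical Eloquent model.
            Message::create(['body' => $response->text]);
        });
});
```

On the client, this is what the useStream hook (Vue/React) or wire:stream (Livewire) mentioned here would consume.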
Basically, this would be used where the user submits something we want to analyze with AI, but we don't really want the user to sit and wait for it to be done. We return a response to the user right away, like, hey, we're processing this, we'll let you know when it's ready, and that happens basically instantaneously. In the background the job runs, and then we'll do something, maybe notify the user when it's done. This makes it super simple to do that. Yeah. Off the top of my head, a good use case for this would be transcribing a 100 or 200 megabyte audio upload, where you don't want them waiting on a single request for the response. That's awesome. I love that. A question on the async stuff from Len: is the async stuff non-blocking? Does it use the ReactPHP PSR-18 browser? Yeah, this async stuff is more queue-related. It doesn't really have anything to do with async in the sense of concurrency or Go coroutines or anything like that. It's more about putting something on your background queue and then doing something with it later. Yeah. I think I saw another question: is the SDK compatible with the Ollama API? I would assume so. Yeah, I think so. And, I think we documented this just the other day, maybe two days ago, in your config file you can actually override the base URL for any provider. So you can actually use any AI provider that is OpenAI-compatible or Ollama-compatible just by overriding that base URL. Yeah. A question here that we glossed over at the top.
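The queued-prompt flow being described might be sketched like this; the agent and notification classes are hypothetical, and the exact queue/then/catch signatures are assumptions based on the conversation.

```php
// Assumed shape: queue() pushes the prompt onto the Laravel queue and
// returns immediately, so the HTTP response goes back to the user
// right away; then()/catch() fire when the job finishes or fails.
(new TranscriberAgent) // hypothetical agent class
    ->queue('Transcribe and summarize this uploaded audio file.')
    ->then(function ($response) use ($user) {
        // TranscriptReady is a hypothetical notification class.
        $user->notify(new TranscriptReady($response->text));
    })
    ->catch(function (\Throwable $e) use ($user) {
        $user->notify(new TranscriptFailed($e->getMessage()));
    });
```

This matches the large-audio-upload use case mentioned here: the user gets a "we're processing this" response instantly while the work happens on the queue.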
I'm curious to hear how this was built, Taylor, and what your thoughts are. I know there are also attributes like use cheapest model and such. So, automatic failover: how does all that work behind the scenes? Yeah. Yes, you can do this, and it pretty much works how this person is asking: you pass multiple providers and models to the prompt method, or specify them in the provider attribute on the agent itself, and if one provider is overloaded or you're rate limited on it, it just falls back to the next provider in the list you gave. Maybe you try Anthropic first; if Anthropic is overloaded or you're rate limited there, it falls back to OpenAI, for example. We do that by checking the response codes we get back from the AI providers, so we can see, oh, they're overloaded, or we get, I think it's a 429 HTTP status code, if you're rate limited, something like that. Then we fall back to the next provider, and we actually raise an event that you can listen for to know you had a failover. And we're working on bringing a lot of this event visibility into Nightwatch, so you'll be able to see how many tokens you're using on which providers, how long your tool calls are taking, things like that. Anyway, that's model failover. And you mentioned some of the attributes you can put on agents, like use cheapest model, use smartest model. I wanted a nice way to just annotate an agent with use cheapest model, for example. Sometimes you have tasks, like basic text summarization, where you don't need Anthropic Opus 4.6 for it; it's too expensive. Yeah. Exactly.
So if you're just summarizing a paragraph of text into one sentence, for example, you can just use Haiku. The benefit of putting use cheapest model on the agent is that if a new cheapest model comes out tomorrow, you don't have to go look up what it is, copy its ID, and paste it into your code. If you annotate the agent with use cheapest model and just composer update to get the latest AI SDK, you're always on the latest cheapest, or smartest, model. It's nice: you don't have to update your code if you know the agent is doing something that doesn't require a lot of intelligence. Yep. I love that, because it's similar to what you mentioned at the beginning: having the one package makes it easier and more Eloquent-like, in the sense that the opinions are already formed for you, for cheapest, for smartest, even for automatic failover. Those are things you probably want when building a full-stack application; you just don't want to build them yourself. Yeah, totally. Lastly, what about attachments? This is interesting. Yes. Yeah. Attachments come up in a few places throughout the AI SDK. In this example we're using one when prompting during text generation. Maybe, as in some of my demos, you're giving it a CSV of sales leads, or a PDF of a transcript, and saying analyze this document, or parse it, or do something with it. We just make it super easy to pass attachments in various places. So you can do it here with text, and we also actually let you do it when you're generating images.
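Combining the two ideas just discussed, the annotation and the failover list might be sketched like this. The attribute names come from the talk; the namespaces, the provider-list syntax, and the facade are assumptions.

```php
use Laravel\Ai\Agent;                          // assumed base class
use Laravel\Ai\Attributes\UseCheapestModel;    // attribute from the talk; namespace assumed
use Laravel\Ai\Facades\Ai;                     // assumed facade

// Basic summarization doesn't need a frontier model; the SDK resolves
// "cheapest" for you, so a composer update tracks new cheap models
// with no code change.
#[UseCheapestModel]
class SummaryAgent extends Agent
{
    public function instructions(): string
    {
        return 'Summarize the given text in one sentence.';
    }
}

// Assumed failover shape: try Anthropic first; on an overload response
// or a 429 rate limit, fall back to OpenAI, raising a listenable event.
$response = Ai::providers(['anthropic', 'openai'])
    ->prompt('Summarize the release notes for our changelog.');
```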
So you could prompt the model, hey, here's an image, and I want to make it cartoon style, or like a painting, or change something about it, basically remix an existing image. You can use attachments for that as well. It's just super easy to attach and upload files and interact with them. Yeah, here's this image example. Maybe you're asking what's in the image. So I can give it an image and combine that with structured output to analyze images and generate structured data I can do something with. I love that. A quick question. I'm going to stop sharing my screen for a moment so we can answer this, and then I'll pull up a demo I built so we can walk through it in a practical sense, and then maybe, if we still have time, we can build something from scratch, because I'm curious to see how Claude Code works with the AI SDK too. The question says: now that we have the Laravel AI SDK, what's next? Laravel's focus on AI is exciting, and I think it's the right step forward. Are there any future plans? You mentioned the conversation-remembering agent as one of the things you're focusing on; what else? I've actually got a list of things I'm working on, post-beta and even after the stable release. So yeah, I've got a list of features I want to keep working on around conversation storage, context retrieval, all of that. Some basic stuff like pruning of the historical conversations, or compaction of the history, summaries and snapshots, things like that. There's some stuff around tools and agent loops I want to look at. Human tool approval is a good one.
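An attachment-plus-prompt call as described here might look roughly like this; the Attachment helper, named argument, and response shape are assumptions used for illustration only.

```php
use Laravel\Ai\Facades\Ai; // assumed facade

// Assumed shape: attach a local file to a text-generation prompt.
// Attachment::fromPath() is a hypothetical helper name.
$response = Ai::prompt(
    'What is in this image?',
    attachments: [
        Attachment::fromPath(storage_path('photos/desk.jpg')),
    ],
);

echo $response->text;
```

The same attachment mechanism is what the image-remixing example relies on: pass an existing image and ask the model to restyle it.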
So say you want the LLM to be able to invoke a refund-customer tool, but it needs human approval if the refund amount is greater than $200. Stuff like that: these more advanced, complicated tool use cases. And then there's other agent stuff, like agent loops: more customization of when the agent actually stops doing things. This gets into, you know, you've seen people with these so-called Ralph Wiggum loops, where you can have infinite loops of agents doing things. So a lot of agent-loop configuration and things like that. I also want to build a cool Artisan command, like an artisan agent chat command, where if you have an agent and you just want to chat with it, maybe to see how it's behaving or what it's doing, I can come into the command line, run it, pick an agent from a list using Laravel Prompts, and just chat with it and see: okay, is it behaving how I would expect? Almost like the dd() of agent debugging, the AI dd(), in a way. [laughter]

That'd be really interesting. I could almost imagine the agent having its own MCP, where your Claude Code, or whatever LLM, can interact with the agent and they can talk to each other.

Yeah. And then there are already a bunch of PRs open from the community, and one of the most requested, or most on everyone's minds, is so-called sub-agents. One easy way to think about this is: it's the ability to expose one agent as a tool from another agent. So imagine I have one primary agent, and in its tools method maybe I don't only return tools; I can also return other agents.
So it can actually call on them: okay, you're sort of an orchestrator agent, and you can call on this planning agent, or this research agent, or some other more specialized agent to do things. There's actually a PR open for that right now that I need to review. Hopefully I get it reviewed today and can at least get that sub-agent behavior out this week, but a lot of it will be driven by the community. I think there are dozens of pull requests already out there, and that's kind of how it always goes in the open-source world. I have one hand on the wheel, but the community also has a hand on the wheel, and hopefully we're getting to a good destination.

Yeah, co-piloting together to a great place. That kind of answers a couple of questions in the chat about whether you can call an agent from another agent. It doesn't sound like it's possible right now, but it's coming, in terms of agent orchestration and everything like that. Again, thanks so much to everyone in the chat. I wanted to walk through a demo that I actually built, and I'm curious to get your thoughts, Taylor, on what could be added to it, and then we can jump in and maybe build something from scratch as well. Thanks so much, everyone, for joining. If you're just joining, or you've been here the last 10 or 15 minutes and haven't heard me do this spiel yet: we're with Taylor talking about the AI SDK that was just released last Thursday. It's only been out for four days, and it's been crazy to see how many people are already building with it, how many new PRs have already been added, like Taylor said. We're walking through the possibilities of it, as well as talking through why it was built in the first place.
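The sub-agent idea Taylor describes, exposing one agent as a tool of another, can be sketched with toy classes. The class and method names here are illustrative assumptions, not the Laravel AI SDK's API, and the "agents" just echo instead of calling an LLM:

```python
# Toy sketch of "agent as a tool": an orchestrator whose tools list can
# contain other agents. Purely illustrative; not the SDK's real classes.

class Agent:
    def __init__(self, name: str, tools=()):
        self.name = name
        self.tools = list(tools)

    def handle(self, prompt: str) -> str:
        # A real agent would run an LLM loop here; we just echo.
        return f"[{self.name}] handled: {prompt}"

    def as_tool(self):
        # Expose the agent under the same callable interface as any tool.
        return self.handle

planner = Agent("planner")
researcher = Agent("researcher")
orchestrator = Agent("orchestrator", tools=[planner.as_tool(), researcher.as_tool()])

# The orchestrator can delegate to its sub-agents like any other tool.
print(orchestrator.tools[0]("break the task into steps"))
```

The appeal of this shape is that the agent loop needs no special case: a sub-agent is just another tool whose "implementation" happens to be a whole inner conversation.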
So if you're here, let us know, and if you have any questions, we'll try to get to them as best we can. But yeah, I wanted to show this little demo that I built. It's actually live right now. I'm not going to tell everyone the URL, just because I don't want it to crash while we're on stream, but I'll share it at the end, along with a GitHub repo, and I'll put it in the description if you're watching this later. I wanted to find a way to build something using almost as much of the AI SDK as possible. The premise was: I save a lot of links, and I always forget what those links are. Bookmarks have never worked for me. Even most AI tools have never worked for me, because they only save the link and maybe the meta description; they don't save any of my thoughts about it or anything like that. So I wanted to build a thing using the AI SDK that takes either a file that I upload, maybe a screenshot or a document, or a URL, and uploads it to vector storage. We're actually using the vector storage part of the AI SDK to auto-chunk and embed that. And this is one cool thing I actually didn't know. I'm curious to get your thoughts on the difference between storing something using OpenAI's vector storage versus putting it in Postgres with pgvector.

Yeah, this kind of came up at Laracon India. The end goal is really similar, I would say, in the sense that we want to store embeddings about things and then search or query them. I guess let's start with the pgvector stuff.
So in the AI SDK docs, we show you how to take a string of text and generate embeddings for it, and then you can put that in a vector column in a Postgres database, as long as it has the pgvector extension, which most local Postgres installs do, like the Herd version, and Laravel Cloud's Postgres has it. Then you can run queries on that to find things that are similar to your query. To go back to our Mexican food example, which we both like: if I stored "restaurants that have great cheese dip" and then I searched for "good appetizers," those things are semantically similar. Even though they don't contain the same words, they're similar ideas. That's the purpose of semantic search: being able to pull back things that are related. So you can store those embeddings in your own Postgres database, or you can upload files or data to OpenAI or Gemini, and they will vectorize and store them for you. One of the convenient things about storing them with OpenAI or Gemini is that you can just hand them a PDF; they will extract all of the text out of the PDF and vectorize it for you. Doing all of that locally, you're going to have to pull the text out of the PDF, vectorize it, and store it. So it is convenient to just throw files at OpenAI or Gemini and say, hey, vectorize these for me; I don't even want to have to think about it. They're two different ways of doing similar things.

Yeah, no, I love that. And so this works, and there are a couple of cool things I really love, like how easy it is to take an attachment, say an image, and grab specific things out of it using an AI agent, to say: okay, I have this image now.
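Taylor's cheese-dip example above can be made concrete with a tiny toy. Real embeddings would come from a provider (OpenAI, Gemini, etc.) and real ranking would happen in Postgres via pgvector's distance operators; the three-dimensional vectors below are fabricated purely to show why "cheese dip" can match "appetizers" with zero word overlap:

```python
import math

# Toy semantic search over hand-made "embeddings". Semantically close
# texts get nearby vectors, so cosine similarity finds the match even
# though the strings share no words.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "restaurant with great cheese dip": [0.9, 0.1, 0.0],  # food-ish vector
    "queue worker configuration":       [0.0, 0.2, 0.9],  # infra-ish vector
}
query = [0.8, 0.3, 0.1]  # pretend embedding of "good appetizers"

best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # -> restaurant with great cheese dip
```

In Postgres, the same ranking is a one-liner over a pgvector column, e.g. `ORDER BY embedding <=> :query_embedding LIMIT 5`, where `<=>` is pgvector's cosine-distance operator.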
I save a lot of screenshots, either on my phone or just grabbing things, and usually they end up in a directory on my device. I wanted a place to save all of those, and then also be able to query them later. So again, the goal of this is, one, to scratch my own itch and build with the AI SDK, and two, to show off as much of the AI SDK as possible. So let's go to Laravel News. I want to find a quick little link that I might save. Let's see, here: OpenAI releases GPT-5.3 Codex. I'm going to grab this link and go back to my app, and if I just paste it in, I can add a specific comment about this URL, or I can just capture it. And this is one of those async things happening in the background, where I can still go to different links while it's being stored and retrieved behind the scenes. It looks like, because it's a JavaScript front end, this failed. So here we're going to upload a screenshot instead. I'm going to go back to that link, zoom out, and just take a quick screenshot of it. There we go. Go back here, paste it in, and upload and analyze. What it should be doing with the AI SDK is processing the image using vision, to say: hey, let's describe what is actually happening in this image. And it's not even just transcribing it; the cool thing about vision is you get additional aspects. You can send a specific prompt in with vision to say: I actually want these particular things. If we're building a research library, I want to know what's pertinent to that. You can see here it gives a detailed description of the image: screenshots, a top header, a large rectangular card with rounded corners, and a soft drop shadow.
All the neat things you might want from an image. It's a lot more than I thought we'd get from one simple image. But the neat part is everything that's happening in the background. A couple of people asked what you'd recommend for the front end here; some said polling versus broadcasting. What are your thoughts, Taylor? And then I'll say what I did in this particular demo.

Oh man, that's a good question. There is something really nice, for simple stuff, about throwing a wire:poll on a div, and it just works. [laughter] If you don't have tons of users, or maybe it's just a tool for yourself, where you're the only user, and I think in the age of AI we're seeing more of this sort of personal software, tools people create just for themselves, then throwing in some polling is fine. Especially with Livewire, wire:poll makes it so easy, and I guess Inertia has polling as well now, since Inertia 2, I think. But in more of a production use case, say the Laravel Forge or Laravel Cloud dashboard, where there are hundreds of users in there at a time, polling can be a bit inefficient, since you're continually hitting that back end whether there's data ready or not. Broadcasting gives you more of a push mechanism, where you're pushing data only as needed, so it's a lot more efficient.

Yeah. In this particular demo, I used broadcasting, with Reverb and then the useEcho hooks to watch for updates.
So that way, when we're grabbing something from the web, or uploading a screenshot, for example, if I go in here, grab something from the AI SDK docs, and paste it in, it's happening behind the scenes, but once it finishes you'll see it appear live. This is one of those things where, like Taylor said, you could use wire:poll, constantly pinging the server every 5 seconds or whatever, and it would update automatically. But specifically for this demo, it's using the vector storage to store everything once vision has read it, and then we're getting structured output so that we can chat with it, and you can see that just popped up right there. Now we can chat over all of those things. And if I pop over to this chat, this is where we have a couple of additional features we already walked through with the AI SDK. We have conversations, remembering those particular agents, but we also have streaming set up, as well as file search and web search. Those two tools, for me personally, were incredibly easy, and I loved not having to build them myself. I've built a couple of AI tools where I was relying on, say, my own web search within PHP, instead of giving the AI the tools to use itself.

Yeah.
I'm curious, we kind of talked about the tools already, but what were your thoughts on those three: file search, web search, and, is it image search?

File search, web search, and web fetch are three of the built-ins. Well, I guess there's also similarity search, which we haven't really gotten into. But yeah, I wanted to give people what I think are incredibly common use cases. Web search and web fetch, of course, are used to overcome the LLM's training knowledge cutoff, so it can actually go out and fetch relevant data. I use this type of functionality with LLMs a lot when I'm maintaining the Laravel docs, or when I'm writing the docs for a new framework feature. One thing I'll do is go into the docs, fire up Claude Code, and say: hey, can you write docs for this PR? And I'll paste in the PR URL on GitHub. It works incredibly well, actually. So it's super convenient to give the AI the ability to fetch web pages in a variety of scenarios.

Yeah, no, I love that. I'm just going to ask it a question. Let's see, maybe we'll say: what was that latest model from OpenAI? We should see it search the specific things it already has pertaining to my local user, and we should see those tool calls actually being made here. So it's searching the knowledge base, and then it found it: the OpenAI model mentioned is GPT-5.3 Codex, in our saved notes. So this is a use case for that conversational API we talked about, in the sense that it remembers things, versus if I were to ask without that context.

And you even had the tool call show up too. That's pretty fancy.

That was fun. I didn't know that was possible, because I've only ever worked with chat where it was just returning text.
So when it returns, and I could be wrong or misremembering, but I think when you stream, OpenAI specifically, which is the provider I'm using, tells you when it's calling particular tools as JSON, so it's really easy to parse that within your stream. Is that correct?

Yeah. When you stream stuff, you get events like tool call start and tool call end; you get these various events for when tools are being used.

Yeah. I'm curious, I actually haven't tested if I can search on the web then, uh, of…
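The streamed events Taylor describes (tool-call start/end markers interleaved with text deltas) can be sketched as a small consumer. The event names and shapes below are illustrative assumptions, not the SDK's or OpenAI's exact wire format:

```python
# Toy consumer for a streamed agent response: render tool-call markers
# inline with the text deltas as they arrive. Event shapes are made up
# for illustration only.

def render(events):
    out = []
    for ev in events:
        if ev["type"] == "tool_call_start":
            out.append(f"(calling {ev['tool']}...)")
        elif ev["type"] == "tool_call_end":
            out.append(f"(done: {ev['tool']})")
        elif ev["type"] == "text":
            out.append(ev["delta"])
    return "".join(out)

# A fabricated stream resembling the demo's file-search answer.
stream = [
    {"type": "tool_call_start", "tool": "file_search"},
    {"type": "tool_call_end", "tool": "file_search"},
    {"type": "text", "delta": "The model mentioned is "},
    {"type": "text", "delta": "GPT-5.3 Codex."},
]
print(render(stream))
```

A real front end would apply the same branching to server-sent events as they arrive, showing a "searching the knowledge base" indicator between the start and end events instead of buffering the whole response.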

Transcript truncated. Watch the full video for the complete content.
