Debounceable jobs, Pest sharding, and API starter kits

Laravel News | 00:44:43 | Apr 30, 2026
Laravel 13.6.0 adds debouncing for queued jobs, keeping only the most recent dispatch within a specified time window, and allowing this behavior to be applied at the job or at the dispatch site with a maximum wait. The update also touches related improvements like JSON health checks, a JSON formatter for structured logs, Cloudflare email support, and enhanced model factories with pivot data for tests.

Laravel News breaks down debounceable jobs, Pest sharding, API starter kits, and a flood of 13.x enhancements with practical examples and release notes.

Summary

In episode 257 of the Laravel News podcast, the crew dives into Laravel 13.6.0’s debounceable queued jobs and health route JSON responses, then rounds through 13.5 and 13.4 improvements like Redis cluster safety, delay attribute coverage, and enum support. They explain how debounce for queued jobs keeps only the latest dispatch within a window, and how to apply it at dispatch or on the job itself. They cover a new JSON formatter for structured logs and the Cloudflare email service driver. The discussion moves to model factories, pivot values, and a more ergonomic assertDatabaseHas syntax. Pest receives attention with flaky tests, case-sensitivity architecture checks, and the new time-based sharding feature that balances CI loads by actual test time. The hosts warn about a Composer vulnerability patched in 2.9.6/2.2.27, announce Laravel PDF 2.6 attachments for mailables, and spotlight community tools like Spatie’s AI skills, API starter kits, LLPhant (a PHP generative AI framework), and various cutting-edge packages (GEO for generative engine optimization, Paperdoc for multi-format docs, and mobile passes for Apple/Google Wallet). They also tease PHPverse 2026 and a handful of tutorials and articles in the show notes. Throughout, the hosts share practical takeaways, install steps, and links to PRs, repos, and docs.

Key Takeaways

  • Debounce for queued jobs in Laravel 13.6.0 keeps only the most recent dispatch in a time window, configurable via a seconds value and an optional maximum wait that caps how long execution can be deferred.
  • Health route /up now returns JSON when the request expects JSON, improving API health checks without breaking HTML dashboards.
  • JSON formatter for structured logging (Log::build with an array) enables easy integration with ELK/Datadog for log aggregation.
  • Cloudflare email service driver is supported in Laravel, providing an alternative to AWS SES for transactional emails.
  • Model factories now support hasAttached pivot values, enabling multiple records with different pivot data in a single factory call.
  • Pest 4.5 adds flaky test support and a CLI flag to run only flaky tests; 4.6 adds time-based sharding for CI, balancing shards by actual test duration.
  • Composer security advisory: run composer self-update to get 2.9.6 (or the 2.2.27 LTS release), which patches a command injection vulnerability in the Perforce VCS driver.

Who Is This For?

Laravel developers who want to stay current with 13.x innovations, testing/CI optimizations, API starter kits, and productivity improvements like debounced jobs and enhanced factories. This is essential viewing for teams aiming to reduce CI times and harden their deployment pipelines.

Notable Quotes

"Debounce essentially does the opposite in that it will keep the most recent and will discard the previous."
Explanation of how debounce differs from unique constraints for queued jobs.
"When the same job is dispatched multiple times within a window, only the last dispatch executes."
Core behavior of Laravel 13.6.0’s debounceable queued jobs.
"If you want to run only flaky tests, Pest 4.5 adds a companion --flaky CLI flag."
Mention of Pest’s new flaky test filtering feature.
"Time-based sharding balances CI shards by actual test execution time."
Pest 4.6 feature aimed at reducing CI wall time.
"Composer 2.9.6 and the 2.2.27 long-term support release patch a command injection vulnerability in the Perforce VCS driver."
Composer vulnerability advisory discussed on the show.

Questions This Video Answers

  • How do I enable debounceable queued jobs in Laravel 13.6.0?
  • What is time-based sharding in Pest and how do I set it up in CI?
  • How can I attach PDFs directly to mailables in Laravel PDF 2.6?
  • What are the new features in Pest 4.5 and 4.6 for test reliability and CI performance?
  • How do I configure the JSON health check route to return structured JSON?
Chapters: Laravel 13.6.0 · Debounceable queued jobs · Redis cluster hashing with hashtags · Health route /up as JSON · JSON formatter logging · Cloudflare email service · Model factories · Pivot data in hasAttached · Pest 4.5/4.6 · Flaky tests CLI and time-based CI sharding
Full Transcript
Hey everybody, welcome to the Laravel News podcast. This is episode 257. The date today is Monday, April 27th of 2026. Michael, my good friend, how's it going, dude? Doing great. It's April the 27th or the 28th here and it is still warm. It's still in the mid-to-high 20s Celsius here, which is outrageous. I think we've got about a week of it left before winter starts to turn up and we've got rain and cold, which all starts on soccer Sunday for us. So, that's always wonderful. Yeah, we've been dealing with a fair bit of rain here, dude. I don't think I told you this: we had eight tornadoes in our county over the weekend. Yeah, that was kind of crazy. Anyway, glad to be through that. But yeah, onward and upward we go, my friends. Glad to have you all here hanging out with us. We have got officially 40 minutes before we have to be wrapping this one up, so we are going to hit it hard. Here we go. Debounceable queued jobs in Laravel 13.6.0. So, let's talk about what that is specifically. This release adds debouncing support for queued jobs. Here's how I want you to think about it. If you've ever used ShouldBeUnique on a job, that gives you a method that allows you to define a specific key, and if that job attempts to be queued again, it will discard it and will not queue it. Does this make sense, Michael? Should be unique? Yes. Okay. Debounce essentially does the opposite, in that it will keep the most recent and will discard the previous. That's how it works. So that's basically what it is. When the same job is dispatched multiple times within a window, only the last dispatch executes. So if there's something that's already queued, it's going to discard those at execution time, and it's only going to grab the most recent one.
So all you have to do is apply the DebounceFor attribute to any queued job and then specify an amount of seconds. If a user edits a document 10 times in 30 seconds, only the last of those dispatches would run. DebounceFor(30) is basically how that works. It can also be applied at the dispatch site, so where the job is being dispatched, without modifying the job class. And then you also have a max wait parameter, and that caps how long execution can be deferred. So that is debounce. Again, it's kind of the opposite of ShouldBeUnique. It's not exactly the opposite, but you heard my explanation earlier. It's different, but operates similarly. Okay, secondly, the framework's built-in health route, which is /up, now returns JSON when the request expects JSON. I believe this route was originally introduced in Laravel 11 with the new bootstrap service provider setup, which mounts your web routes, your API routes, and this up route. Previously the route always rendered HTML, which was awkward for API-only applications whose load balancers and uptime monitors were expecting JSON. So now, if the request expects JSON (an Accept header of application/json or something like that), it will return a "status: application is up" JSON response, or, when a DiagnosingHealth listener throws and debug mode is off, it will say "application experiencing problems" as JSON. The status codes of 200 and 500 remain unchanged, and non-JSON requests will continue receiving the existing Blade page. You can check out more about that one in pull request 59710. Then, a new JSON formatter provides structured JSON logging output, which is useful for log aggregation systems. Think ELK or Datadog or something like that.
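Pulled together, the debounce behavior described above might be sketched like this. This is a hypothetical sketch based on the discussion: the attribute and method names (DebounceFor, debounceFor, maxWait) and their namespaces are assumptions, not confirmed API, so check the 13.6 release notes before relying on them.

```php
<?php

// Hypothetical sketch; attribute/method names assumed from the episode.
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

#[DebounceFor(seconds: 30)]
class SyncDocument implements ShouldQueue
{
    use Queueable;

    public function __construct(public int $documentId) {}

    public function handle(): void
    {
        // Only the most recent dispatch within the 30-second window runs;
        // earlier queued copies are discarded at execution time.
    }
}

// Or applied at the dispatch site without touching the job class,
// optionally capping how long execution can keep being deferred:
SyncDocument::dispatch($documentId)->debounceFor(30, maxWait: 120);
```

The upshot is the same either way: rapid repeat dispatches collapse into the single most recent one.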
So all you have to do is Log::build, and you can pass in an array of values, and then you can say ->info with your log message, with the second argument being the context array that you can typically pass in there. The JSON formatter now allows you to push that structured JSON data out. You set the formatter to the JSON formatter, and again, you can find information about how you set that up in the show notes, but that is now available to you. Cloudflare email service support. Laravel now supports Cloudflare's email service for sending emails through Cloudflare's infrastructure. You can think of this as an alternative to something like SES, AWS's offering for sending transactional emails. All you have to do is install the driver and then configure it in your config/services.php; you'll just set up your Cloudflare account ID and your token in there. An array of pivot arrays for hasAttached. So, if you've never used model factories before, you should definitely check them out. This was something that was a little bit arduous before, something you had to figure out on your own. Probably four or five versions ago in Laravel, model factories came out. Now, every time you create a model, it creates an associated factory for it in your application, so you can make it easier for yourself to test. The factory hasAttached method now accepts an array of pivot arrays, which makes it easier to create models with different pivot values. So let's say you have a blog and a user: your site hosts multiple blogs, and a user wants to subscribe to a blog. Really, all you're doing is associating a user with a blog, but that pivot there is a subscription.
You can name it, and you can associate fields with it too, right? So on those pivot values, sometimes you want to be able to pass in values. hasAttached now allows you to attach the related record, but also to put those pivot values in there as well, so you can create multiple models with different pivot values. Hopefully that makes sense. A little bit muddy there, but I think you get the idea. Kind of like sequences, I suppose. So if you're attaching multiple posts, you can say that the first post gets these pivot values and the second one gets those, in a sequenced array. It just streamlines the process, rather than having to do multiple hasAttached calls because you want to set different values on the different records that are created. Yeah, absolutely. And in those tests, a lot of the time you're going to want to make assertions on database records. Previously, if you were using something like assertDatabaseHas, you'd only be able to pass a single assertion there: assert that the database has a record where the name is Bob and the email is bob@gmail.com. Then you'd have to make a second assertion for Alice and alice@gmail.com. Well, now you can make multiple assertions inside of a single call. You call assertDatabaseHas, pass in the table name or the model class, and then you can pass an array of your assertions, rather than having to make multiple calls. Just a little bit of a quality-of-life improvement there. Of course, there are other fixes and improvements; you can read about all of those in the show notes. All right, over to you, Mr. Dyrynda.
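The two testing conveniences just described could look roughly like this. The model names, pivot columns, and exact array shapes below are illustrative guesses based on the discussion, not confirmed signatures:

```php
use App\Models\Blog;
use App\Models\User;

// hasAttached with one pivot array per attached record (illustrative):
$user = User::factory()
    ->hasAttached(
        Blog::factory()->count(2),
        [
            ['role' => 'subscriber', 'notify' => true],
            ['role' => 'editor',     'notify' => false],
        ],
        'subscriptions'
    )
    ->create();

// assertDatabaseHas with several expected records in a single call:
$this->assertDatabaseHas(User::class, [
    ['name' => 'Bob',   'email' => 'bob@gmail.com'],
    ['name' => 'Alice', 'email' => 'alice@gmail.com'],
]);
```

Before, each expected record would have needed its own assertDatabaseHas call.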
Laravel 13.5 adds first-class Redis cluster support for the queue driver and concurrency limiter, fixing cross-slot errors on AWS ElastiCache Serverless and other Redis cluster deployments. It also completes delay attribute coverage for queued mailables, adds controller middleware attribute inheritance, expands enum support across manager classes, and of course, as always, includes a number of bug fixes. You can probably hear the frog in my throat. I went to the football on Saturday night, and for the first time in probably seven or eight years, I was excited to be there. So I've burnt my voice out, and four days later it's still broken. Yeah, it's still decent. You got this. So, when using AWS ElastiCache Serverless, which runs Valkey, or any Redis cluster deployment, Laravel's Redis queue and concurrency limiter previously failed with cross-slot errors. The queue's Lua scripts operate across multiple keys, queues:default, queues:default:reserved, and queues:default:notify, which hash to different cluster slots, and Redis cluster prohibits that in a single command. Laravel will now automatically wrap queue names in Redis hashtags when the connection is a cluster, ensuring all related keys hash to the same slot. Different queues will still distribute across the cluster naturally, and the public getQueue method is unchanged, so existing integrations that consume queue names continue to see the same format as before while Redis operations themselves use cluster-safe keys. Non-cluster users are unaffected. So if you've run into any of these issues, if you're using Valkey, which I believe is powering Laravel Cloud's implementation, you may have run into these before. This has been fixed up as part of Laravel 13.5. Thanks to Timmy Lind for that fix. The delay attribute support added across queued event listeners, jobs, and notifications in Laravel 13.4 now applies to queued mailables as well.
Previously, mailables only checked the delay property and ignored the attribute entirely. The delay property still takes precedence when explicitly set, matching the behavior of the other dispatchables. The middleware attribute on a base controller is now inherited by child controllers. Previously, if you had defined this on a base controller, any child controllers would ignore the middleware attributes defined on their parent, requiring duplication across each class, and of course, if you forget to do that, you end up with all kinds of hinky things happening. So, thanks to @nijuranga for that fix. Enum support has been added to several manager classes that were previously missing it. For example, the cache manager's store, driver, memo, forgetDriver, purge, and setDefaultDriver methods now accept a UnitEnum. The mail manager's mailer, driver, and purge methods now accept enums, and the auth manager's guard, shouldUse, and setDefaultDriver methods now accept enums. Laravel also added enum support to the base manager driver method, extending the same pattern to other manager classes that inherit from it. This continues the enum support wave that we have already covered across the queue manager, the log manager, and the database, filesystem, Redis, and broadcast managers, and we've seen the enums come in in various bits and pieces across the framework for a while now as well. Closure values in updateOrCreate and firstOrNew are now accepted for the values argument, completing the lazy evaluation pattern that was introduced in Laravel 13 for firstOrCreate and createOrFirst. This allows you to defer any expensive operations, like geocoding or API calls, until you know the record actually needs to be created or updated. The closure is called exactly once per method call, and the closure is never invoked when the record already exists.
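The lazy closure-for-values pattern just described might look like this. The Location model and geocode() helper are made up for illustration:

```php
use App\Models\Location;

// The closure passed as the "values" argument is only invoked when the
// record actually needs to be created or updated, so the expensive
// geocode() call (a hypothetical helper) is deferred until then.
$location = Location::updateOrCreate(
    ['address' => $address],
    fn () => ['coordinates' => geocode($address)],
);
```

When a row for that address already exists with nothing to update, the closure never runs and no geocoding request is made.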
And the last thing we have here today: a new cache handleUnserializableClassUsing hook lets you register a callback that runs when a cached value deserializes to the magic __PHP_Incomplete_Class, which can happen when the serializable classes config in your Laravel 13 application is in use and a class is missing from the allow list. No handler is registered by default, so this is purely opt-in, with no behavior changes for existing applications. That is all for Laravel 13.5. I'm going to get some water. Okay, if you're not back by the time I'm done with 4.5 for Pest, I'll jump into 4.6 for Pest as well. So let's start with 4.5. Pest, if you don't know, is a testing framework built on top of PHPUnit by our good friend Mr. Nuno Maduro, and there is a new release, Pest 4.5. The first thing on the list is flaky test modifiers and the --flaky CLI option. On occasion, you have these tests that fail intermittently. It could be due to timing issues or maybe external dependencies. Sometimes there are non-deterministic conditions; it could be based on the time of day that the test gets run. Whatever the cause, we all have them in our code bases, right? So if you would like to, you can now mark a test with ->flaky(). What this does is, when the test fails, Pest will automatically retry it up to the configured number of times before reporting it as a failure. You can almost think of it like a queue worker where you say retry three times before you actually fail the job. It's the same sort of thing with tests: when a flaky test fails, it will automatically be retried up to the configured number of times before it's reported as a failure. The default retry count is three. You also have a companion --flaky CLI flag, and what that will do is filter the test suite to only run those tests that are marked as flaky. You might do this to verify or debug unstable tests in isolation.
One thing to note is that when you're running these flaky tests, beforeEach and afterEach do run between each retry, so all the state should be reset properly for each attempt and you don't end up with stale state persisting between them. It gives you a fresh run each time. Okay. If you remember, a while ago Nuno also introduced this idea of architecture tests, where you can make assertions against your codebase and say, I always want final on all my classes, or I never want final on my classes. That was sort of a fun and funny example. There are all sorts of assertions out there now for architecture tests, but a new one has been added, and that one is toBeCasedCorrectly. So, macOS and Windows use case-insensitive file systems by default. Is that correct? Do Mac and Windows use case-insensitive or case-sensitive file systems? I know for sure Mac is case insensitive. It's always fun when you have a lowercase class name in an uppercase file name and then you try to fix it. Yeah, that's exactly what's happening. And on macOS, trying to rename them is always a fun experience, because you can't just change a file name from lowercase to uppercase; weird things will happen. You have to do a git mv to a completely different file name, otherwise the change doesn't get tracked properly. Yep. And so the idea here is that mismatched casing is oftentimes invisible in local development, right? It doesn't flag anything, but then you ship it and there are errors in Linux, right? You have a Linux server running your production app, or CI. Yep. So now, with arch, which is architecture tests, you can expect your app to be cased correctly, and what that does is make sure that each of your classes has a file name that matches the class name. So that's great.
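A rough sketch of the two Pest additions above. The retry default and the exact expectation target are taken from the discussion; treat the details as assumptions rather than confirmed API:

```php
// Flaky test: retried up to the default of three times before failing,
// with beforeEach/afterEach running between each retry.
it('talks to a rate-limited external API', function () {
    // ... non-deterministic assertions here
})->flaky();

// Architecture test: every class file name must match its class casing,
// catching bugs that only surface on case-sensitive (Linux) file systems.
arch()
    ->expect('App')
    ->toBeCasedCorrectly();
```

And to run just the unstable ones in isolation, the companion flag would be something like `./vendor/bin/pest --flaky`.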
We also have a new flag called --only-covered. This is related to the --coverage flag you might put on there to see coverage output. If you're only interested in seeing the files that actually have coverage, you can pass --only-covered, and that will hide all files that do not have any coverage from the output. This is useful when you're working on a large application where you only want to focus on which files your new tests actually cover, without scrolling past a long list of uncovered files. It does not affect the min or exact thresholds; those calculations still include all files. But --only-covered is the new flag there. Okay. Time-based sharding, Pest 4.6. Michael, hit us. I was very excited to see this one. This is interesting. Or curious. No, no, no. You go for it, and then I think I've got some follow-up commentary. Go ahead. Pest 4.6 adds time-based shard distribution, which is a new mode that balances CI shards by actual test execution time rather than file count. If you're already using --shard in CI, recording timing data with the --update-shards argument and committing the resulting JSON file is all that's needed to switch those jobs to time-balanced distribution. When running tests in parallel across multiple CI shards, splitting by file count can produce uneven jobs: one shard finishes in seconds while another takes several minutes. Pest 4.6 addresses this with time-balanced sharding. So, as we mentioned, the --update-shards command allows you to record per-class timing data.
You can combine this with the --parallel argument to speed up the timing run, which will then write a shards.json file in the hidden .pest directory inside your tests directory, storing the timings for each of the test files. Once you commit this and then run the shard CI job as usual, because the timing file is present, Pest will automatically use time-balanced distribution. The output will confirm this by showing "time balanced" in the shard output. When test files are added or renamed after the timing file was last generated, Pest detects the staleness and displays a warning. The tests will still run; new files are distributed evenly across shards, while known files remain time balanced. Deleting test files does not trigger the warning, and stale timing entries are ignored until you regenerate the file. So the main crux of this: when you've got, say, a thousand test files and you want to run in five shards, we're going to divide those thousand tests by five, so each shard gets 200 tests to run, as we said in the article. Some of those might be very fast; there might be unit tests or tests that don't do a lot of scaffolding and setup, so they run very quickly. Others are making requests, writing to the database, scaffolding out seeders and things like that, and can take quite a while. So by calculating how long each of the tests runs, it's possible to go through each of those tests and group them in such a way that the average length of each of those shards is the same amount of time. So where we had some test shards that would run in 8 minutes and some that would run in 2 minutes, now they all run in 4 minutes across the board. The total parallel execution time of our tests in CI is 4 minutes, which is incredible considering, you know, we spoke about this 12 or 18 months ago, when our tests were taking 15 to 20 minutes to run through CI. So this is a huge improvement. Nice. Very nice. Yeah.
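The workflow described above boils down to a few commands. Flag names and the timing-file path follow the discussion (tests/.pest/shards.json) and should be checked against the Pest 4.6 docs:

```shell
# Record per-class timing data locally; --parallel just speeds up
# the recording run itself.
./vendor/bin/pest --update-shards --parallel

# Commit the generated timing file so CI can use time-balanced shards.
git add tests/.pest/shards.json
git commit -m "Record Pest shard timings"

# CI then runs shards exactly as before, e.g. shard 1 of 5; with the
# timing file present, Pest switches to time-balanced distribution.
./vendor/bin/pest --shard=1/5
```

When shard durations start drifting apart again, rerun the first two steps to regenerate and recommit the timings.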
So the thing I was curious about is the committing of that shards.json file. In CI, you would have to update the shards and then write that shards.json back to your repo, I guess. Yes, it's stored as a committed file. Gotcha. Yeah. I typically don't have my GitHub Actions push that stuff back in, but I guess that's not the Actions; you'd update it in your local environment, commit the shards file, and push it, and then you get the warning in CI. It depends on how much it drifts. We've added tests and we're still seeing about 4 minutes; when we start seeing drift between each of those shards, then we'll go through and manually regenerate it. Jake Casto on Twitter was posting that they've got their shards down to about five minutes where their CI used to take an hour to run. Every time they pushed or opened a pull request, it'd be an hour of tests running. Now they've got it down to 10 shards running in five minutes. That's awesome. The time balancing has taken all these shards and split them up in a much more sane way. So, yeah, definitely worth checking out. And as I said, reading through the article, it is as simple as generating and committing the shards file if you're already using shards in CI, and it will just sort itself out. It's probably one of the most exciting, beneficial changes to Pest since parallel testing came into it, I think. Yeah, very cool. We've been using some of that shard stuff. It's definitely helped. I think we have four to six shards running, and it's definitely much faster. Absolutely. Okay, folks. This is your bi-weekly security vulnerability notice, right? There's one every time we record the show, it feels like. Here's the long and short of it.
You should run composer self-update. That's what you should do. That's the whole post. That's it. That's all you've got to do. The reason why is that version 2.9.6, and version 2.2.27, which is a long-term support release, patch a command injection vulnerability inside the Perforce VCS driver. Basically, it didn't properly escape values that were used in shell command construction, and so it was possible that something could run code in that directory after you ran composer install. The good news is they did not find any evidence at all, across all of Packagist, that this had ever been exploited. They also searched everything in private packages and did not find anything. So this is certainly something we should be aware of and certainly something we should update for, but the good news is this isn't something where you have to go rotate all your keys or anything like that. If you're really interested in how it all worked and why it's a problem, there is an official writeup from Composer. It sounds to me like this is not something that can be exploited if you're just putting something into your dependencies; this is more specifically, I believe, if you are listing out a VCS type of repository that is getting linked in as a dependency. So, another layer removed: if you're not doing that, you're also probably fine. But again, please run composer self-update and that will take care of all the problems. There we go. Attach PDFs directly to mailables in Laravel PDF version 2.6. This adds new Attachable contract support to the PDF builder, so you can pass a generated PDF directly to the attach method in a mailable or notification without having to first save it to disk.
The PDF builder now implements Laravel's Illuminate\Contracts\Mail\Attachable interface, which means any PDF builder instance works wherever Laravel expects an attachable. The file name comes from the name method, the .pdf extension is appended automatically if missing, and the MIME type is set to application/pdf automatically. You can add it in the mailable's attachments method, which returns an array of attachments. That is it. Useful feature. Thanks to the team at Spatie for that one. Yeah, speaking of Spatie, I remember when Spatie released their guidelines. They had an open-source repository where you could pull down their guidelines and then customize them to meet your needs and the way you handle things at your organization. Well, following this pattern of releasing AI skills, Spatie has released their coding guidelines as AI skills. Skills are reusable instruction sets for your coding assistants, and they automatically activate based on context. You can think of them as project-aware prompts that keep your agent in line with your conventions without having to repeat yourself over and over again. A lot of people have written their own skills and all that, but this package ships with four different skills covering the areas Spatie cares about most. So, PSR-12 standards, typed properties, constructor promotion, etc. They also have one related to JavaScript and how they handle their Prettier configuration, named functions, and destructuring patterns. They have how they handle version control: what do they do with branch naming and conventions? How about commit messages? How should they handle squash merge strategies? And then you also have security-related items: SSL requirements, CSRF protection, password hashing, etc. You can install all of those using Laravel Boost if you'd like to.
So, install that via Composer and then just run boost install. Of course, there are other options for how you'd like to install those, and all that is in the show notes. Great job, Spatie. Any of you that have been around the community for a while, maybe since the original starter kits and starter projects, have been asking for API starter kits, and they are officially on the way. On the Laravel live stream, Aaliyah and Wendell Adriel fielded the community questions that kept asking: when do we get an API starter kit? And Wendell confirmed he's been working on exactly that. The pull requests are already open, still in draft, on Maestro, the Laravel orchestrator repo used to build the individual starter kits, and there are two PRs in flight. One adds the stateless API, which is the base API starter kit with authentication and everything you'd expect, and a teams variant is being added to the API starter kit as well. There is no release date as yet; the team still needs to review, refine, and gather internal feedback, but it is coming, and team support looks likely to land alongside it. So if you want a preview or want to help out, jump into the Maestro repo and leave feedback on the PRs. We have links to those for you in the show notes. Very good. LLPhant. L-L-elephant, right? Get it? Mhm. This is a PHP generative AI framework that was inspired by LangChain. I looked at this a little earlier this week, and I'm going to do my best to read through this one in a way that makes sense. So, the first thing is that this has multi-provider LLM support, so you can switch between any of your AI services with minimal code changes. I'm not going to list out all the different providers you might use; they're all available, and they've got the APIs to support that. Wonderful. LLPhant also includes a pipeline that allows you to easily build RAG applications.
That's retrieval-augmented generation. The trick with this is you have to be able to feed items in so that you can get them back out, and how exactly do you do that? Well, that's what it tries to make easy for you. It will take in PDF documents, Word documents, and text files; it will automatically split them into chunks for you, generate the vector embeddings for you, and then store them in your preferred vector database. They've got support for a bunch of different vector databases: Doctrine, Redis, Elasticsearch, MongoDB, etc. The list goes on. Helpful for those of you who are just saying, hey, I need a RAG pipeline, give me a pre-built one. This will handle all of that for you. But in addition, it will also handle the querying of those vector embeddings and do question answering. The question answering class handles the entire RAG workflow: retrieving the relevant documents from the vector store and then generating contextualized responses. It also has things like guardrails for safety, so people can't just go around your prompts and hack your little AI agents. It will also implement multi-query transformations to improve retrieval quality, so you can give it a multi-shot approach, I suppose, where you ask it to find the relevant pieces and then give you the best one out of those, or however you might like to do that. It also supports function calling and tools, so that your LLM can interact with external APIs and services. All you have to do is define your tools as PHP classes, and then the LLM will decide when to invoke those tools based on the conversation context. Pretty cool. Check it out at llphant.readthedocs.org. JetBrains PHPverse 2026 is returning on the 9th of June, bringing together PHP developers worldwide for a free online event. The conference runs from 11:00 until 5:50 p.m.
UTC and features talks from prominent voices across the PHP ecosystem. Last year, PHPverse reached over 55,000 developers, with 2,500 watching at the live peak. This year's lineup continues the tradition of featuring leading PHP community members: Elizabeth Baron, the executive director of the PHP Foundation; Ashley Hindle, the founder and CEO of Fuel; Jonathan Bossenger from Automattic; Nils Adermann, co-founder of Packagist; and Fabien Potencier, founder and project lead at Symfony. Larry Garfield will explain the RFC process that shapes the future of the PHP language, and our very own Jeffrey Way, founder of Laracasts, will step beyond PHP to explore how AI is transforming the role of developers. Nuno Maduro will be hosting, and Brent Roose will welcome attendees and set the stage for the day. The event partners include Laravel, Symfony, Laracasts, SymfonyCasts, the Dutch Laravel Foundation, Laravel News, the PHP Foundation, Automattic, Private Packagist, and more. If you would like to join, it is a completely free event to attend. The sessions will stream on YouTube, and attendees can watch live or catch the recordings later. Questions can be asked in the YouTube chat during presentations. You can register to receive reminders and the link to join; we'll have links to all of that for you in the show notes. Very good. We're going to move on to packages now. Hey, if you were at one point an SEO master — your thing was "I'm really good at search engine optimization" — I feel like that was a shtick for a while, right? We've sort of moved past that. What we're looking at now is: how are these LLMs indexing my content? How are my products showing up in AI-generated answers at the top of Google or Bing or DuckDuckGo, whatever that might be? And if you're concerned about that, which you probably should be, this package brings generative engine optimization directly into your Laravel app.
So GEO is what we're talking about here: instead of SEO, search engine optimization, it's generative engine optimization. The package brings it into your Laravel app, giving you structured metadata, AI-readable feeds, and an audit dashboard without having to build all of that yourself. The entry point is a HasGeoProfile trait, which you add to any Eloquent model you want visible to AI crawlers — a product, an article, a listing. On that model you define a geoProfile method, which maps your model's fields to the package's schema for JSON-LD injection. What is JSON-LD, you might ask? It's JSON Linked Data — a format that's easy for humans to read and write, and since it's based on JSON, it's an ideal fit for REST web services and unstructured databases, and it's really nice for AI engines to read. So you define this geoProfile, and the output is a structured JSON-LD block that describes your content in a format LLMs can parse when crawling your site. Second, there's llms.txt and the AI product feed. Two scheduled artisan commands keep your AI-crawler-facing files up to date. One generates the llms.txt file, a plain-text index that guides AI crawlers through your site's content — you can think of it almost like a robots.txt, but for LLMs. The other outputs an AI product feed as JSON, structured specifically for the LLMs indexing your site rather than a traditional RSS or sitemap format. You can wire both of those up into your schedule. And lastly, you get a GEO scoring dashboard, served at /geo, which gives each of your registered models a 0-to-100 score based on AI signal completeness.
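Putting those pieces together, a model wired up for GEO might look roughly like this. The HasGeoProfile trait and geoProfile method come from the show's description, but the namespace and schema keys here are assumptions — consult the package docs for the real API:

```php
<?php
// Sketch of the trait-and-method setup described above. The trait and
// method names follow the show's description; the namespace and the
// exact schema helpers are hypothetical.

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Vendor\Geo\Concerns\HasGeoProfile; // hypothetical namespace

class Product extends Model
{
    use HasGeoProfile;

    // Map model fields to schema.org-style JSON-LD properties so AI
    // crawlers get a structured description of this record.
    public function geoProfile(): array
    {
        return [
            '@type'       => 'Product',
            'name'        => $this->name,
            'description' => $this->description,
            'offers'      => [
                '@type' => 'Offer',
                'price' => $this->price,
            ],
        ];
    }
}
```

The package would then render this array as a JSON-LD script block on the page and include the model in the llms.txt index and product feed.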
The score looks at things like: are your descriptions filled in? Are the schema fields present? Are there citation-worthy data points? You can configure which models appear in the audit view, and of course you'd want to lock that route down with auth or admin middleware before deploying. You can check out the source code and documentation on GitHub — and rather than typing the URL, just go to the show notes; it's easier to click on it there. There you go. Paperdoc is a PHP library by Zaraka Muhammad Ali Akram for generating, parsing, and converting documents across multiple file formats through a single unified API. Rather than juggling several format-specific packages, Paperdoc gives you a consistent interface for everything from PDFs to spreadsheets. It handles both modern formats — PDF, DOCX, XLSX, PPTX, HTML, CSV, and Markdown — and legacy formats — DOC, XLS, and PPT — and all of them work bidirectionally: you can both parse existing files and generate new ones in any of them. Main features include document generation, parsing, format conversion, rendering to strings, OCR processing, AI augmentation via an optional Neuron AI integration, thumbnails, and batch processing. Paperdoc ships with first-party Laravel support, including a service provider, facade, and artisan commands targeting Laravel 11 and above, and the facade is registered automatically via package auto-discovery, giving you a clean Paperdoc interface. You can find the source code on GitHub and learn more on the official Paperdoc website, which you will find in the show notes. Steve McDougall is a dude who used to be very present on the Laravel News team. He's still around, and he has been well known for his skill in creating really nice APIs — I think he's even got a course on it. Really smart guy when it comes to that sort of stuff.
Well, Steve McDougall has released an API skill, which is a Claude Code skill that captures his production API conventions for Laravel 13 and up. As I said, he's well known in the Laravel community for his work around API design, so this is one that's very much worth paying attention to. The idea is that once it's installed, Claude Code will pick up the skill automatically, so rather than explaining your preferred patterns at the start of every session, the skill keeps the agent on the same page across all your projects. I'm not going to list out everything it enforces; you can read that yourself. Note that this particular skill is pretty opinionated, just like Steve himself, and in all the best ways — I say that with all the love in the world. He has a specific way he likes to write things, which is why people like following him. Sometimes it's just nice to be told, "this is the best way to do it." So he's got a bunch of opinions and things this skill enforces, just as we were saying before. There are a couple of ways to install it: it's really just a git clone, then you put it into your configuration and away you go. Thanks, Steve, for that one, and thanks, Yanick, for writing it up. Laravel Mobile Pass is a way to generate Apple Wallet and Google Wallet passes in Laravel. This is a new package from Spatie, in collaboration with Dan Johnson, and it generates mobile wallet passes for both Apple and Google Wallets. It covers boarding passes, event tickets, coupons, store cards, membership cards, and gift cards, and can push live updates to passes already installed on user devices. There is a single builder API that works for both platforms: basically, you call make on the event ticket pass builder, set the organization name, serial number, and description, and add any additional fields you want on there.
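The builder call just described might look something like this. The class and method names are assumptions based on how it was read out on the show — check the package's documentation for the real signatures:

```php
<?php
// Hedged sketch of the single builder API described above.
// Class name, namespace, and method names are assumptions.

use Spatie\MobilePass\Builders\EventTicketPassBuilder; // assumed namespace

$pass = EventTicketPassBuilder::make()
    ->organizationName('Laravel News')           // who issued the pass
    ->serialNumber('TICKET-0257')                // unique per pass
    ->description('Live show — general admission')
    // Any additional display fields on the pass itself.
    ->addField('Seat', '12A')
    ->addField('Door', 'North');
```

The appeal is that this same fluent call shape serves both platforms; as the hosts note next, Google Wallet just adds one templating step on top.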
Google Wallet follows the same pattern with one extra step: declaring a class that acts as a reusable template before creating individual passes. Once a pass is installed on a device, you can push changes to it. For Apple Wallet, call the update method on the mobile pass model; for Google Wallet, updates work by modifying the content array and saving. Beyond event tickets, the package includes builders for boarding passes — for airlines and other transit — coupons, store cards, membership cards, and gift cards. The airline pass builder handles flight-specific fields like departure and destination codes, passenger name, and seat, and for trains, buses, or boats, you extend the boarding pass builder and set the appropriate transit type. Spatie has a live demo where you can generate every pass type, install it on an iPhone, and trigger a live update to see the push mechanism in action. The demo source is available on GitHub, and you can check out the repository itself for installation instructions, Apple and Google credential setup, and full documentation. We have links to all of those things for you in the show notes. This looks great, especially if you need these things — and I was trying to think whether there's any way I'd actually use this myself. That's pretty cool. I know; I was wondering, with the updates to passes over time, what you could possibly do with that. I was thinking this could be interesting for something like what Daniel Cobborn does with the game stuff. You have a QR code people have to scan, so you just add it to your wallet and have them scan that QR code. That could be pretty cool. You could even update that wallet pass with someone's most recent points total and push it out, so it's something you can grab and tap really quickly. Yeah, there are all sorts of interesting things you could do with that. It's really cool. I was super excited when I saw that one coming up.
Okay, moving on here. Nuno Maduro — we haven't talked about him enough today, so we're going to cover one more thing he has released, which is Laravel Sluggable. This seems like a solved problem, right? You think to yourself, "I have a post that needs a slug." What I mean by a slug is sort of like a permalink, but one that contains at least some hint of what the content sitting at that permalink is, so it usually includes something from the title of the post. If you want to do that yourself, go for it — totally fine, we've all done it. However, this package lets you implement it by adding a single Sluggable attribute on a model. Here are the edge cases it handles, which you do not think about until you've had to do it yourself. Collision handling: what if I'm trying to create a slug that already exists? It has to be unique, so you have to handle collisions. Uniqueness per tenant or per locale: is it unique across those? Multi-column sources: what if you don't want it to consider only the title, but also the category it's under — how do you do that? Soft-deleted record collisions: you used to have one that was deleted, and now you're trying to rename it or make a new one that wants the same name — how do you handle that? Full Unicode support as well. All of those challenges are solved simply by using this Sluggable attribute, and you're done. Super simple: composer require the package, and that's it. The quickest way to wire up a model is the included make:sluggable artisan command. You just run php artisan make:sluggable and pass the name of the class. It reads the table schema, picks the most likely source column, adds the Sluggable attribute to your model class, and creates a migration for you. That's it.
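The end result the generator produces might look something along these lines. The attribute name comes from the show's description, but the namespace and option names are assumptions for illustration:

```php
<?php
// Sketch of the attribute-based setup described above. The attribute
// namespace and the `from` option are hypothetical — the make:sluggable
// command writes the real version for you.

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use NunoMaduro\Sluggable\Sluggable; // assumed namespace

// Collisions, per-tenant/per-locale uniqueness, soft-deleted records,
// and Unicode are handled by the package behind this one attribute.
#[Sluggable(from: 'title')]
class Post extends Model
{
    //
}
```

The point the hosts are making is that all of the edge cases above collapse into that single attribute, with a companion migration adding the slug column.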
Super simple. There are configuration options, of course; error handling, of course; Unicode support, of course, as I stated earlier. If you want to learn more, check out the show notes or the GitHub repo. We have three tutorials for you today. The first, continuing our MongoDB theme, is "Build custom middleware for query performance monitoring and optimization in Laravel with MongoDB." The author, Moses Anumadu, takes you through the whole process — if you've read any of these MongoDB tutorials, you'll know they are quite thorough. It walks through understanding the architecture, project setup and configuration, creating a model for testing, seeding sample data, the actual query monitor service, and listening to the events themselves. Then from Harris Raftopoulos, two episodes of Ship AI with Laravel: the first, "Your AI agent has amnesia, let's fix it," where he talks about adding memory to your AI agents, and the other, "RAG with embeddings and pgvector in Laravel 13." We've spoken about retrieval augmented generation a few times — we did it in this very episode — so if you want to learn more about that and how you might implement it in Laravel, check out the video. We have links to all three of those for you in the show notes. Awesome. All right, friends. How did we do, Michael? What was our time on that? 44. We did okay — 44, we're close. We're going to wrap this one up, folks. Episode 257. Find show notes for this one at podcast.laravel-news.com/257. Hit us up with any questions you have on Twitter at Michael Dyrynda, Jacob Bennett, or Laravel News. Of course, if you like the show, we'd really appreciate it if you'd rate it in your podcast player of choice — five stars would be awesome. Until next time, my friends, we'll see you later.
