From Vapor to Cloud: How Pile Migrated Their Production App

Laravel | 00:41:11 | Feb 18, 2026
Chapters: 9
Hosts introduce FA from Pile and outline the session agenda and topics

Pile’s move from Laravel Vapor to Laravel Cloud slashed costs, cut downtime, and simplified their massive, multi-site infrastructure without losing data or control.

Summary

FA from Pile walks through a long, thoughtful migration journey from Laravel Vapor and Forge to Laravel Cloud. He explains how Pile's six apps, serving 1.5 million daily requests and 50 million monthly requests, required a pragmatic, staged approach. The team wrestled with real-world constraints like outbound IP whitelisting, a 300 GB production database footprint, and terabytes of S3 data, then implemented practical fixes such as a proxy for external calls and an FTP-to-SFTP sync tool. They started with a best-fit hybrid model (Vapor for requests, Forge workers underneath) but ultimately shifted to the Laravel Cloud paradigm to gain observability, predictable costs, and easier autoscaling. The migration was executed in clearly defined phases: cleaning the app, staging, and dry runs, followed by a DNS swap with minimal downtime (about an hour at most) and no data loss. The story highlights the critical role of tooling (mydumper/myloader for DB migrations) and the value of the "Laravel way" philosophy in reducing hacks and drift. By the end, Pile achieved simpler platforms, lower costs (roughly 50% savings), and an autoscaling setup that finally feels "brain dead" simple. FA also notes that Horizon on Cloud is evolving, and the new queue clusters with Nightwatch integration promise better observability.

Key Takeaways

  • Pile handles around 1.5M requests per day and 50M per month across 13 sites, with ~800k jobs running daily.
  • Migrating from Vapor to Laravel Cloud reduced infrastructure costs by about 50% (from roughly $11k/month) despite complex data and storage needs.
  • A disciplined migration plan (clean app, staging, dry runs, DNS swap) limited downtime to roughly one hour with zero data loss.
  • External constraints like outbound IP whitelisting and FTP need to be handled via a proxy and SFTP sync tools rather than reworking every integration from scratch.

Who Is This For?

Essential viewing for Laravel teams migrating large, job-heavy apps from Vapor to Laravel Cloud who want concrete, real-world lessons on cost, downtime, and tooling choices.

Notable Quotes

"Infrastructure was becoming a hack, which is never the solution you really want."
FA captures the core motivation for moving away from ad-hoc, hacked setups to a cleaner cloud solution.
"The 12 weeks for six apps wasn't the bottleneck—the actual data migration and hacks were."
Highlights the real challenge: moving data and untangling prior hacks slowed progress more than the orchestration itself.
"If somebody can guide us within these two products, it's him."
Taylor's guidance helped Pile commit to Cloud after a long internal debate.
"We switched to the Cloud and saw about a 50% reduction in infrastructure cost."
Conveys the primary financial impact of the migration.
"One hour downtime with no data loss was our worst downtime—DNS swap included."
Demonstrates the effectiveness of the cutover strategy.

Questions This Video Answers

  • How did Pile migrate from Laravel Vapor to Laravel Cloud without losing data?
  • What tooling helped Pile move 300 GB of production data during the migration?
  • What were the biggest challenges when moving S3 data and FTP integrations to Laravel Cloud?
Tags: Laravel Cloud, Laravel Vapor, Laravel Forge, Aurora Serverless, S3 data migration, Horizon, queue clusters, Nightwatch, outbound IP whitelisting, FTP/SFTP integration, DB migrations (mydumper/myloader)
Full Transcript
Today we're chatting with Pile, who is a Laravel Cloud customer that recently migrated from Laravel Vapor onto Laravel Cloud. So with me today I have FA from Pile. FA, do you want to introduce yourself and give us a few details about what you do at Pile? Yeah. So my name is François-Alexandre, which is probably the hardest word to say in English, so FA is going to do for now. I'm the solution architect at Pile. I've been there since pretty much its inception, like five years ago. I'm responsible for everything from leading the development team to everything that is infrastructure related, so I've been pretty much driving the whole migration. Nice. Awesome. Little side quest for everybody listening: the first call that FA and I ever got on, I was like, "So, how do you say your name?" And he was like, "Dude, just call me FA." [laughter] Yeah. Totally. No, don't try it. Saved myself from botching that one, which is awesome. All right. So, agenda today: we're going to talk a little bit about Pile's scale and their infrastructure beforehand, then why Vapor to Cloud, then we're going to talk through some of the details of the migration, and at the end we're going to have a Q&A. We only have 45 minutes, and we're a few minutes in, so about 42 minutes left. So we're going to move through the content quickly. Everybody who's listening, if you want to ask questions, go ahead and fill those out in the Q&A and we'll get to those at the end. So, kick us off. Can you tell us just a little bit about Pile? What is it? Where did it come from? What do you guys do? Yeah. So, the inception of it all is The Floor Box. The Floor Box is an ecommerce solution for selling flooring online and all flooring tools, everything that's related to that. And that was something that was built on Shopify at the beginning, a little close to COVID. When I got hired, they had kind of outgrown Shopify for their needs.
So, they were starting to reach all kinds of limitations. So we rebuilt it from scratch for the specific needs of that website. And during the pandemic, it hit hard: a lot of customers were switching to purchasing online, even for flooring, which was surprising to me. And when people saw how that rewrite helped launch The Floor Box into the stratosphere, we started getting a lot of requests for that same solution to be resold. This is where Pile was born. So basically what we're doing is we have a B2B solution that we sell to any company that is within the flooring industry online. That's awesome. So you guys had something, it grew, and out of that opportunity you were like, hey, what if we just sold this too, made it a product? That's awesome. That's exactly how it got born. And it's very, very niche, but there's a market for it. Hey, flooring is great. I did my own flooring when I bought my house, and I wish I had known where to buy wholesale flooring. So, yeah, that's awesome. That's the thing about flooring: people don't realize when you talk about selling flooring online, it's not like the little Amazon brown box. You have pallets of it, and it's a real puzzle to sell online. Oh, yeah. It's a real puzzle to put together [laughter] too. Yeah. Awesome. So talk to me about the scale and infrastructure of what you guys are doing. Yeah. So today the scale at Pile is pretty much what you see here. We have about 1.5 million requests a day and we serve about 50 million requests a month. These are very much basic HTTP requests that we serve. As you can see, we have about 800,000 jobs running a day across the whole infrastructure, so it's pretty high compared to the number of requests that we serve.
So we are a very job-heavy industry, and the reason for that is that because we're selling with different suppliers, we are working with live stocks, live prices; everything has to be updated in real time during the day across multiple sites. Here we see that we have 13 different sites, including staging and production. So yeah, that's pretty much our scale. And in terms of database size, well, we have around 300 gigabytes of data stored across our whole infrastructure that we had to migrate to Cloud. And that 300 gigabytes, that's not with backups, that's just raw data, right? Oh, yeah. It's raw data. No backups were transferred. So, if you're doing backups plus that... Yeah, we have terabytes of backups somewhere in there. That's crazy. I remember when we were first starting that, we were like, how do we move this? We'll get to that, but first: why Vapor to Cloud? What made you choose to go to Cloud? Well, first of all, I think that when we take a look at how we built things beforehand, we're going to see that our whole infrastructure was kind of working by luck at some point; it was a real spaghetti mess, infrastructure-wise. We weren't sure we were moving to Cloud, but we knew that we were moving, and we were either moving to Cloud, or to Forge, or to something else. And as we're going to see a little further down the line, we got convinced that Cloud was the right answer for us by the man himself. The man himself. Yes. Let's take a look here. So, early architecture: you mentioned Forge, and Vapor is kind of where we're coming from. So talk us through the phases of how you guys moved this. On the far left here, phase zero was pre-Pile, right? The Floor Box on Shopify. So the first phase was the Forge back end, which was pretty lightweight at that point. We had a Heroku-hosted Vue app.
So at that point our whole infrastructure was an app making API calls to a Laravel back end on Forge, and we did hit some scaling limits. This is about the time we moved completely off Shopify and started a whole rewrite. At that point, we were looking at the different technologies and trying to justify a little bit the time of the rewrite we were going to make, so we were going to use something that was pretty new and pretty interesting and promising. This is where we ended up looking at Vapor, which was pretty new at the time. I remember we bought a custom deep-dive course by Jack Ellis, the guy that made Fathom Analytics, who was a pretty early adopter of Vapor himself. So we geeked out about that with him, we were convinced, and we went down the rabbit hole with Vapor. And we did hit some limits there as well; you saw our scale. We had a lot of jobs, we had a lot of requests, and it was kind of hard for us to have visibility into the exact provisioning that we would need. So we decided to build a hybrid setup where we would have Vapor handling all the requests, and we would have one or multiple Forge workers underneath. And this is where the hack kind of began, where we had shared environments. We had a shared Redis that was provided by Vapor, but we had to kind of hack within AWS so that our Forge workers could see it. So as we're going to see in the next slide, I think, the hacks kind of... well, yeah, the breaking point. I think [laughter] that's the right title for it. Awesome. So yeah, we go from the Forge setup, move over to Vapor, the new hot Ferrari on the floor, then kind of go back into that hybrid setup. What was the main driver to that hybrid setup? And I'll just jump back here. You were using your Forge workers alongside Vapor for your requests?
Was it like the timeouts on Lambda? Was it the ephemeral storage? Is there a piece that was very specific there? Yeah, you touched a real good point there. The timeouts on Lambda were kind of a problem for us, especially for jobs; we have extremely long-running jobs where we have to import a lot of huge Excel files, as we're going to see later down the line. And as well, we were kind of scared of Vapor, to be honest, scared of letting our infrastructure run free with no specified costs. Just infinite scaling was kind of scary: you don't know how many jobs are going to run that night, and you don't know which kind of bill you're going to end up with at the end of the month, just for running a basic infrastructure that could have run a little slower but within a more confined space within Forge. Yeah. The one that I always hear from people is: what happens if a DDoS happens and you don't have a WAF? Then Lambda just goes nuts, because on a server, you overrun the resources and it's all done, it shuts itself down. Totally. And a funny thing about that is we did hit that limit a lot of times, unfortunately. It's kind of scary to think that we were not Vapor professionals at that point, so we couldn't configure it properly at all. It would have been a scary thought for us to let that run free over multiple apps at the same time. Yeah, it makes a ton of sense. So back to the breaking point here. Let's start with the infra complexity. We've got the hybrid Vapor and Forge; we touched on that, so let's go a little more in detail there. Yeah. So the infrastructure complexity pretty much started with that split. Vapor is scaling by itself, by definition, right? It's serverless.
So in terms of what you could control, we did play around with warming up workers, warming up containers I mean, and pre-provisioning for a certain capacity that we thought we would hit. That's the scaling that we could do on Vapor. On Forge it was a totally different story, though. We didn't have any automatic scaling; we had to account for big events and big imports so that we could provision maybe a little more and then deprovision. So it was very, very manual. And the environments themselves: it's the same app being deployed on Vapor and Forge, right? So the environments had to be a little different on each of them, because on some you didn't want jobs to run at all. We did have flags to let Vapor know not to run anything that was reserved for what Forge needed. So we were kind of hacking our way already there. To add to that, we did have a big dev experience gap. Vapor is pretty much the furthest away you can get from what you have on your local machine, right? Especially us; we were on Windows at that time. So we were, I think we were... what, pre-Herd? [laughter] Yeah. Yeah, totally. What was the tool from before Herd? Was it Laravel Homestead? Yeah, Homestead, but on Windows it was a peach. I don't remember... the elephant with the thing, was that WAMP? There have been so many; it's kind of a LAMP web stack, something like that. But it was pre-Herd, so it was so far away from what we had. We did try, I think with Laravel Sail, to get our devs closer to that, but then we had to do it on a Linux subsystem within Windows, and when you get to that point and try to replicate what we had on Vapor, it was becoming a mess, and even then we didn't match what we had in production. Bugs were hard to debug, hard to reproduce, and hard to pinpoint against what we shipped from local.
We had no confidence that it would be the same when we hit the Lambdas down the line. Yeah, if my head's in the right spot on the timeline here, when you were on Windows trying to use Sail and the Docker-based experience was right when WSL2 became the thing, but it didn't properly attach to your internet gateway, so the internet was dirt slow inside of the Docker containers. So you guys were literally... that's wild. You're straight on point. It was exactly at that time, and I think by the time I built the files for the devs to work with, I had to migrate it because Docker wasn't supporting some parts of it, and it was a mess. So we didn't change at the right time. We played around a little bit with Spin and the idea of having the same machine being shipped to production that we had locally. This was already an idea that we had for going away from Vapor, and directly shipping that to either Forge or, I think it was very early announcements of Cloud at that time. I think it's Taylor that I talked to about this at Laracon in Texas, and he told me that this was not in the plans to be supported in the near future. So running containers natively on what we had in the Laravel ecosystem was not going to be supported, and we kind of killed that at that point. And yeah, the costs. I mean, if you've ever run a workload on AWS, you know what the cost means, and you know what a labyrinth it is to figure out what's costing what. That good old "EC2-Other" line in your bill, which [laughter] kind of is everything: anything that is not your EC2 is going in there, network costs and everything. So we did have oversized machines, and since we were scaling infinitely with Vapor, we had huge databases. I think we had, at some point, 25 ACUs for one of our apps in our serverless Aurora. So this one was a beast, costing a lot, and we were even hitting problems with it.
So yeah, we were definitely paying for safety at that point. Yeah, I remember a lot of times where we were like, we're going to move away from this, let's just crank it up even more. And scaling down at that point was not really possible for us, because it was very, very scary. We had to crank it up just to make it work, so we couldn't figure out whether we could handle scaling back down or whether we were going to crash during the night. Yeah. And I love the quote we have from you at the bottom here: "Infrastructure was becoming a hack, which is never the solution you really want." So coming out of that, you're having to make reservations at the infrastructure level, at the cost level, and even at the developer experience level. So then you come to Cloud, and why Cloud? What brought you to Cloud ultimately? Yeah. So I think that the first seeds of us migrating out of Vapor were something we started saying internally, which was "the Laravel way." And the Laravel way kind of became our motto, because we figured out that every time we took something out of the Laravel ecosystem and tried to apply our own sauce to it and hack it out, like it was JavaScript, where you can just free-for-all do your thing, we were getting away from the way it was meant to be run. This is where we started having difficulties with Laravel, and this is where we started having difficulties with scaling with Laravel specifically. The more we took a look at how things were done granularly within Laravel, the more we figured out that we were very much hacking, and that Laravel was not meant to be run the way you saw in the last slide. So our goal from there was: okay, here's what we're going to do. We're going to move back to the roots of installing a basic Laravel app on Forge and taking a look at all the hacks that we do from there.
And yeah, so we were at the crossroads there between Laravel Cloud and Laravel Forge, and we went to Laracon Denver last year, where the new Forge dashboard was announced, and I remember I was kind of stoked, because for me it was all-in on Forge. I'm a pretty hands-on guy, so I was like, "Okay, I'm still going to be able to SSH into my machines. I don't want to go on Cloud and give all the control to Laravel and stuff like this." And we went to a dinner and we saw Taylor in a corner by himself, and we were like, "Okay, this is the time to attack." [laughter] Me and the CTO where I work were like, "Okay, let's go. If somebody can guide us within these two products, it's him." So we talked to Taylor about all the issues that we had, all the hacks that we made, and he introduced us to a few members of the Laravel team, which I think was Joe Dixon and Trey, who were bombarded that night with a lot of questions and curveballs like, "Okay, but can Cloud do this, and can it do [laughter] that?" I remember that was a good moment, and yeah, it kind of clicked at some point. Taylor told us: with your Laravel-way mentality, the amount of jobs that you're working with, and the amount of CRUD that you have to do, you have to go on Laravel Cloud, because of the new queue tooling, and so you have all the new things that are coming out, like Nightwatch being integrated within it, easy integration for all the new tools that are coming. I remember Reverb was being whispered around as possibly coming first-party at that time. So yeah, it was kind of the new Vapor at that time, the shiny new Ferrari, like you said. Yep. All the new shiny stuff. That's awesome. What's funny is, as you mentioned, Reverb was being rumored at that time, and you guys were some of the first people to get in on early access, because it was during your migration.
So we had a ton of fun talking through that. We've talked a ton about the why and the what and what brought you guys to that decision. So now let's really dig into the nuts and bolts of the migration. What's really great about this part is this is where I came in; I helped you guys a little bit here, so I have a bit of an inside track on this migration story. I'm super excited to hear your perspective on this and bounce around a little of how we made some of these decisions. So let's start with what these apps look like. First thing: six apps, right? Yep. Six apps. What we have for each stack is pretty similar. I'd say the main thing differentiating all these apps is the external tools with which they have to work. Since we're working with different customers, they all have their own ERP to which we have to connect, FTPs, and stuff like this. So basically, at the very lowest level, what we have is a web server, or web servers now on Cloud, but at that time it was the Vapor part. We have multiple nodes of workers, which are on Forge. We have an Aurora DB, which is serverless, on AWS. We have Meilisearch servers; at that time, I think we were just coming out of having them hosted for us on cloud.meilisearch.com, but they all have their different instances, so they had to be migrated as well. We have a Redis cache on ElastiCache on AWS. We had Redis for jobs, which is another ElastiCache instance on AWS. We had S3 buckets with terabytes of data. We had dedicated IPs; the reason I included them in our infrastructure is that we needed to have NAT gateways in our VPCs, which I had to set up so that we could have a singular outgoing IP, in order to be able to whitelist it in our customers' services.
And we did have some monitoring, like Oh Dear and Flare and stuff like this, that we had to point at our things, and multiple different security tools as well. Yeah, just to clarify, this chart that we're looking at here is what it looked like previous to the migration. As we went into this, this is really where we started sitting down and having some conversations about the known risks before the migration. Let's start in this networking column and how that worked for you. Yeah. So, the very first time I talked about moving to Cloud is the first time I saw that there were no NAT gateways available out of the box at that time; I was just reading the documentation, and it had just come out. And I knew that since our whole infrastructure was based around communicating with different ERPs, and our customers were very, very specific about the security of it, we could not just whitelist a range of IPs. So we knew that this was going to be an issue; like I said, IP whitelisting was required, absolutely. And this one, the FTP IP matching one, was figured out a little way into the migration, I think after one of the first apps that we migrated. One of our customers was using the original, OG FTP protocol, and I think it was either passive or active mode. What this does is that when we connected to it, it was asking us to connect back on a second connection; this is how FTP works. It looks at the next IP coming in, and it has to be the same, and on Cloud you don't know which node you're going to get, right? So you can't guarantee that it's going to be the same IP coming out the second time around. So we had a good time figuring that out. Thanks to the Laravel team for helping us on that; that was a good six hours trying to figure out what was happening. Yeah, that was totally what we call a deep dive into trying to debug weird things within Cloud.
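The active-versus-passive distinction FA is describing can be illustrated with Python's standard-library ftplib (a sketch for illustration only; Pile's app is PHP/Laravel). In active (PORT) mode the server dials a data connection back toward the client, and strict legacy servers expect the same client IP on both connections, which fails when each node can present a different outbound IP; passive (PASV) mode keeps every connection client-initiated.

```python
from ftplib import FTP

def make_client(passive: bool = True) -> FTP:
    """Build an FTP client configured before any host is dialed.

    passive=True  -> PASV mode: the client opens the data connection too,
                     so a varying outbound IP is less of a problem.
    passive=False -> active PORT mode: the server connects back, and
                     strict servers expect the same client IP both times.
    """
    ftp = FTP()            # no host argument: nothing is connected yet
    ftp.set_pasv(passive)  # toggle PASV vs PORT for later transfers
    return ftp

print(make_client(passive=True).passiveserver)  # -> True
```

This only helps when the server allows passive mode at all; Pile's actual fix, as they describe, was to sync the files out over SFTP instead.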
Yeah, in terms of storage, we had, like we said, huge production databases. I don't know how huge they are by market standards, but for us, moving 300 GB of data without much downtime was a very big challenge. We have terabytes of S3 data that we wanted to move; we're going to come back to that in a while. Then, the tight downtime window: we are online shops, so we cannot be down at all. We have customers in different time zones, so there's not really a huge window where we have a real downtime slot, especially across all of our customers. Yep. Even when it's planned, it's just lost revenue at that point. Yeah, exactly. I remember every time that we're down and we have to do maintenance, a timer starts, and people start talking to me in terms of profit per hour that we're losing. That's how they're counting it, and it's true, right? That's the way it works in that domain. We had some custom Linux extensions that we were using. That's another reason, which I forgot to talk about earlier, that we were using kind of a hybrid between Forge and Vapor: we did have to have some custom things installed that we could not install on Vapor. So we did bring that problem over to Cloud. We saw that Horizon was not natively supported on Cloud. What I mean by that is that the Horizon queue system doesn't have a single one-click deploy button where you know it's handled for you by Cloud. It's supported at some point; I think it's even in the docs how you can run it. Yep. But it's not supported-supported like on Forge, where you click one button, the daemon gets created for you, and everything is monitored and so on. And yeah, we did have a lot of environment drift in the code. We had a lot of env checks to see: are you on Forge, are you on Vapor? So we knew that we were going to have to take a look at our mess and clean it up before we could move.
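That kind of platform drift ("are you on Forge, are you on Vapor?") is easier to clean up if the detection happens once, in one place, instead of in scattered conditionals. A minimal sketch of that idea in Python, purely illustrative since the real app is PHP/Laravel; the marker environment variables are made up, not real Vapor/Forge/Cloud variables:

```python
import os
from enum import Enum

class Platform(Enum):
    VAPOR = "vapor"
    FORGE = "forge"
    CLOUD = "cloud"

def detect_platform(env: dict) -> Platform:
    """Resolve the runtime platform once. The marker variable names below
    are hypothetical, chosen only to illustrate the pattern."""
    if "VAPOR_MARKER" in env:
        return Platform.VAPOR
    if "FORGE_MARKER" in env:
        return Platform.FORGE
    return Platform.CLOUD

# Every other check branches on this one value instead of re-probing env:
PLATFORM = detect_platform(dict(os.environ))
RUN_WORKERS = PLATFORM is not Platform.VAPOR  # e.g. only Forge/Cloud run jobs
```

Centralizing the check means removing a platform later (as Pile eventually did with Vapor and Forge) touches one function rather than every call site.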
Yeah, this was fun, going through all these different things and being like, all right, how do we solve this, and putting our heads together to come up with these. So the first thing was some infrastructure fixes that we were able to get in there. Let's go through some of these problems that we were talking about on the last slide and how we solved them. Yeah. So, I think the subtitle, which I just saw for the first time now, "most blockers turned into simplifications," is kind of a good way to put it, because for us, at first, these four seemed like pretty much showstoppers. And when you think about it, it did make us realize that once again we were kind of making weird things happen. I don't mean that outbound static IP whitelisting was a weird thing, but it did make us think about how we were doing things. I remember we talked about the possibility of having you guys set up one outgoing IP custom-made for us, and even if that was on the table, we took a look at the number of calls that we had to make and decided it would be even better for us to just have a proxy and handle it ourselves. Why would we use a NAT gateway with a provisioned IP that you don't have control over? So what we did is we took all the external calls that needed to be whitelisted and routed them through a proxy that we control. We did have the custom document conversion; this is the custom tool that we had to install on Linux. Actually, we started using external conversion services: for the time and compute that it took for us to convert these huge, gigabyte Excel files, an external service does it for a few cents and it's pretty much instant. So it was a good find. The FTP: well, we cannot tell our customers to migrate their FTP.
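Routing only the whitelist-sensitive calls through a controlled proxy, as Pile describes, reduces to a small routing decision per request. A hedged Python sketch of that decision (the hostnames and proxy address are placeholders; the production app would do the equivalent through Laravel's HTTP client):

```python
from urllib.parse import urlparse

# Hosts whose owners whitelist the proxy's single static IP; everything
# else goes out directly. All names here are placeholders.
PROXIED_HOSTS = {"erp.customer-a.example", "billing.customer-b.example"}
PROXY_URL = "http://egress-proxy.internal:3128"

def proxies_for(url: str) -> dict:
    """Return a proxy mapping for this URL ({} means connect directly)."""
    host = urlparse(url).hostname or ""
    return {"http": PROXY_URL, "https": PROXY_URL} if host in PROXIED_HOSTS else {}

# Usage with the `requests` library would look like:
#   requests.get(url, proxies=proxies_for(url))
print(proxies_for("https://erp.customer-a.example/stock"))  # routed via proxy
print(proxies_for("https://api.example.com/health"))        # {} -> direct
```

Keeping the direct path as the default means only the handful of whitelisted integrations pay the extra proxy hop.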
Because they have a whole infrastructure themselves. So what we did is we built a new tool which basically clones, outside of Laravel Cloud, all the files that are on the FTP, and then we used the much safer and newer SFTP protocol. It's kind of funny to say "newer" for SFTP, but yeah. [laughter] So we built that sync tool. If I recall correctly, when we were doing the research, FTP came out in like 1971 or something. [laughter] Exactly. So SFTP being "newer" just shows how much some people are stuck with the old protocols and cannot get out of them for various reasons. The fix for the Horizon uncertainty was to use Cloud workers, which is not done yet, by the way; at the moment we are running Horizon at full scale on Cloud, with the numbers you saw before, like 800,000 jobs a day. We are planning on moving to the new workers; we're just waiting for visibility to be enabled, like we have with the Horizon dashboard. I think that's potentially coming within Nightwatch one day. Yeah, that's going to be a huge one. Those queue clusters are supposed to be the Laravel Cloud version of Horizon. Horizon was originally built in 2017 and was made to run on a virtual private server; it takes up as much RAM as it can grab and does some other things in the orchestration. It works wonderfully, but that observability is what gets everybody. The queue clusters that we designed on Cloud are getting observability, hopefully, I believe, sometime this quarter, maybe into next quarter; it'll be powered by some of the graphs and things that we have in Nightwatch, and that'll give that observability with the first-party nature of those queue clusters. That's definitely coming; I can't wait to get you guys using those. You're definitely going to be in our test group for those as well.
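The heart of an FTP-to-SFTP sync tool like the one described above is a diff of file listings: copy what is new or newer on the source, delete what disappeared. A simplified, dependency-free Python sketch of that planning step (illustrative only; real transfers would sit on top of it, e.g. via ftplib and an SFTP client):

```python
def plan_sync(source: dict, mirror: dict) -> tuple[set, set]:
    """Compute a sync plan from {path: mtime} listings.

    Returns (paths to copy to the mirror, stale mirror paths to delete)."""
    to_copy = {
        path for path, mtime in source.items()
        if path not in mirror or mirror[path] < mtime  # new or out of date
    }
    to_delete = set(mirror) - set(source)              # removed upstream
    return to_copy, to_delete

ftp_files  = {"orders/1.csv": 100, "orders/2.csv": 250}
sftp_files = {"orders/1.csv": 100, "orders/old.csv": 50}
copy, delete = plan_sync(ftp_files, sftp_files)
print(sorted(copy))    # ['orders/2.csv']
print(sorted(delete))  # ['orders/old.csv']
```

Comparing modification times (or sizes/checksums, when the server reports them reliably) keeps the app reading only from the SFTP mirror, so the legacy FTP quirks stay outside Laravel Cloud entirely.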
You know what's crazy, going back to your proxy here, is thinking back on how the problems you came up with solutions for informed, for us, the ways we needed to look at the Cloud product in general and change things. Two of the things that came out of this: first, the static IPs. We created an API that people could query that would give the outbound IPs for the dedicated regions. It was supposed to solve your problem, but then you guys went a little bit of a different way. And then on our private cloud, we also offered dedicated NAT gateways, so you could have better control there. You guys aren't on private cloud, but that was something that was informed by that decision. And so I think it's just really cool how seeing these problems in real life as we went through this migration helped both of us get to a better state. Yeah, totally. And rethinking all that, we're going to see the savings at the end, and a lot of good things that came out of this. And the funny thing is, what actually convinced us were the new queue clusters, and we're not even using them right now, and we're very happy with what came out of that whole migration. So we cannot even believe that we haven't yet touched what we wanted to migrate for. But yeah, it's coming up pretty close to being the last thing we're going to need to do to reach the fully migrated point. Yes, I literally can't wait for that. I want to leave some time for Q&A, so we're going to do this in speedrun mode. The migration process here: first, you guys started with cleaning the app; we've talked a little about how we had to remove those hacks and standardize what was happening. Then in the next phase, we jump into migrating staging. This was kind of like, hey, does it work in tests? Does it do that?
And then, you know, the dry runs. I think this is where I want to focus in. I think this is where we really figured out the go/no-go and all of that — really in the dry runs. So, condensing those three into one: what are your thoughts on how we executed on that together?

Yeah, so cleaning the app was pretty straightforward. Migrating to staging as well — you don't have very much impact on that, so you pretty much have infinite time to do those. The dry run is a way for us to figure out what it's going to look like when we actually move data. How long does it take to move the data to Cloud, and can we even move that much data on Cloud? So we restored snapshots on Cloud the day before, or the week before, to take a look at how it was going to go. We did some custom scripts to compare the data that was migrated versus what we had. I think we were using myloader and mydumper as custom tools, which was the quickest way for us to move all that data, and I think we got in touch with the Laravel team to get a little bit more access on our databases so that we could use those tools.

Yeah, which is standard now as part of that, as we figured that out, which is pretty cool. Nice.

So yeah, and then the day of: maintenance mode, then draining queues — because we have a lot of queues, so we had to wait, and we chose a good time frame for that — we restored the right database, and we did the DNS swap, which is pretty much instantaneous. And yeah, the largest downtime that we had, I think, was one hour, with no data loss, so I'm pretty hyped about that.

No data loss is great. The DNS swap is a funny one.
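The "custom scripts to compare the data that was migrated versus what we had" step can be sketched as a per-table row-count comparison between the old and new databases. The function below is a hypothetical illustration — FA doesn't describe Pile's actual checks — and in practice the two count dictionaries would come from running `SELECT COUNT(*)` per table against each server, while mydumper/myloader handle the actual data movement.

```python
# Hypothetical post-dry-run sanity check: compare per-table row counts
# gathered from the source (e.g. RDS/Aurora) and the target (Laravel
# Cloud) databases. Each dict maps table name -> row count; here we
# only model the comparison, not the queries that would produce it.

def verify_migration(source_counts: dict, target_counts: dict) -> list[str]:
    """Return human-readable discrepancies; an empty list means 'go'."""
    problems = []
    for table, expected in sorted(source_counts.items()):
        if table not in target_counts:
            problems.append(f"{table}: missing on target")
        elif target_counts[table] != expected:
            problems.append(
                f"{table}: expected {expected} rows, "
                f"got {target_counts[table]}"
            )
    # Tables that exist only on the target are also worth flagging.
    for table in sorted(set(target_counts) - set(source_counts)):
        problems.append(f"{table}: unexpected extra table on target")
    return problems
```

An empty result is the go signal for the cutover. Note that row counts only catch gross transfer errors; a stricter dry run could also compare content checksums, since counts can match while rows differ.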
So I think this is the one spot in the first migration that gave us real pause: when we were trying to do it, the UI was not very clear about what we needed to do, and we had to kind of go, "Hey, Cloudflare, what are we doing here?" And so what that led to is we've completely changed the product, so that as you're doing that, it asks for more information to make sure you're doing it correctly — so we didn't have to do the guessing that we did when we were swapping your DNS for the first time.

Yeah. And I remember our last migration, you just told me, "Hey, we switched the UI. You're going to be good to go with no other information." I was like, "Okay, well, thanks, I guess." And then I took a look at it and I was like, "Oh, well, it's pretty straightforward for real." I mean, it did help us. It has more steps now, and it's much clearer than what we had.

Yeah. So, before versus after here: same apps, simpler platform, lower costs. We've touched on a lot of this, but any thoughts as you're looking at this in retrospect — where things went, how they went, and whether there's something you would do differently?

Well, if I could do something differently, I would have moved out of Vapor earlier, just in terms of the costs for us. I mean, we have way more than 50% lower infrastructure costs, but we did have some huge machines that were kind of padding for the problems that we had, and we do estimate a very clear 50% lower infrastructure cost just from moving from Vapor to Cloud. So that's the biggest win for us there. Autoscaling is pretty much brain-dead for us now, which was an issue all along. We're very, very happy with what we have currently on Cloud.

So awesome. And so, kind of measuring this impact: there were six app migrations, and we did those over 12 weeks.
You know, as we were kind of talking about this before, some of this was just like, hey, it's Christmas break, we're not doing a migration for two weeks, guys. So there's probably even some padding in that. You know, the 50% cost, and then the longest downtime was one hour, and I think that was the time we ran into the Cloudflare issue and kind of got spooked, right?

Yep, totally. So yeah, like I said, the 12 weeks is pretty much on our end — we were very slow and careful with those migrations because of the criticality of the data that we were moving. But in itself, the migration of actually moving to Cloud from Vapor was not that long at all, and like you said, there was the Christmas break within that time frame.

Yeah, my favorite story on this one is, we were getting together to talk about this webinar and plan the session, and we get on the phone and I was like, "Hey man, when are we doing your last migration?" And you're like, "We did it two weeks ago." And I was like, "You didn't even call me!" And you're like, "No, I didn't need you, man."

No, the last one was so smooth that I got scared. I took a look at it and I wrote to the dev team, "I think it's done. I think we're fully migrated now." So, it got easier and easier the more we moved.

Yeah, that's so awesome, man. So, we have some questions here in the Q&A. I'm going to look at some of these, and I don't know if you can see them, so I'm just going to read them over to you. So, one here — we kind of talked about this a little bit — it was mostly the 15-minute timeouts and some of those things. Any other things you'd add to that?

Yeah, I think the timeouts were the greatest ones, and we did hit some limits with the queueing systems as well. We had too many jobs for the visibility that we could have on Vapor. So for us, we did try to use — what is it — I think it's SQS on AWS.
We did try to use that, but once again, the dashboard there and the visibility didn't compare to what we could have on Forge with Horizon.

Yeah. So we got another one here, kind of in the same vein. They're asking if you can explain the 50% infrastructure cost — for the database size, for example, and some of that. There is also one here for the Vapor spend; you can answer that one if you want, up to you. But mainly on the 50% infrastructure cost: what was the change on that for you?

Our total infrastructure spend at that time was about 11k USD per month, and the 50% cost decrease was not one-for-one. It's kind of hard for us to have a one-for-one because of the way the pricing is structured between an RDS instance and the Aurora Serverless instances. But just the fact that we have a straight-up machine that has definite specs that we can look at made it much easier for us to scope what we actually needed, and what the peaks were that were taking all the ACUs we had when we were on Vapor and Aurora — we just boosted it up every time we had an issue with it.

Yeah. And you guys moved over to Laravel Cloud MySQL, and I think part of the discussion that we had, as we were talking about what you would move to, was that there were features you had on RDS that you weren't even taking advantage of, like point-in-time recovery and things like that. And you're like, "We take backups. What do we need point-in-time recovery for?" And so I think some of those helped cut some of those costs as well, right?

Yeah, totally. Awesome. So another one I want to get to here — this is an interesting one. How easy is it to convert the Vapor YAML to the canvas in Cloud, and how did you feel migrating from that infrastructure-as-code to that visual?
The first one obviously was kind of the hardest for me — basically to understand how the canvas worked, how I could find pretty much everything that I was using, and making sure that everything I had in my infrastructure on AWS could be replicated on Cloud. But once we did have that, it wasn't too hard. I mean, it's pretty much the same specs that we used. And a kind of funny thing is the way we approached it: because we didn't want too much spending, once again, we did try to have it as little and as small as possible, which might also explain the infrastructure cost reductions that we had. So we started as small as possible, and any time we saw any issues, we bumped it up a little bit — making sure we were doing it the other way around. But yeah, the migration itself from the Vapor YAML was not that complicated — maybe a few hours just to figure it all out.

Awesome. Another one here: have you noticed any performance gains, and where were they?

Yeah, we did see a lot of performance gains. Initially, when we took a look at the stats, we did see that the hits themselves had maybe a 150-millisecond difference. But when we stack that up against the number of requests, and the back and forth that we had to make on Vapor with the potential pre-warming that we might hit — whether you want it or not, the whole Vapor infrastructure has a lot of things that you have to go through just to get your request back. It's pretty quick, but it's not the same as having your server close to your database.

Yeah, makes a ton of sense. So, we got one here — I think I'll probably take this one. General question regarding Cloud: does it use AWS under the hood, or what does it use? Like, is it built from the ground up by Laravel Cloud? Yeah, our hyperscaler is AWS currently. Yes.

How did you migrate your S3 buckets is the next one here. That's a good one.
Something I didn't touch on is that we didn't migrate the huge ones, because the costs of coming out of S3 weren't worth it for us just to go out into — I think it's R2 that you guys are using. But for the few ones that we did, it was pretty easy. There's a part in the documentation on Laravel Cloud on how you can just provision your S3 or your R2 and connect to it via something like Cyberduck. I connected to my S3 via Cyberduck, and it's just a basic drag and drop from there.

Yeah. All right, we got one more here for you: what was the hardest part of the migration?

I think the hardest part of the migration was figuring out how to work around all the hacks that we had made beforehand, and migrating the data as fast as possible. We did try a lot of different things, like just dropping the data — not dropping, but downloading the database and dumping it with basic tooling like TablePlus or DBeaver and stuff like that. But until we put our finger on myloader — yeah, myloader and mydumper — we had a hard time migrating that much data. So that was the hardest part for sure.

Yeah. Which is actually another one where we've started working internally on how we can have more guides on better ways to do DB migrations, to make that a little easier for customers as they're coming in.

Yeah. And I just figured out where the Q&A is. To somebody here that has pretty much the same story: yeah, we have the exact same story — the AWS spend was mostly database, and yes, you can expect that to drop dramatically when you move to Cloud. That's exactly what pushed us to move, and that's what happened to us.

Awesome. So we're at time — I want to be respectful of everybody's time who came. We got a couple more questions here; we'll try to answer those. As a parting thought here, FA: where can people find you out on social media — X, LinkedIn? If anybody's got questions they want to ask you, where can they reach you?

Yeah, they can reach me at fa pero.
So pretty much my name, but just fa on X, the everything app. And yeah, I'm not that active on LinkedIn, but you can find me pretty easily using pile or pileoft.

Awesome. Well, FA, it's always a pleasure to hang out with you and talk about the migration. I'm so excited that we got the opportunity to sit here, debrief, talk through this, and look at it from the other side. And thank you, everybody, for joining today — this was a great time here at Laravel, and we'll see you next time.

Yeah, likewise. Thank you very much, Evan.
