#18 | Database Migrations - AdonisJS 7

Adocasts | 00:24:55 | Apr 24, 2026
Chapters: 21
Explains migrations as a version-control-like system for database structure, highlighting benefits for onboarding teammates and keeping development databases up to date.

AdonisJS 7 migrations explained clearly by Adocasts: how to create, run, roll back, and organize migrations with Lucid and the ACE CLI, using SQLite in a practical onboarding workflow.

Summary

Adocasts’ #18 dives into database migrations in AdonisJS 7, showing how migrations act as a version-control system for your schema. You’ll see how Lucid translates class-based TypeScript definitions into relational tables, with an up method for applying changes and a down method for undoing them. The guide emphasizes practical sequencing: migrations must run in a reliable order, so you’ll learn how timestamp-prefixed file names produce a natural sort order, and why the roles table should come before users when adding a foreign key. We walk through a starter migration that creates a users table, how Adonis tracks migrations in the adonis_schema table, and what the batch concept means for grouped migrations. The ACE CLI commands featured (migration:run, migration:rollback, migration:fresh, migration:reset, and migration:status) provide a handy toolkit for development, testing, and rollbacks. You’ll also see how to inspect the actual SQL logged during migrations and how to disable those logs in development if you prefer a cleaner terminal. Finally, the video demonstrates expanding your schema with new migrations (roles, profiles, challenges, and a pivot table for challenge_user) and discusses the importance of respecting foreign key constraints across databases like SQLite, PostgreSQL, and MySQL. The session wraps with practical tips on planning migration order and using the ORM to manage relationships, setting the stage for the next topic on schema files and models.

Key Takeaways

  • Lucid migrations are class-based with an up method for applying changes and a down method for reverting them.
  • When you run node ace migration:run, Adonis executes the pending up methods in a single batch and records them in the adonis_schema table.
  • SQLite can show SQL logs in development; you can turn these off by setting the debug flag in the database config.
  • To avoid FK errors in other databases, change migration order by renaming files so the parent table is created first (e.g., roles before users).
  • The migration:status command reads the adonis_schema table to show which migrations are pending or completed.
  • Pivot tables (e.g., challenge_user) keep an auto-incrementing id primary key, since Lucid does not support composite primary keys, alongside foreign keys to both related tables.
  • Rolling back a batch runs the corresponding down methods in reverse order, effectively undoing the most recent migrations.

Who Is This For?

Essential viewing for AdonisJS 7 developers who want to master migrations, understand Lucid ORM basics, and manage safe schema changes across SQLite, PostgreSQL, and MySQL with the ACE CLI.

Notable Quotes

  • Introduction to migrations as a version-control-like system for the database.
  • Explanation of the up and down methods and their forward/backward usage.
  • Notes on the adonis_schema table that tracks which migrations have run.
  • Demonstration of renaming migrations to enforce proper execution order for foreign keys.
  • Description of rolling back a batch and how the down statements undo the up changes.

Questions This Video Answers

  • How do I run, roll back, and check the status of migrations in AdonisJS 7 using the ACE CLI?
  • Why should I create a roles table before a users table in AdonisJS migrations?
  • What is the adonis_schema table and how does it track migration batches?
  • How do I set up a pivot table for a many-to-many relationship in AdonisJS migrations?
Tags: AdonisJS 7, Migrations, Lucid ORM, ACE CLI, SQLite, SQL logging, Foreign keys, Pivot tables, Batch migrations, Migration status
Full Transcript
Migrations are a version-control-like system for our database. They allow us to track, share, and roll back changes to our database's structure, like adding a table or changing a column's type. They're great for many reasons, but two stand out: one, they allow teammates to easily onboard onto our project or keep their development database up to date with the latest changes. And two, they allow code changes to go out in unison with our database changes in our production environment. Within our project structure, if we go ahead and condense some of these folders down to get to the root of our project here, database migrations can be found within the database folder, and there will be a migrations folder inside of there. Looking within here, we can see that our starter kit has started us with one database migration already. And you might note the file name looks a little different. If we expand this out to give us slightly more room, the number at the beginning of the file name here is a timestamp of when the migration was created. This is used to naturally sort our migrations, both in the migrations folder and in the order that they're going to run, to give us a consistent and reliable order for them. After the timestamp is the action of the migration. So we're creating a users table. This portion after the timestamp is strictly informative, to help us find specific migrations if we need to look back through them to make changes or note what's been done inside of them. If we go ahead and click into here, we can see the contents of this migration. And I'll go ahead and condense our sidebar back down a little bit. Within here, we'll note that it's exporting a class that extends a base schema class from Lucid. And Lucid is AdonisJS's object-relational mapping system, or ORM. And ORMs are like a translator, taking an object-like representation we'll use in TypeScript and translating it to the relational tables inside of our database.
This gives us a simplified syntax for things like querying data and even building tables like we're doing here. Inside of the migration class are two different methods: an up method and a down method. The up method is in charge of forward progress. This is where we're going to create our new tables, constraints, and any other changes to our database that we want to apply. The down method then allows us to go backward, or undo, any of the changes that we've done inside of the up method. Whatever we do in up, we should undo inside of down. For example, inside of this users migration, we are creating a table in our up method and we're dropping that table in the down method. This up/down flow is primarily to aid with development, allowing us incremental progress or changes without having to create tons of migrations for a single feature as we're working on it. The down method, importantly, is destructive. It will delete tables, delete columns, anything that you tell it to do inside of that down method. So you don't want to run that in production unless you're explicitly set in your reasoning on why, and AdonisJS will require a confirmation in order to do so as well, for your protection. Now, the forward-progress implementation of our migrations is called running the migration. So in order to actually create this users table inside of our database, what we'll want to do is run this migration, which in turn will execute the up method of the migration. This can easily be done if we jump back into our terminal with command or control J. Let's go ahead and stop the dev server with control C, clear that out, and use the ACE CLI's node ace migration:run command. I'm going to go ahead and collapse my resources folder down before we do that, as well as the start folder there. And let's go ahead and now hit enter to run that command. As mentioned previously, our starter kit here is using SQLite as its database driver.
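As a hedged sketch of what the starter migration described above looks like (the exact columns in your starter kit, and the `@adonisjs/lucid/schema` import path, are assumptions that may differ in your version):

```ts
import { BaseSchema } from '@adonisjs/lucid/schema'

export default class extends BaseSchema {
  protected tableName = 'users'

  async up() {
    // Forward progress: create the users table.
    this.schema.createTable(this.tableName, (table) => {
      table.increments('id')
      table.string('email', 254).notNullable().unique()
      table.string('password').notNullable()
      table.timestamp('created_at')
      table.timestamp('updated_at')
    })
  }

  async down() {
    // The inverse of up: drop the table. Destructive, so be careful in production.
    this.schema.dropTable(this.tableName)
  }
}
```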
So, it's a file-based database that will be housed within our project's tmp directory. Your project most likely started with a tmp directory already. And now, since we've run our migrations, we've created this file-based database, which is a db.sqlite3 file inside of here. When we ran our migration, you'll notice that we got a lot of information here in our terminal. This is detailing each individual query that was run in order to execute our one migration thus far. You'll see that the very first check here was against an adonis_schema_versions table. This is a table used to track Lucid's schema versioning. We don't really need to worry about this table; it's self-managed by AdonisJS. The next check within here is for the last migration batch that was run. When we run our migrations, Lucid will run any migration that hasn't yet run. All migrations that are processed within a single run command are considered a batch. So whenever we ran our migration:run command here, it created our users table in the very first batch. If we had multiple migrations run in this single command, each one of those migrations would have been included in this single batch as well. This batch information is housed within the adonis_schema table. Again, just like adonis_schema_versions, this is self-managed by AdonisJS, and for the most part you won't need to worry about it. Using the info housed inside of that adonis_schema table, Lucid determines which migrations from our database migrations directory need to run, skipping those that have already run. So since we have now run our create users table migration, we would see inside of our adonis_schema table a row for this migration noting that it was run inside of batch number one. And a small note about these logged SQL statements here, in regards to the question marks and square brackets that we have: these are noting SQL parameterization within these queries.
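The skip-what-already-ran logic can be pictured with a tiny sketch. This is not Lucid's actual source, just an illustration of the idea: compare the migration file names against the rows recorded in adonis_schema, run what's missing, and stamp it with the next batch number.

```ts
interface SchemaRow {
  name: string   // migration file name, e.g. '1700000000000_create_users_table'
  batch: number  // batch the migration ran in
}

function planRun(allMigrations: string[], executed: SchemaRow[]) {
  const done = new Set(executed.map((row) => row.name))
  // Sorting by file name keeps the timestamp-based natural order.
  const pending = allMigrations.filter((name) => !done.has(name)).sort()
  // Each migration:run command gets a batch number one higher than the current max.
  const nextBatch = Math.max(0, ...executed.map((row) => row.batch)) + 1
  return { pending, nextBatch }
}
```

With one migration already recorded in batch 1, only the newer files come back as pending, and a subsequent run would be batch 2.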
It's a precautionary step that protects our database against SQL injection attacks. For example, it prevents somebody from being able to drop a table inside of our database via an input shown on our web page. The square brackets at the end of these queries represent an array of parameters that we're passing into the SQL query. And then each question mark serves as a placeholder for an item in that array, working its way from left to right, filling them in from our array. Oh, and actually there is a small change here from when I was preparing the series to right now, noted by this "Already up to date" line right here. This is saying that the migrations that AdonisJS found when we ran our migration:run command are already present inside of our adonis_schema table. So there was already a row for our create users table migration inside of our adonis_schema table, and our users table already existed inside of our database. If any migrations had actually run, we would have seen a "migrated" line printed out here for each of those migrations. And we will see that here in a moment. But the reason why we were already up to date here is because when we created this project, as part of its configuration step, AdonisJS went ahead and ran our migration:run command for us from the get-go. So this db.sqlite3 file has been sitting here quietly idle all along for us. Then these last three lines that we've yet to talk about are for schema class generation, and we'll talk about those in a little bit as well. By the way, we'll be using these queries that are printed out inside of our terminal to help understand what AdonisJS is doing underneath the hood as we run these migration commands. If, however, at any point you'd want to turn these off, you can easily do that within your configuration inside of the database configuration file.
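To visualize how the logged `?` placeholders pair with the parameter array, here is an illustration-only helper. Real database drivers bind values safely at the driver level rather than splicing strings; this sketch (including the example migration name) just shows the left-to-right pairing the video describes.

```ts
function fillPlaceholders(sql: string, params: Array<string | number>): string {
  let i = 0
  // Replace each `?` with the next item from the parameter array, left to right.
  return sql.replace(/\?/g, () => JSON.stringify(params[i++]))
}

// e.g. the kind of insert logged while recording a migration (hypothetical name):
const rendered = fillPlaceholders(
  'insert into "adonis_schema" ("batch", "name") values (?, ?)',
  [1, '1700000000000_create_users_table']
)
```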
If we go ahead and scroll down a little bit within the connections, we'll find a sqlite object configuring our SQLite connection parameters as well as options. If we scroll down a little bit further, we should see, just towards the bottom here, a debug flag that will be set to true if our app is in development and false if it is not. So if you want to override that and set it to false, that will hide all of those SQLite (or whatever your database driver is) log statements. So if you prefer a clean log, that might be your preference. Okay, next let's jump back into our migration here and talk about migration rollbacks. Whenever we roll back a migration, that's going to run the down method of each of the migrations in the batch that we're rolling back, undoing whatever we've specified to be done inside of this down method, which ultimately should be the inverse of what we're doing in the up method. So, back within our terminal here, if we go ahead and clear this out, we can do this with the ACE CLI's migration:rollback command. We'll hit enter there, and again we'll see a number of logs printed out. Rollbacks occur one batch at a time unless otherwise specified. So we can actually see that this is querying for the max batch number within our adonis_schema table to then get all migrations that are listed within that particular batch. Again, every time that we run a migration:run command, that increments the batch. So our max statement here found batch number one, which was the first batch that was run. So it's going to revert all the migrations that were executed in that batch, which just happens to be our create users table migration at this time. We can then see that it actually ran the drop table SQL statement, as specified by our migration right here. And then after it dropped it, we got a successful "reverted". So the "reverting" line notes that it's preparing to run the down statement.
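The debug flag being described lives on the connection object in the database config. A minimal sketch, assuming a config/database.ts shaped like recent AdonisJS starter kits (client name, file path, and surrounding keys here are assumptions; your generated file may differ):

```ts
// config/database.ts (fragment) — a sketch, not the exact generated file.
import app from '@adonisjs/core/services/app'
import { defineConfig } from '@adonisjs/lucid'

const dbConfig = defineConfig({
  connection: 'sqlite',
  connections: {
    sqlite: {
      client: 'better-sqlite3',
      connection: {
        filename: app.tmpPath('db.sqlite3'),
      },
      useNullAsDefault: true,
      migrations: {
        naturalSort: true,
        paths: ['database/migrations'],
      },
      // Logs every SQL statement while developing; set to false for a quieter terminal.
      debug: !app.inProduction,
    },
  },
})

export default dbConfig
```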
And then the "reverted" line notes that it successfully occurred. And then of course we again regenerate our schema classes, which we'll talk about momentarily. So at this point we no longer have a users table within our database, because it has been dropped. Before we move on, let's go ahead and clear this out. There are a few other migration commands offered by the ACE CLI that I want to talk about. Running node ace here, we can see our migration section, and there are six commands in total at this point in time. There is migration:fresh, which will drop all tables and rerun all our migrations, essentially giving us a clean data slate. There is migration:refresh, which will roll back and re-migrate, again doing one batch at a time unless otherwise specified; so it's essentially going to run the rollback command followed by the run command. Then there's migration:reset, which will roll all the way back regardless of batch, undoing all of the migrations within our migrations folder. Then there is migration:status, which will read our adonis_schema table to print out the status of all of the migrations, showing which have already been run and which are pending. So right now, if we were to go ahead and run node ace migration:status, we would see our create users table migration listed out here in a pending status, and it's pending because we just rolled it back. So it's no longer housed within our adonis_schema table, and since it's not in there, it's not a part of any batch. Okay. So for our application, we're going to need a little bit more than just our users table. And a migration should only consist of creations or alterations for a single table at a time. So if we need to make multiple tables or change multiple tables, then we're going to want to create a new migration for each table that we want to touch. Again, like everything else, we can do this with the ACE CLI.
There's a make:migration command, and all that we need to do is pass in the name of the table that we want to create. And like the other make commands, AdonisJS will keep to a convention-based naming system. The convention for table names is that they're pluralized. So if we run make:migration for our role table here, we'll see that it has now created a migration with the current timestamp, calling our table roles instead of the singular role that we provided. We can then jump into our new migration, and we'll see that the table name that we've specified is set right here, allowing it to conveniently be used for both our create and drop statements. If you are altering a table with a migration, there is also, if we take a look at the help options of the make:migration command, an alter flag right down here, which will slightly change the stub from create table to alter table, and then it will allow you to fill out the down method with whatever it is that you're changing in the up. Okay, I'm going to clear that out and close our terminal so that we can define our roles table here. Again, as a convenient step, AdonisJS has already defined a couple of columns within our table. We have an auto-incrementing id as well as created_at and updated_at timestamps. The id uniquely identifies the role inside of our table and is set up as a primary key. The created_at allows us to track when a row was created, and updated_at allows us to track when a row was last updated. And of course, it has a convention here as well of snake_case column names for our database, so we'll be keeping with that throughout. Okay, let's add a name column to our roles table. Using this migration structure here, we can reach through to the table that's being provided inside of this create table callback to add additional things to our table whenever our up method is run.
Off of this table builder, we have a number of methods that we can use to describe the type of the column that we're adding. Since we're adding a name column, that's going to be a string type. So we can call a string method here that accepts the column name and an optional max length for that column as well. For our column name, we'll provide name, and we can keep it as simple as that, or we can optionally add a max length. Role names are going to be relatively short, so maybe we want to limit that to 30 characters for this name column. The purpose of the length there is ultimately to save us on space, because now our table will only designate 30 characters of space for this column inside of our database. Off of the method designating a type for the column, we can chain additional options as well. By default, columns that we define here will be nullable. We can make them not nullable by chaining a notNullable method on the column definition, requiring a value for our name in order for a role to be created inside of our roles table. And since the id here is our primary key, that too is going to be not nullable automatically. Next, we need to be able to add a role onto our user. So, let's hop back over into our create users table migration, scroll back up to our up method, and add a role_id so that we can assign a role to an individual user inside of this table. This will be an integer, which is the underlying type for our increments id method, and we'll give this a column name of role_id, keeping with that snake_case naming convention. The increments columns are also unsigned, so we'll want to match that here, meaning that the value cannot be negative. Then we're going to want to designate a foreign key constraint for this role_id, ensuring that any value that we try to insert as the role_id actually exists within our roles table in the id column.
To do that, we can state that this role_id column references the id column in the table roles. The column name also comes from convention based off of this relationship: role being the singular version of the table name, id being the primary key of that table used to bind the relationship. We could alternatively reference both the table and column within the single string of the references method, if we wished, as roles.id. That would allow us to drop the inTable call, because the table is already included in the references string; it's just an alternative way to define it. Finally, we also want this relationship to be not nullable: each user must have a role_id. And we can also define a default value for it if it's omitted when inserting into our users table. So for right now, we can just hardcode a 1 in there. And now, anytime that we insert a user into our users table, if we don't explicitly define a role for them, role 1 will be assigned. So we can give that a save. Our squiggly there is just for formatting, because next we need to talk about our migrations' order of execution. In most databases, the order that we execute our migrations in does matter. In order for our users table to have a role_id, for example, the roles table must have been created first. Otherwise, we won't be able to add that foreign key constraint to ensure that the user isn't being assigned a role that doesn't exist. SQLite, the database driver that we're using, handles foreign keys lazily, so that won't actually be the case here: we could run these migrations just as we have them right now, and everything would execute just fine. We will, however, go ahead and treat this as a good practice, because if you then switch to PostgreSQL or MySQL, the migration order that we have here will be an issue.
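Pulling the last few steps together, here is a hedged sketch of the roles migration and the role_id column (column names follow the video; the import path and exact starter columns are assumptions):

```ts
import { BaseSchema } from '@adonisjs/lucid/schema'

// create_roles_table migration
export default class extends BaseSchema {
  protected tableName = 'roles'

  async up() {
    this.schema.createTable(this.tableName, (table) => {
      table.increments('id')
      // 30-char limit keeps the column compact; a name is required for every role.
      table.string('name', 30).notNullable()
      table.timestamp('created_at')
      table.timestamp('updated_at')
    })
  }

  async down() {
    this.schema.dropTable(this.tableName)
  }
}
```

And inside the users migration's createTable callback:

```ts
// Foreign key to roles.id; unsigned to match increments, required, defaults to role 1.
table.integer('role_id')
  .unsigned()
  .references('id')
  .inTable('roles')
  .notNullable()
  .defaultTo(1)
```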
To put into practice what I'm on about: if we try to run these migrations at present, since we're using a natural sort, they're going to execute in the same order listed within our migrations folder here. So it's going to create our users table first, followed by our roles table. In PostgreSQL or MySQL, since our users table references role_id, we're going to get a foreign key constraint error when we try to create this table, because it's not going to be able to find the roles table as it's creating the users table. So, we need our roles table to be created first, hence listed first in this natural sort. And with this being a natural sort, we can easily do that by just renaming these files to change the sort order. What I'm going to do is right-click and rename, select the timestamp there, and give that a copy. Then we can switch over to our create users table migration, right-click it, rename, select its timestamp, and paste in the one that we copied from our create roles table migration. In order to change the natural sort, all that we need to do is increment it by one, taking the last digit from 7 to 8. And now our roles table migration is listed first, meaning that it will be executed first when we run these migrations. So if we jump back into our terminal here and run node ace migration:run, we should now see, between our SQL log statements, that we have migrated our roles table followed by our users table, following that natural sort. If we then go ahead and do node ace migration:status, we should see that we have both our roles and users table migrations in a completed state, assigned to batch number one. So if we were to run node ace migration:rollback, both of those migrations would be rolled back, meaning that both of those tables are then dropped, as we can see right there. And you'll also note that it's doing it in the inverse order, going from the latest to the oldest, timestamp-wise.
And again, that's helpful for our foreign key constraint, because we can't drop our roles table in PostgreSQL or MySQL if the users table is still relying on it with a foreign key constraint. And it's this references and inTable pairing that's creating that foreign key constraint for us inside of our migration. Okay, let's clear this out. We have a couple more migrations to go ahead and create here. So, we'll do node ace make:migration and create one for a profiles table, and we can fill that out. We'll jump into it. Within this migration, we want each row to be bound to a user, and this will be a one-to-one relationship: one user is going to have one profile. So, we'll do table.integer and give this a name of user_id, make it unsigned again because it's referencing an increments column, and we can use our references method to state that this should create a foreign key constraint to our users table's id column. We also want this to be not nullable, to ensure that we don't have any ghost profiles sitting out there. Next, we can go ahead and create a biography. We'll do table.string and just give this a short name of bio; 255 should be the default max, but we can go ahead and explicitly define that there as well if we wish. And then we can also add a boolean with a column name of is_public, and we can default that to true, so that whenever a profile is created, by default it will be set to public. All right, that should do for our profiles table. We can then jump back into our terminal and create another one: node ace make:migration for a challenges table. All right, we can hide that back away and jump into that migration file.
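The profiles columns just described can be sketched like this, inside the profiles migration's createTable callback (a sketch following the video's column names, not the verbatim file):

```ts
// One-to-one link back to users; required so no ghost profiles exist.
table.integer('user_id')
  .unsigned()
  .references('id')
  .inTable('users')
  .notNullable()
// 255 is typically the default string length; stated explicitly here for clarity.
table.string('bio', 255)
// Profiles are public by default when created.
table.boolean('is_public').defaultTo(true)
```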
Inside this one, we'll go ahead and note which user created the challenge, again using a foreign key constraint. We'll go against convention for this one and give it the name creator_id instead of user_id, but we'll still make it unsigned and have it reference our users table's id column to create the foreign key constraint. Each challenge must also have a creator, so we'll make that not nullable as well. In addition to that, we can go ahead and add a string column of text and make that not nullable too. And we can give it an integer point value with a column name of points; that too will be not nullable. So our creator_id here is going to be a many-to-one relationship, meaning that a user can have many challenges that they've created, but a challenge can only be created by one user. And we'll go against convention for this one to show how you can rig that up inside of our models, which serve as a translation layer between our database and our code. Finally, we need to be able to join users as members, or participants, to a challenge. To do that, we're going to need a many-to-many relationship, so we'll create a new table just for that: node ace make:migration, and we'll call this challenge_user. This will serve as a pivot table joining our users table to our challenges table inside of our database, allowing a user to belong to many challenges as a participant and a challenge to have many users as participants. The first thing that we're going to want to do inside of this migration is change the plural version of our table name to a singular one. The default naming convention Lucid will look for with pivot tables is a sorted concatenation of the two related entities in singular form. Our challenge table alphabetically comes first, so that gets listed first, joined to our user table, which comes second: c comes before u alphabetically, so that determines which one goes first in the pivot table name by convention.
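The challenges columns described above can be sketched as follows, inside the challenges migration's createTable callback (a sketch; creator_id deliberately breaks the user_id convention, which the model layer can map later):

```ts
// Many-to-one: many challenges per user, one creator per challenge.
table.integer('creator_id')
  .unsigned()
  .references('id')
  .inTable('users')
  .notNullable()
// The challenge's prompt text and its point value, both required.
table.string('text').notNullable()
table.integer('points').notNullable()
```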
Now with pivot tables, it's important to note, especially if you're coming from another ORM, that Lucid does not support composite primary keys. So we want to keep our increments id here to serve as the primary key for our pivot table rows. In addition to that, we also want to define a foreign-key-constrained column for each side of the relationship we're binding between our challenges and users tables. So we'll do a table.integer for our challenge_id, make it unsigned just like the others, have it reference the challenges table's id column, and make that one not nullable too. In addition to that, we'll do another one for our user: user_id, also unsigned, referencing the users table's id column, and not nullable. This allows us to assign a user as a participant to a particular challenge. Having this as a pivot table gives us flexibility, allowing one user to belong to many challenges and many challenges to have many participants. Then finally, for this pivot table, we want to track whether or not the user has completed the challenge. For that, we can add a timestamp with a column name of completed_at. By using a timestamp here, we have a single column with two different purposes. One, we're tracking when exactly the user completed the challenge. And two, by leaving this nullable, we can note whether or not the user has completed the challenge by the presence of a value in this column: if it's null, the user hasn't completed the challenge; if it has a date-time in it, they have. So that's just a nifty little way to do two things in a single package. Okay, I missed saving my challenges table there, so I'm going to jump back into that and give that a save. Let's go ahead and run these now: jump back into our terminal and do node ace migration:run.
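Putting the whole pivot table together, a hedged sketch of the challenge_user migration as described (import path and timestamp columns are assumptions):

```ts
import { BaseSchema } from '@adonisjs/lucid/schema'

export default class extends BaseSchema {
  // Singular, alphabetically sorted names: challenge before user.
  protected tableName = 'challenge_user'

  async up() {
    this.schema.createTable(this.tableName, (table) => {
      // Lucid doesn't support composite primary keys, so keep the surrogate id.
      table.increments('id')
      table.integer('challenge_id').unsigned().references('id').inTable('challenges').notNullable()
      table.integer('user_id').unsigned().references('id').inTable('users').notNullable()
      // Nullable on purpose: null = not completed; a timestamp = when it was completed.
      table.timestamp('completed_at').nullable()
      table.timestamp('created_at')
      table.timestamp('updated_at')
    })
  }

  async down() {
    this.schema.dropTable(this.tableName)
  }
}
```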
Give that a run-through, and we're going to get plenty of logs here, because for each one of these migrations, not only are we creating the table, but we're also creating the foreign key constraints that we've set up and adding the migration name into the adonis_schema table. So there are a few different things happening per migration. For example, this whole section right here is just for our challenge_user pivot table. So you can either take my word for it that each one of these tables now exists inside of our database, or we can see it for ourselves. Let's go ahead and jump into the extensions panel, command shift X, or you can click on the little cube icon right there, and install a Visual Studio Code extension called SQLite Viewer. It should be this one here by Florian Klampfer. We can go ahead and just click install there. It might ask you whether or not you want to trust this publisher; you can click learn more there to do your own proper vetting. I'm going to go ahead and click trust publisher and install. And now with this, we can jump back into our file explorer. I'm going to hide our terminal away there and close this extension panel out. You should notice that the icon for our db.sqlite3 file down inside of our tmp folder has changed; it's now a little database icon right here. Now, you can also switch your file icons in Visual Studio Code, so that may look different for you if you've done that. But what this extension allows us to do is click on this file, and it will open up a viewer for our SQLite database. The first table that it's plopped me into, at least, is the adonis_schema table. Inside of here, we can see that we have a row for each one of the migrations that we have executed thus far, which just happens to be all five of the migrations that we have. In addition to that, we can see which batch each ran in and the migration time, or the time that the migration was run.
The schema version is, again, version-control management for the AdonisJS schema; we really don't need to worry about that. Then we also have our challenge_user pivot table, our challenges, profiles, and roles tables. The sqlite_sequence table is an internal SQLite table keeping track of ids. For example, it's got a sequence of eight there for our adonis_schema table, and if we go look at our adonis_schema table, the latest id is eight. So that's all that's doing: keeping track of ids. And then we also have our users table. Jumping back into our pivot table, we can note primary keys by the golden key icon and foreign keys by a gray key icon. So we have our challenge_id and user_id foreign keys there. And as we add data to these tables, we'll start to see rows populated within here as well. Okay, we can close that out. And in the next lesson, we're going to talk about the schema files that the migrations here have generated for us, as well as models as a
