How do software projects achieve zero downtime database migrations?

📘 T3 Stack Tutorial: 1017897100294.gumroad.com/l/j...
🤖 SaaS I'm Building: www.icongeneratorai.com/
✂️ Background Cutter: www.backgroundcutter.com/
💬 Discord: / discord
🔔 Newsletter: newsletter.webdevcody.com/
📁 GitHub: github.com/webdevcody
📺 Twitch: / webdevcody
🤖 Website: webdevcody.com
🐦 Twitter: / webdevcody

Comments: 48

  • @alazar7685 • 11 months ago

    Cody, you are like a window into the gated community of production-grade software and service design. Your work is always appreciated.

  • @cybroxde • 11 months ago

    Can confirm, this is what we do as well. We usually keep [and keep updating] the old column for 1-2 iterations, in case some unexpected issue shows up. Just make sure to actually schedule deleting it eventually, otherwise it can become very confusing.
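
    A minimal sketch of that scheduled cleanup step, assuming a Postgres `users` table and the node-postgres client; it should only run once no deployed code reads or writes the old column anymore:

    ```typescript
    // Hypothetical "contract" migration: run it a release or two after the new
    // columns have gone live and every code path has stopped using `name`.
    import { Client } from "pg";

    async function dropLegacyNameColumn() {
      const client = new Client({ connectionString: process.env.DATABASE_URL });
      await client.connect();
      try {
        // Dropping the column is quick but irreversible, so schedule it explicitly.
        await client.query("ALTER TABLE users DROP COLUMN IF EXISTS name;");
      } finally {
        await client.end();
      }
    }
    ```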

  • @anoozg9565 • 29 days ago

    Next level content. Thanks for sharing this info.

  • @XelleczixTV • 11 months ago

    I was literally doing some research on this topic today. Thank you so much, we want more content like this!

  • @FarisEdits • 11 months ago

    Me too!😅

  • @muhammadmejanulhaque3305 • 1 month ago

    Thanks, very well explained.

  • @SeibertSwirl • 11 months ago

    Niiiice babe!!! Good job!

  • @SaiPhaniRam • 5 months ago

    Awesome explanation. Thanks for this 👍

  • @rishabhgupta1282 • 3 months ago

    Great explanation!!

  • @Lolleka • 8 months ago

    That's like the natural thing to do. That is how I handle migrations. Great content!

  • @patolorde • 6 months ago

    Thanks for this great video

  • @luisrortega • 11 months ago

    We use a middleware approach to facilitate "prepping" the data for an upgrade... Normally, a process will run to do the updates on the names (we don't use a database script because of excessive locks). In conjunction with this, the middleware can decide to (a) read from the name field, and (b) update both sides if the customer sent an update... it will update the name field, but also the first/last name fields. This normally takes longer than a database script but ensures a smoother transition. Once all records are updated, the same middleware can be switched to read from the first/last name fields. The same is true when building new complex indexes... sometimes we have to move the data from field A to field B (the field with the index) and have the middleware select/update the proper target field. If you use a Node/Express "getCustomer" route, you can add a quick "middleware" between the user request and the actual database.
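
    A rough sketch of that kind of dual-write/dual-read route, assuming Express and a hypothetical `db` query helper; the `customers` table and its columns are made up for illustration:

    ```typescript
    import express from "express";
    import { db } from "./db"; // hypothetical helper wrapping your SQL client

    const app = express();
    app.use(express.json());

    // While the migration is in flight, an update writes BOTH the old `name`
    // column and the new `first_name`/`last_name` columns.
    app.put("/customers/:id", async (req, res) => {
      const { firstName, lastName } = req.body;
      await db.query(
        `UPDATE customers
            SET name = $1, first_name = $2, last_name = $3
          WHERE id = $4`,
        [`${firstName} ${lastName}`, firstName, lastName, req.params.id]
      );
      res.sendStatus(204);
    });

    // Reads can prefer the new columns once a row has been migrated, and fall
    // back to splitting the old `name` value until then.
    app.get("/customers/:id", async (req, res) => {
      const { rows } = await db.query(
        "SELECT name, first_name, last_name FROM customers WHERE id = $1",
        [req.params.id]
      );
      const row = rows[0];
      const [first, ...rest] = (row.name ?? "").split(" ");
      res.json({
        firstName: row.first_name ?? first,
        lastName: row.last_name ?? rest.join(" "),
      });
    });
    ```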

  • @luisrortega • 11 months ago

    Forgot to mention that those "feature middlewares" are relatively easy to activate, deactivate, and detach... you don't want to keep that old code around... :)

  • @FuzzyAnkles • 11 months ago

    Always good content from you, Cody! Jah bless!

  • @cloudcover540 • 11 months ago

    Are there scenarios in which creating a fully separate database instance with the modified schema is more effective for migrations? My assumption is that it is easier to handle zero downtime, testing, and rollbacks by just switching the database connection. I suppose this is also more appropriate for complicated schema changes, but it is more expensive.

  • @WebDevCody • 11 months ago

    Yes, and I do think that sometimes doing tons of row writes into your production database can cause a lot of performance issues for your users, so migrating all the data into a new database and switching users over to that database might be a better solution. But that's getting into the territory of blue-green deployments.

  • @tiennguyenthanh7344 • 6 months ago

    Hello. For example, if I have a worker process that updates the "first name" and "last name" columns from the original "name" column, it may incur a write lock on that row, unfortunately at the same time a user is changing that row. What would happen in this case?

  • @AbdulFourteia • 11 months ago

    Speaking of databases, what is your experience with connecting relational databases to serverless functions? From what I understand, databases are not meant to handle many connection requests, so they don't scale well. The likes of PlanetScale and Supabase are there to plug this gap, but they can be pricey.

  • @abhishekgn2626 • 11 months ago

    In my experience with AWS, RDBMS and serverless do not scale well together. The usual RDBMS connection defaults are very small compared to the parallelism at which serverless can execute. The solution is to use a proxy tool like RDS Proxy, which acts as a multi-channel connection pool, reuses already established connections to the database, and keeps the connection count to the database consistent no matter what load serverless is running at. We were able to achieve ~20k TPS with this setup for read operations on an RDBMS via AWS Lambda.
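
    A minimal sketch of the function side of that setup, assuming AWS Lambda, node-postgres, and an RDS Proxy endpoint supplied via `PGHOST`; the point is that the pool lives at module scope, so warm invocations reuse connections instead of opening new ones:

    ```typescript
    import { Pool } from "pg";

    // Created once per Lambda container, not per invocation, and kept small so
    // thousands of concurrent containers don't exhaust the database.
    const pool = new Pool({
      host: process.env.PGHOST, // e.g. the RDS Proxy endpoint
      max: 2,
      idleTimeoutMillis: 10_000,
    });

    export const handler = async (event: { userId: string }) => {
      const { rows } = await pool.query(
        "SELECT first_name, last_name FROM users WHERE id = $1",
        [event.userId]
      );
      return rows[0];
    };
    ```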

  • @ChigozieOrunta • 10 months ago

    Thanks, Cody. You are a genius! I am curious to know how I could migrate a production WordPress database to another without downtime. Your insights would be much appreciated as always.

  • @WebDevCody • 10 months ago

    I’m not sure; I don’t really work with WordPress. Are you installing a plugin that requires a migration?

  • @ChigozieOrunta • 10 months ago

    @@WebDevCody No. Actually, I have a WordPress e-commerce database in production that requires migration to another database. The caveat is that users must not have any downtime whatsoever. I must confess, watching your video was a bit of a relief, as I have been racking my brain over how to get this done. Any insights, like I mentioned earlier, would be so much appreciated. Thanks once again.

  • @WebDevCody • 10 months ago

    @@ChigozieOrunta Is this a huge database, and do you have tons of users? Also, are users able to add records (leave comments, etc.)?

  • @ChigozieOrunta • 10 months ago

    Hi Cody. Yes, it is a big one, and users are constantly purchasing stuff and making orders, and all of that activity data for each user is stored in the database.

  • @WebDevCody • 10 months ago

    @@ChigozieOrunta How long does a migration take? Have you tried it out? You may just be able to migrate all the data into the new database, but also set up a trigger on your database that writes any changes to the old table into the new one. If it also requires a WordPress update, then you may need a separately running WordPress instance which has been updated, and switch users over using a load balancer. Sorry, it's hard to say for sure without every single fact, and I've never worked with WordPress.
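
    For the trigger idea, a rough sketch of what it could look like, assuming MySQL (what WordPress runs on), a copy of the table named `wp_orders_new` on the same server (triggers cannot reach a different server), and the `mysql2` client; the table and column names are hypothetical:

    ```typescript
    import { createConnection } from "mysql2/promise";

    async function installSyncTriggers() {
      const conn = await createConnection(process.env.DATABASE_URL!);

      // Mirror every new row from the old table into the new one.
      await conn.query(`
        CREATE TRIGGER wp_orders_sync_insert
        AFTER INSERT ON wp_orders
        FOR EACH ROW
        INSERT INTO wp_orders_new (id, user_id, total, created_at)
        VALUES (NEW.id, NEW.user_id, NEW.total, NEW.created_at)
      `);

      // Keep already-copied rows up to date when the old table changes.
      await conn.query(`
        CREATE TRIGGER wp_orders_sync_update
        AFTER UPDATE ON wp_orders
        FOR EACH ROW
        UPDATE wp_orders_new
           SET user_id = NEW.user_id, total = NEW.total
         WHERE id = NEW.id
      `);

      await conn.end();
    }
    ```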

  • @mayankmaheshwari2544 • 6 months ago

    Can you explain database migration in more detail? For example, what happens to the data present in the earlier database while users are still requesting it? Do we replicate all the data from the earlier database to the new database and then divert user requests?

  • @WebDevCody • 6 months ago

    Are you asking if you need to move data from one database to another?

  • @mayankmaheshwari2544 • 6 months ago

    @@WebDevCody Yes. While user requests keep processing and depend on the data present in the earlier database, how does that happen?

  • @real-oppenheimer • 11 months ago

    In your example, if new data is written into the two new columns but reading is still done from the old column, wouldn't the user get stale data? Imagine you update your first name in an app and after reloading the page it still shows the old name, no? The solution to that is to write the new data not only to the new columns but also to the old column, right? In the process you described, I don't think you can skip this step (continuing to write to the old column) without having stale data.

  • @WebDevCody • 11 months ago

    You write into both the old and the new. Maybe I didn't explain that well, but in the diagram you can see a line going from the API into all three columns, and I don't delete the old one until after all the data is migrated.
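
    In code, the "write into all three columns" step might look roughly like this; a sketch assuming Prisma and the `name` → `firstName`/`lastName` split from the video, with the new fields already added to the schema:

    ```typescript
    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    // Transitional "expand" version of the handler: it still writes the old
    // `name` column (which the live API reads), and also fills the new columns
    // so the backfill script has less to catch up on.
    async function updateUserName(id: string, firstName: string, lastName: string) {
      await prisma.user.update({
        where: { id },
        data: {
          name: `${firstName} ${lastName}`, // old column, still read everywhere
          firstName,                        // new columns, not read yet
          lastName,
        },
      });
    }
    ```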

  • @CrackerWeb • 11 months ago

    @@WebDevCody Nope, why would you write to both the old and new fields? You only need to write to the new fields: `firstname` and `lastname`. Also, when you first read the data so the user can edit it, you have to check whether `firstname` and `lastname` already contain the new record; if they do, use those for editing the record again, because while data is being copied from `name` to `firstname` and `lastname`, another user might also want to update their name, whether they have already done it once or are doing it for the first time. Once the copy process is done, you can change your code and remove the `name` code and the `name` column from the database table.

  • @WebDevCody • 11 months ago

    @@CrackerWeb I'm not sure what point the question is about. Before you run a script to move all the data into the new fields, you need to deploy an API change which writes data into both the new and old fields; otherwise your API will be reading from the old field and only writing to the new fields. After all the data has been updated, you update the API code to read from the new fields. If anything is confusing, read the article on Prisma's blog, because they are smarter than I am.
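
    And the one-off script that moves the existing data could be a small batched backfill, so no single statement holds locks on the whole table (a sketch assuming Postgres and node-postgres; the batch size and the naive name split are arbitrary):

    ```typescript
    import { Client } from "pg";

    // Backfill `first_name`/`last_name` from `name` in small batches so each
    // UPDATE only locks a few hundred rows at a time.
    async function backfillNames() {
      const client = new Client({ connectionString: process.env.DATABASE_URL });
      await client.connect();
      try {
        let updated = 0;
        do {
          const result = await client.query(`
            UPDATE users
               SET first_name = split_part(name, ' ', 1),
                   last_name  = split_part(name, ' ', 2)
             WHERE id IN (
               SELECT id FROM users WHERE first_name IS NULL LIMIT 500
             )
          `);
          updated = result.rowCount ?? 0;
          console.log(`backfilled ${updated} rows`);
        } while (updated > 0);
      } finally {
        await client.end();
      }
    }
    ```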

  • @nimmneun • 11 months ago

    I honestly gotta ask our admins how we do it exactly, especially when tables with 10s and 100s of billions of rows are involved. I think it was done on one of the replicas first, which then becomes the new master after deployment, etc.

  • @WebDevCody • 11 months ago

    Yeah, sometimes a migration could take hours, so they may just do it on a non-live database and have a sync stream set up between the live and non-live data.

  • @anonAcc575 • 11 months ago

    Why not just update the database incrementally and then swap the API? If the database was going to be updated anyway for less frequent users, then I didn't quite understand the reason for the first part.

  • @WebDevCody • 11 months ago

    The first part meaning writing to both the old and new schema? Because if you have real traffic, the time between running the migration to fill in the columns and deploying the new API version could take a few seconds or minutes. That means if a single person creates a new account during that time and hits the old logic, the new columns would be missing that data, so you have to have your API write to both to prevent data loss.

  • @anonAcc575 • 11 months ago

    @@WebDevCody Now I get it, thanks!

  • @curlymike • 11 months ago

    Ok, so your first name is Cody...

  • @SkEiTaDEV • 11 months ago

    Next time, please put the Prisma link in the video description so we don't have to type it out from the video.

  • @WebDevCody • 11 months ago

    Oops yeah sorry

  • @joshuagabal9444 • 6 months ago

    The problem with this is that users who are logged into the SaaS and doing their stuff are still using the old API. If you update the system, you expect every user to be using the new API. What are you going to do with the users who are still logged into the "old" UI that uses the old API? The old API needs to keep working.

  • @WebDevCody • 6 months ago

    Never make breaking changes to the old API; only add new functionality to old APIs.
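
    Concretely, "additive only" can mean the response keeps its old shape and just gains new fields, so clients still running the old UI keep working (a sketch; the field names follow the video's example):

    ```typescript
    // Old clients keep reading `name`; newer clients can start using the new
    // fields. Nothing that previously existed is removed or renamed.
    function toUserResponse(user: { firstName: string; lastName: string }) {
      return {
        name: `${user.firstName} ${user.lastName}`, // kept for old clients
        firstName: user.firstName,                  // added for new clients
        lastName: user.lastName,
      };
    }
    ```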

  • @sadensmol • 11 months ago

    This is normally called a "3-step migration". The one thing wrong in your video: on the 1st step you write to both the old and new schema.

  • @WebDevCody • 11 months ago

    This is called expand and contract, and I literally say to write to both the old and new schemas in the video. I'm not sure what you mean.