Zero Downtime MySQL Migration
Migrating a massive MySQL table with over a billion rows while ensuring zero downtime can be a complex task. However, using a shadow table and triggers, you can perform this migration seamlessly. In this blog post, we’ll walk you through the process with a practical example, helping you achieve a smooth migration without affecting your live environment.
Why Zero Downtime Migration?
Zero downtime migration is crucial for applications requiring high availability. Downtime can result in lost revenue, poor user experience, and brand damage. By using a shadow table and triggers, you can migrate large datasets without interrupting your service.
Example Scenario
Suppose you have an e-commerce application with a large orders table that you need to migrate. The orders table has the following schema:
CREATE TABLE orders (
order_id BIGINT AUTO_INCREMENT PRIMARY KEY,
customer_id BIGINT,
product_id BIGINT,
order_date DATETIME,
status VARCHAR(20),
amount DECIMAL(10, 2)
);
Your goal is to migrate this table to a new schema (in this example, adding an index on customer_id) without downtime.
Step-by-Step Guide to Zero Downtime Migration
1. Preparation
Start with a comprehensive plan:
Define the scope and objectives.
Analyze the current schema and data volume (a quick size check is shown below).
Identify potential bottlenecks and challenges.
Ensure Data Consistency
Ensure your data is consistent and that you have recent backups to prevent data loss and discrepancies during the migration process.
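As part of analyzing data volume, you can estimate the table's size from information_schema. Note that table_rows is only an approximation for InnoDB tables:
-- Approximate row count and on-disk size of the orders table
SELECT table_rows,
       ROUND(data_length / 1024 / 1024 / 1024, 2) AS data_gb,
       ROUND(index_length / 1024 / 1024 / 1024, 2) AS index_gb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
  AND table_name = 'orders';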
2. Create the Shadow Table
Create a new table (orders_new) that matches the schema of your original table, then apply whatever schema change you are migrating to. In this example, the change is an extra index on customer_id.
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new
ADD INDEX idx_customer_id (customer_id);
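Before moving on, you can confirm that the shadow table has the structure you expect:
SHOW CREATE TABLE orders_new;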
3. Implement Triggers for Real-Time Sync
To keep your data synchronized, create triggers on the original orders table. These triggers will capture INSERT, UPDATE, and DELETE operations and apply them to the shadow table. Because the trigger bodies use BEGIN ... END blocks, remember to switch the statement delimiter (shown as DELIMITER $$) when creating them from the mysql client.
a. INSERT Trigger
DELIMITER $$
CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    INSERT INTO orders_new SET
        order_id = NEW.order_id,
        customer_id = NEW.customer_id,
        product_id = NEW.product_id,
        order_date = NEW.order_date,
        status = NEW.status,
        amount = NEW.amount;
END$$
DELIMITER ;
b. UPDATE Trigger
DELIMITER $$
CREATE TRIGGER orders_after_update
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
    UPDATE orders_new SET
        customer_id = NEW.customer_id,
        product_id = NEW.product_id,
        order_date = NEW.order_date,
        status = NEW.status,
        amount = NEW.amount
    WHERE order_id = NEW.order_id;
END$$
DELIMITER ;
c. DELETE Trigger
DELIMITER $$
CREATE TRIGGER orders_after_delete
AFTER DELETE ON orders
FOR EACH ROW
BEGIN
    DELETE FROM orders_new WHERE order_id = OLD.order_id;
END$$
DELIMITER ;
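Once the triggers are created, you can confirm they are registered (the LIKE pattern matches the table name, not the trigger names):
SHOW TRIGGERS LIKE 'orders';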
4. Initial Data Migration
Copy the existing data from the orders table to the orders_new table in batches. This minimizes the load on your database and avoids long-running locks. Because the triggers are already writing live changes into orders_new, use INSERT IGNORE so rows that a trigger has already replicated do not cause duplicate-key errors:
INSERT IGNORE INTO orders_new
SELECT * FROM orders
WHERE order_id > 0 AND order_id <= 10000;
Repeat this, advancing the order_id range each time, until all existing rows have been copied. You can automate the batching with an external script or, as shown below, a stored procedure:
-- Batched copy driven by a stored procedure (the procedure name is illustrative)
DELIMITER $$
CREATE PROCEDURE copy_orders_in_batches()
BEGIN
    DECLARE last_id BIGINT DEFAULT 0;
    DECLARE max_id BIGINT DEFAULT 0;
    DECLARE batch_size INT DEFAULT 10000;
    SELECT COALESCE(MAX(order_id), 0) INTO max_id FROM orders;
    WHILE last_id < max_id DO
        -- INSERT IGNORE skips rows already copied by the triggers
        INSERT IGNORE INTO orders_new
        SELECT * FROM orders
        WHERE order_id > last_id AND order_id <= last_id + batch_size;
        SET last_id = last_id + batch_size;
    END WHILE;
END$$
DELIMITER ;
CALL copy_orders_in_batches();
5. Verify Data Consistency
Once the initial data migration is complete and the triggers are in place, verify data consistency between the two tables. Ensure both tables have the same data.
SELECT COUNT(*) FROM orders;
SELECT COUNT(*) FROM orders_new;
-- Check sums of critical columns to ensure data integrity
SELECT SUM(amount) FROM orders;
SELECT SUM(amount) FROM orders_new;
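For a finer-grained check than table-level totals, you can compare per-day aggregates between the two tables; any date where they disagree shows up in the result. This is only a spot check, not a full row-by-row diff:
-- Days on which the two tables have different row counts
SELECT o.d, o.cnt AS orders_cnt, n.cnt AS orders_new_cnt
FROM (SELECT DATE(order_date) AS d, COUNT(*) AS cnt FROM orders GROUP BY d) AS o
LEFT JOIN (SELECT DATE(order_date) AS d, COUNT(*) AS cnt FROM orders_new GROUP BY d) AS n
       ON n.d = o.d
WHERE n.cnt IS NULL OR n.cnt <> o.cnt;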
6. Switch Over
When you are confident that the orders_new table is fully synchronized with the orders table, swap the two tables. Because the swap is a single rename, your application keeps using the name orders and does not need to be reconfigured; the switch is near-instant and causes no downtime if done correctly.
RENAME TABLE orders TO orders_backup, orders_new TO orders;
This renaming process is atomic in MySQL, ensuring no data is lost during the switch.
7. Monitor and Cleanup
After the switch, monitor your application closely for any issues. If everything is working fine, you can drop the old table (now renamed orders_backup) after a safe period.
DROP TABLE orders_backup;
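The triggers created in step 3 stayed attached to the old table through the rename and still reference orders_new, which no longer exists under that name; they are removed automatically when orders_backup is dropped. If you keep the backup table around for a while, you may want to drop them explicitly in the meantime:
DROP TRIGGER IF EXISTS orders_after_insert;
DROP TRIGGER IF EXISTS orders_after_update;
DROP TRIGGER IF EXISTS orders_after_delete;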
Conclusion
Migrating a massive MySQL table with zero downtime is challenging but achievable with the right approach. By using a shadow table and triggers, you can ensure a seamless transition with minimal impact on your application’s availability. Planning, careful execution, and thorough testing are key to a successful migration.
This method allows you to handle even the largest tables efficiently, keeping your application up and running throughout the migration process. Happy migrating!