We've just finished some fairly serious data shuffling in the WLD database.
We've been working on changes to reduce the storage space private messages take up, aiming for a reduction of around 80%, and also to dramatically lower the database hit each time a new message is sent (a write-query reduction of around 75%).
However, one of the changes required a schema change to the main `messages` table (the table that stores the references to messages for each member). The problem we had: there were nearly a quarter of a billion records in that table.
We estimated that running an `ALTER TABLE` on that, waiting for the changes to replicate across all the slaves, and then for the slaves to catch up again, could take up to two days, which would mean two days of effective downtime. Obviously, this wasn't an option, so we had to come up with an alternative solution.
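For context, the direct approach would have been a single blocking statement along these lines (the column change shown is hypothetical; the post doesn't describe the actual schema change):

```sql
-- A single blocking DDL statement. On MySQL of this era, ALTER TABLE
-- rebuilds the whole table, copying all ~250 million rows while
-- holding locks, and the statement then replays on every slave.
ALTER TABLE messages
    MODIFY COLUMN body_ref INT UNSIGNED NOT NULL;  -- hypothetical change
```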
What we ended up doing was dumping all the data in the existing `messages` table to a file, and then reimporting it onto each slave (and the master, unreplicated) individually, into a new table, `messages_new`. That in itself took around 36 hours on and off, dropping various DB slaves out of the database pool, including our superstar operations manager Ian working from around 2am–5am this morning while the system was quiet.
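The dump-and-reimport step might look roughly like this on each server (file paths and the `SET sql_log_bin` session trick are illustrative assumptions, not taken from the post):

```sql
-- Export once (path is illustrative):
SELECT * FROM messages
    INTO OUTFILE '/tmp/messages.dump';

-- Then on each slave, and on the master with binary logging disabled
-- for the session so the import doesn't replicate:
SET sql_log_bin = 0;   -- master only: keep this import out of the binlog
CREATE TABLE messages_new LIKE messages;
-- ...apply the new, smaller schema to messages_new here...
LOAD DATA INFILE '/tmp/messages.dump' INTO TABLE messages_new;
SET sql_log_bin = 1;
```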
By the time we'd imported all this data, the original `messages` table had accumulated another two days' worth of data, which also had to be migrated, so we repeated what we'd already done (dump and import), but only for the new data (which took significantly less time, but was still a very manual process).
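Catching up just the new data can be done incrementally, assuming the table has an auto-increment primary key and the high-water mark from the first export was recorded (both assumptions; the id shown is made up):

```sql
-- Export only rows added since the first pass:
SELECT * FROM messages
    WHERE id > 239000000          -- hypothetical high-water mark
    INTO OUTFILE '/tmp/messages_delta.dump';

-- Import the delta into the new table on each server:
LOAD DATA INFILE '/tmp/messages_delta.dump' INTO TABLE messages_new;
```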
At this point, we only had an hour or so of new data to catch up with, which we deemed small enough to run in a single transaction. So it was time for "the big switch"... The process was:
- Lock any message tables so no new messages got written;
- Sync any missing messages by running an `INSERT INTO ... SELECT FROM` straight on the master and allowing it to replicate;
- Re-run any message deletes and reads that had happened since the initial data export (we'd had the foresight to start recording all these before the export);
- Rename the existing `messages` table to `messages_old`, and `messages_new` to `messages` (i.e. put the new table into service);
- Unlock all tables.
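The switch-over steps above can be sketched as follows (the high-water-mark id is hypothetical, and locking semantics vary by MySQL version):

```sql
-- 1. Block new message writes:
LOCK TABLES messages WRITE, messages_new WRITE;

-- 2. Sync the last hour or so of messages in one batch:
INSERT INTO messages_new
    SELECT * FROM messages WHERE id > 240000000;  -- hypothetical mark

-- 3. ...re-apply the recorded deletes and read-flag updates here...

-- 4. Swap the tables. RENAME TABLE performs both renames atomically,
--    so no query ever sees the table missing. (Note: some MySQL
--    versions refuse RENAME TABLE while LOCK TABLES is in effect;
--    there the rename must run after UNLOCK, and the atomic rename
--    itself briefly blocks concurrent access instead.)
RENAME TABLE messages TO messages_old, messages_new TO messages;

-- 5. Let queued queries through:
UNLOCK TABLES;
```

The atomic double rename is the key trick: clients queued on the lock resume against the new table without ever observing an intermediate state.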
This whole process took less than 30 seconds on the master, and the same again to replicate across all slaves. Any queries which had backed up because of the table locks got pushed through as soon as the lock was released, so other than a small wait, no one would be any the wiser.
So, we're now running on the new table schema, and everything's running smoothly. It's been an interesting couple of days: we've not performed any serious maintenance on tables of anywhere near that size before, but it's something that we'll be doing more and more as the service grows. And it's great to be able to say that we completed the migration with zero system downtime.