The effect of temporary tables on MySQL's replication
6/Feb 2010
The other day I needed to set up a new set of MySQL instances at work that would replicate from an existing node. I was setting these up because the master node is running out of disk space and has become very slow.
Usually, when you need to restore a database onto a new node, you do it in three steps:
Install the binaries.
Load the initial data from the most recent MySQL backup.
Set it replicating from one of the nodes by specifying the binary log file and position from which you want it to replicate (which usually corresponds to the day you took the backup), as sketched below.
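For what it’s worth, step 3 normally boils down to a CHANGE MASTER TO plus START SLAVE on the new slave. A minimal sketch only; the host, credentials, log file and position here are made up and depend on your backup:
../mysql/bin/mysql -u root -ppassword -e "
  CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='replpassword',
    MASTER_LOG_FILE='master-bin.1853',
    MASTER_LOG_POS=4;
  START SLAVE;"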
Now, because we usually compress the binary logs, and because the master didn’t have enough disk space to hold all these binary logs uncompressed (such that the new slave could replicate by connecting to the master and speaking the MySQL protocol), I needed to transfer them to the new slave and pipe them into MySQL from there. Seems simple, huh?
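By “transfer them” I mean roughly something like this (just a sketch; the host, the paths and the assumption that the logs were gzipped are mine):
scp 'master:/var/lib/mysql/binlogs/master-bin.18*.gz' .
gunzip master-bin.18*.gz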
Everything went fine with points 1 and 2. But then, while piping the contents of the MySQL binary logs into the new database, it all went wrong. This is what I used to pipe them:
for file in master-bin* ; do echo "processing $file" ; ../mysql/bin/mysqlbinlog "$file" | ../mysql/bin/mysql -u root -ppassword ; done
Which is how you usually do these things, but this is what I got:
db@slave:~/binlogs$ for file in master-bin* ; do echo "processing $file" ; ../mysql/bin/mysqlbinlog "$file" | ../mysql/bin/mysql -u root -ppassword ; done
processing master-bin.1853
processing master-bin.1854
processing master-bin.1855
processing master-bin.1856
processing master-bin.1857
processing master-bin.1858
processing master-bin.1859
processing master-bin.1860
ERROR 1146 at line 10024: Table 'av.a2' doesn't exist
processing master-bin.1861
ERROR 1216 at line 1378: Cannot add or update a child row: a foreign key constraint fails
processing master-bin.1862
ERROR 1216 at line 22825: Cannot add or update a child row: a foreign key constraint fails
processing master-bin.1863
So, table av.a2 does not exist. WTF?
Investigating this table a bit, it seems there’s a script which executes the following against it every day:
...
if test $ZZTEST -lt 300000; then
echo "ERROR: Less than 300k"
exit 1
fi
cat > sql << EOF
create temporary table a1 (mzi char(16) default null, key mzi(mzi));
create temporary table a2 (mzi char(16) default null, key mzi(mzi));
create temporary table a3 (mzi char(16) default null, key mzi(mzi));
EOF
cat v | grep ^447.........$ | awk '
...
Now, if you read about CREATE TEMPORARY TABLE in the MySQL docs, you’ll see that temporary tables are only visible to the current connection and are dropped automatically when that connection closes. There are a few known problems with replication and temporary tables, but this couldn’t possibly be the same problem, as these were the binary logs taken from the master. So, what’s going on here?
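You can see this per-connection behaviour for yourself. A quick sketch (assuming a test database exists; the table name is made up): the second mysql invocation is a new connection, so it can no longer see the temporary table created by the first one:
../mysql/bin/mysql -u root -ppassword test -e "create temporary table t1 (id int); select count(*) from t1;"
../mysql/bin/mysql -u root -ppassword test -e "select count(*) from t1;"   # fails: the temporary table is gone (ERROR 1146)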
The problem here comes from the way the binary logs are rotated and the way I was replaying them. It just so happened that the three SQL statements:
create temporary table a1 (mzi char(16) default null, key mzi(mzi));
create temporary table a2 (mzi char(16) default null, key mzi(mzi));
create temporary table a3 (mzi char(16) default null, key mzi(mzi));
were executed at the very end of binary log file master-bin.1859, and then a statement in master-bin.1860 (inserting data into av.a2) failed because it expected those temporary tables to exist, and they didn’t. This happened because the bash for loop opens one mysql connection per binary log file: when master-bin.1859 finished, its connection closed and MySQL automatically dropped the three temporary tables, so on the next connection (file master-bin.1860) those tables were missing.
There are a few ways in which you can work around this.
One approach is to get one big sql file and pipe that into MySQL, something like:
for file in master-bin.1* ; do echo "Using $file" ; ../mysql/bin/mysqlbinlog "$file" >> all.sql ; date ; done
cat all.sql | ../mysql/bin/mysql -u root -ppassword
Alternatively, if you want to avoid creating one big fat file, you can do something like:
(for file in master-bin.1* ; do echo "Using $file" 1>&2 ; ../mysql/bin/mysqlbinlog "$file" ; date 1>&2 ; done) | ../mysql/bin/mysql -u root -ppassword
Either way should work, since in both cases there’s only going to be one connection.
But these approaches have an obvious drawback: if something goes wrong, it’s very hard to see exactly what failed and where; the replay will fail on one of the files and then keep failing through the rest of them.
The better approach, as discussed in High Performance MySQL, is to use a log server: that is, a MySQL server that doesn’t store any data itself and is used only to replay binary logs. That way you avoid this problem, and you can also interact with the server and its diagnostic messages if something goes awry.
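Very roughly, the log server idea is to give an empty, dedicated MySQL instance the master’s binary logs as if they were its own, and then point the new slave at it with a normal CHANGE MASTER TO. The following is only a sketch of that idea, not a recipe; the paths, host names and log-bin basename are all assumptions:
# With the log server shut down and configured with log-bin=/data/logserver/master-bin and no real data:
cp master-bin.18* /data/logserver/
ls /data/logserver/master-bin.18* > /data/logserver/master-bin.index
# Start the log server, then on the new slave:
../mysql/bin/mysql -u root -ppassword -e "CHANGE MASTER TO MASTER_HOST='logserver', MASTER_USER='repl', MASTER_PASSWORD='replpassword', MASTER_LOG_FILE='master-bin.1853', MASTER_LOG_POS=4; START SLAVE;"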
Use temporary tables?
My advice here would be to encourage you not to use CREATE TEMPORARY TABLE at all, because it can break replication in mysterious ways, but that may be too harsh. There are a few workarounds that can be applied at the application level, described in http://www.xaprb.com/blog/2007/05/11/how-to-eliminate-temporary-tables-in-mysql/, which I think are worth considering.
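As one example of that kind of rewrite (my own sketch, with made-up table names), a temporary table that only holds an intermediate result can often be folded into the query as a derived table, so there’s no CREATE TEMPORARY TABLE left to trip over:
-- With a temporary table:
create temporary table a1 (mzi char(16) default null, key mzi(mzi));
insert into a1 select mzi from numbers where mzi like '447%';
select c.* from calls c join a1 on a1.mzi = c.mzi;
-- The same thing as a single statement, with no temporary table involved:
select c.* from calls c
join (select mzi from numbers where mzi like '447%') as a1 on a1.mzi = c.mzi;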
Any experience with these problems?