A Little Noise

March 7, 2015

log event entry exceeded max_allowed_packet

Filed under: MySQL — snoyes @ 1:07 pm

Sometimes replication halts with an error like:

Slave I/O: Got fatal error 1236 from master when reading data from binary log
Error reading packet from server: log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master;

If it’s the SQL thread instead of the I/O thread, it might complain about ‘Event too big’. The error could also be the other direction, complaining of ‘Event too small’.

I rarely see this error actually have anything to do with max_allowed_packet. You can check the largest event in the master’s binary log file. Even though the binary log records the length in each entry, mysqlbinlog doesn’t expose it, so we have to do the math ourselves:

mysqlbinlog mysql-bin.00XXX | gawk '/^# at / { diff = $3 - prev; prev = $3; } (diff > max) { max = diff } END { print max }' -

(On Windows, swap the single quotes for double quotes; in a Unix shell, double quotes would let the shell expand $3 before gawk ever sees it.)
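If gawk isn't handy, the same computation can be sketched in Python (max_event_size is my name for it; it reads the "# at N" offset comments that mysqlbinlog prints before each event):

```python
import re

def max_event_size(lines):
    """Find the largest gap between consecutive '# at N' offsets in
    mysqlbinlog output -- i.e. the size of the largest event."""
    prev = None
    biggest = 0
    for line in lines:
        m = re.match(r'# at (\d+)', line)
        if m:
            pos = int(m.group(1))
            if prev is not None and pos - prev > biggest:
                biggest = pos - prev
            prev = pos
    return biggest

# A tiny made-up sample of mysqlbinlog output:
demo = ['# at 4', '# at 123', "### INSERT INTO t VALUES (1)", '# at 127']
# max_event_size(demo) == 119
```

Pipe real output in the same way as the gawk version, e.g. feed each line of `mysqlbinlog mysql-bin.00XXX` to the function.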

If the result is larger than max_allowed_packet, then the problem and solution are exactly what the error message says. If it’s smaller, or if that also gives you an error about the length, then read on.

Assuming that’s not the problem, here’s what may be happening:

After the 4-byte magic number at the very beginning of the binary log, every entry in the binary log (regardless of the binlog_format) starts with a 19-byte header.

  1. 4 bytes for the timestamp
  2. 1 byte for the event type
  3. 4 bytes for the server_id that wrote the event
  4. 4 bytes for the length of the event (including the header)
  5. 4 bytes for the position where the next event starts (which should be the position where this event starts plus the length of this event)
  6. 2 bytes for some flags that are usually set to 0.

These are all written little-endian. An example:

123e d254 0201 0000 005b 0000 0083 9f3c 3300 00…

4 byte timestamp: 123e d254 = 1423064594 = 2015-02-04 09:43:14 CST
1 byte type code: 02 = QUERY_EVENT
4 byte server id: 01 0000 00 = 1
4 byte length: 5b 0000 00 = 91 bytes
4 byte next pos: 83 9f3c 33 = 859,611,011
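That arithmetic can be checked with a short Python sketch (the '<' prefix in the struct format means little-endian with no padding; parse_header is just an illustrative name):

```python
import struct

# The 19-byte header: timestamp, type, server_id, length, next_pos, flags,
# all little-endian.
HEADER = struct.Struct('<IBIIIH')

def parse_header(data):
    """Unpack the first 19 bytes of a binlog event header."""
    ts, etype, server_id, length, next_pos, flags = HEADER.unpack(data[:19])
    return {'timestamp': ts, 'type': etype, 'server_id': server_id,
            'length': length, 'next_pos': next_pos, 'flags': flags}

# The example header from above.
raw = bytes.fromhex('123ed25402010000005b000000839f3c330000')
h = parse_header(raw)
# h['timestamp'] == 1423064594, h['type'] == 2, h['server_id'] == 1,
# h['length'] == 91, h['next_pos'] == 859611011, h['flags'] == 0
```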

Replication works like this:

  1. the master executes a statement and records the statement or resulting row events in its binary log
  2. the slave’s IO thread says to the master, “I’m at position X. What’s next?”
  3. the master reads the binary log and passes the statement or row events to the slave
  4. the slave’s IO thread writes what it got from the master into its relay log
  5. the slave’s SQL thread reads the relay log and executes the statement or row events

Occasionally, something goes wrong somewhere in that chain:

  • Sometimes a piece of the event never makes it into the binary log (the header says “I contain 100 bytes”, but then only 90 bytes appear in the file before the next event begins).
  • Sometimes the slave asks the master for the wrong position (someone has been mucking about with CHANGE MASTER TO and using the wrong numbers?)
  • Sometimes the event doesn’t make it across the wire from the master to the slave in one piece (network problems?)

However it happens, we end up in the wrong place in the stream of bits, usually somewhere in the middle of the next event’s data section. The slave doesn’t know it’s in the wrong place (there are no sign posts or magic numbers to identify an event), so it just reads the next 19 bytes it sees as if they were the event header. Let’s see what happens when just one byte has gone missing:

3ed2 5402 0100 0000 5b00 0000 839f 3c33 0000… (the same stream as before, with its first byte, 12, gone)

4 byte timestamp: 3ed2 5402 = 39113278 = 1971-03-29 11:47:58 CST
1 byte type code: 01 = START_EVENT_V3
4 byte server id: 0000 005b = 1526726656
4 byte length: 0000 0083 = 2,197,815,296 bytes
4 byte next pos: 9f3c 3300 = 3,357,855
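The same struct-unpacking sketch shows the damage: drop one byte and every field turns to garbage.

```python
import struct

# The example header plus the next byte of the stream; then discard the
# first byte as if it were lost in transit.
raw = bytes.fromhex('123ed25402010000005b000000839f3c33000000')
shifted = raw[1:]
ts, etype, server_id, length, next_pos, flags = struct.unpack('<IBIIIH', shifted[:19])
# length == 2197815296 -- over 2GB, far beyond any legal max_allowed_packet
```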

So the slave gets this header and thinks the event length is over 2GB, which is double the highest possible max_allowed_packet, so it throws an error. The real problem isn’t that the event is too big, but that it’s reading the wrong place and interpreting the wrong data as a length.

It would be nice if the slave did a sanity check, noticed that the current position plus the event length doesn’t equal the next position, and produced a more helpful error message.
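The invariant itself is trivial to express (header_is_sane is my name; the numbers come from the worked example above, where the good event ends at 859,611,011 and is 91 bytes long):

```python
def header_is_sane(event_pos, length, next_pos):
    """An event's start position plus its recorded length should equal
    the recorded start position of the next event."""
    return event_pos + length == next_pos

# The good header: the event starts at 859611011 - 91 = 859610920.
# header_is_sane(859610920, 91, 859611011) is True.
# The misread header fails no matter where we think we are:
# header_is_sane(859610921, 2197815296, 3357855) is False.
```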

From MySQL 5.6.6 on, by default a checksum is written at the end of each event in the log, which would also catch most of these kinds of errors. But neither the master nor the slave examines those checksums by default. Consider enabling --master-verify-checksum on the master and --slave-sql-verify-checksum on the slave.

How do you fix it?

First check whether the master’s binary log is OK. Run it through mysqlbinlog and redirect the output to NUL or /dev/null. If it makes it through the whole thing with no errors, then the problem is just with the slave’s relay log or file position. If you’re not using GTIDs, use CHANGE MASTER TO and give it the Relay_Master_Log_File and Exec_Master_Log_Pos from SHOW SLAVE STATUS. That discards the old relay logs and copies them fresh from the master. Hopefully this time the slave will get the whole event properly and the problem will go away. If you are using GTIDs, then a RESET SLAVE will discard the relay logs, and when you START SLAVE the slave will figure out on its own what it needs from the master. If you have to do this frequently, check your network.

If the master’s binary log produces errors when you run it through mysqlbinlog, recovery is not so easy. There have been a few bugs where the master doesn’t record the whole event properly, often having to do with binlogs that exceed 4GB in size. You (or the people you pay for MySQL support) might be able to examine the binary log file, figure out exactly what statement has been corrupted, run it directly on the slave, find the position where the next good event begins, and continue from there. If you can’t recover that statement but you know the tables it affected, you might be able to just copy those few tables from the master to the slave, and have replication ignore them until it catches up to the time of the copy. Or you can use some utility to compare the two servers and fix any differences. Or you can copy all the data from the master and set up the slave again as if it was brand new.

Whatever you do, don’t just set SQL_SLAVE_SKIP_COUNTER=1 or slave_skip_errors and hope the problem goes away. It will only make it worse.

1 Comment »

  1. Hey! Is there any way to reproduce this issue?

    Comment by Sangeeth Devaiah — December 20, 2018 @ 4:17 am
