A Little Noise

22 Apr 2015

Breakpoints for stored procedures and functions

Here's how to make a stored procedure or function pause and wait for a signal from another session, without creating a table to pass the state around (really just an excuse to use the named locks feature).

DELIMITER //
DROP FUNCTION IF EXISTS SET_BREAKPOINT//
CREATE FUNCTION SET_BREAKPOINT()
RETURNS tinyint
NO SQL
BEGIN
	-- Acquire lock 1
	-- Wait until lock 2 is taken to signal that we may continue
	DO GET_LOCK(CONCAT('lock_1_', CONNECTION_ID()), -1);
	REPEAT
		DO 1;
	UNTIL IS_USED_LOCK(CONCAT('lock_2_', CONNECTION_ID())) END REPEAT;
	DO RELEASE_LOCK(CONCAT('lock_1_', CONNECTION_ID()));

	-- Acquire lock 3 to acknowledge message to continue.
	-- Wait for lock 2 to be released as signal of receipt.
	DO GET_LOCK(CONCAT('lock_3_', CONNECTION_ID()), -1);
	REPEAT
		DO 1;
	UNTIL IS_FREE_LOCK(CONCAT('lock_2_', CONNECTION_ID())) END REPEAT;
	DO RELEASE_LOCK(CONCAT('lock_3_', CONNECTION_ID()));

	RETURN 1;
END//

DROP FUNCTION IF EXISTS NEXT_BREAKPOINT//
CREATE FUNCTION NEXT_BREAKPOINT(connection_id int)
RETURNS tinyint
NO SQL
BEGIN
	-- Acquire lock 2 as a signal to go past the breakpoint
	-- Wait until lock 3 is taken as signal of receipt.
	DO GET_LOCK(CONCAT('lock_2_', connection_id), -1);
	REPEAT
		DO 1;
	UNTIL IS_USED_LOCK(CONCAT('lock_3_', connection_id)) END REPEAT;
	DO RELEASE_LOCK(CONCAT('lock_2_', connection_id));

	RETURN 1;
END//

DROP PROCEDURE IF EXISTS test_the_breakpoints//
CREATE PROCEDURE test_the_breakpoints()
NO SQL
BEGIN
	SELECT CONCAT('In another session: DO NEXT_BREAKPOINT(', CONNECTION_ID(), ');') as `instructions`;

	DO SET_BREAKPOINT();

	SELECT 'do it again' as `now:`;

	DO SET_BREAKPOINT();

	SELECT 'end' as `the`;
END//
DELIMITER ;

CALL test_the_breakpoints();
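As a rough illustration, one round of the three-lock handshake above can be modeled with two events in Python (a toy, single-round sketch; all names here are invented):

```python
import threading
import time

# Toy model of one round of the handshake: 'go' plays the role of
# lock_2 (the signal to continue) and 'ack' plays lock_3 (the receipt).
go = threading.Event()
ack = threading.Event()
log = []

def set_breakpoint():
    log.append('waiting at breakpoint')
    go.wait()        # UNTIL IS_USED_LOCK('lock_2_...')
    ack.set()        # DO GET_LOCK('lock_3_...')
    log.append('continuing')

def next_breakpoint():
    go.set()         # DO GET_LOCK('lock_2_...')
    ack.wait()       # UNTIL IS_USED_LOCK('lock_3_...')

session = threading.Thread(target=set_breakpoint)
session.start()
time.sleep(0.1)      # the "other session" arrives a moment later
next_breakpoint()
session.join()
print(log)           # ['waiting at breakpoint', 'continuing']
```

The SQL version has to busy-wait on IS_USED_LOCK because named locks have no "wait until someone takes this" primitive; the Event objects here hide that detail.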
Filed under: MySQL
7 Mar 2015

log event entry exceeded max_allowed_packet

Sometimes replication halts with an error like:

Slave I/O: Got fatal error 1236 from master when reading data from binary log
Error reading packet from server: log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master;

If it's the SQL thread instead of the I/O thread, it might complain about 'Event too big'. The error can also go the other direction, complaining of 'Event too small'.

I rarely see this error actually have anything to do with max_allowed_packet. You can check the largest event in the master's binary log file. Even though the binary log records the length in each entry, mysqlbinlog doesn't expose it, so we have to do the math ourselves:

mysqlbinlog mysql-bin.00XXX | sed -n 's/^# at \([0-9]\+$\)/\1/p' \
| gawk '{d=$0-p; p=$0} (d>m) {m=d} END {print m}' -

If the result is larger than max_allowed_packet, then the problem and solution are exactly what the error message says. If it's smaller, or if that also gives you an error about the length, then read on.
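The same largest-gap arithmetic can be sketched in Python, feeding it the '# at N' offset lines from mysqlbinlog output (the function name and sample offsets here are mine):

```python
import re

def max_event_length(lines):
    """Return the largest gap between consecutive '# at N' offsets,
    i.e. the size of the biggest event in the mysqlbinlog output."""
    prev = None
    biggest = 0
    for line in lines:
        m = re.match(r'# at (\d+)$', line.strip())
        if m:
            pos = int(m.group(1))
            if prev is not None and pos - prev > biggest:
                biggest = pos - prev
            prev = pos
    return biggest

# A few fake offset lines for illustration:
sample = ['# at 4', '# at 120', '# at 516', 'BEGIN', '# at 588']
print(max_event_length(sample))  # 396
```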

Assuming that's not the problem, here's what may be happening:

After the 4-byte magic number at the very beginning of the binary log, every entry in the binary log (regardless of the binlog_format) starts with a 19-byte header.
- 4 bytes for the timestamp
- 1 byte for the event type
- 4 bytes for the server_id that wrote the event
- 4 bytes for the length of the event (including the header)
- 4 bytes for the position where the next event starts (which should be the position where this event starts plus the length of this event)
- 2 bytes for some flags that are usually set to 0.

These are all written little-endian. An example:

123e d254 0201 0000 005b 0000 0083 9f3c 3300 00...

4 byte timestamp: 123e d254 = 1423064594 = 2015-02-04 09:43:14 CST
1 byte type code: 02 = QUERY_EVENT
4 byte server id: 01 0000 00 = 1
4 byte length: 5b 0000 00 = 91 bytes
4 byte next pos: 83 9f3c 33 = 859,611,011
2 byte flags: 0000 = 0
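That decode is easy to verify with Python's struct module, unpacking the 19 example bytes little-endian per the layout above:

```python
import struct

# The 19 header bytes from the example, including the two flag bytes:
header = bytes.fromhex('123ed25402010000005b000000839f3c330000')
timestamp, type_code, server_id, length, next_pos, flags = \
    struct.unpack('<IBIIIH', header)

print(timestamp)  # 1423064594
print(type_code)  # 2  (QUERY_EVENT)
print(server_id)  # 1
print(length)     # 91
print(next_pos)   # 859611011
print(flags)      # 0
```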

Replication works like this:

- the master executes a statement and records the statement or resulting row events in its binary log
- the slave's IO thread says to the master, "I'm at position X. What's next?"
- the master reads the binary log and passes the statement or row events to the slave
- the slave's IO thread writes what it got from the master into its relay log
- the slave's SQL thread reads the relay log and executes the statement or row events

Occasionally, something goes wrong somewhere in that chain. Sometimes a piece of the event never makes it into the binary log (the header says "I contain 100 bytes", but then only 90 bytes appear in the file before the next event begins). Sometimes the slave asks the master for the wrong position (someone has been mucking about with CHANGE MASTER TO and using the wrong numbers?) Sometimes the event doesn't make it across the wire from the master to the slave in one piece (network problems?)

However it happens, we end up in the wrong place in the stream of bits, usually somewhere in the middle of the next event's data section. The slave doesn't know it's in the wrong place (there are no sign posts or magic numbers to identify an event), so it just reads the next 19 bytes it sees as if they were the event header. Let's see what happens when just one byte has gone missing:

123e d254 0201 0000 005b 0000 0083 9f3c 3300

4 byte timestamp: 3ed2 5402 = 39113278 = 1971-03-29 11:47:58 CST
1 byte type code: 01 = START_EVENT_V3
4 byte server id: 0000 005b = 1526726656
4 byte length: 0000 0083 = 2,197,815,296 bytes
4 byte next pos: 9f3c 3300 = 3,357,855

So the slave gets this header and thinks the event length is over 2GB, which is double the highest possible max_allowed_packet, so it throws an error. The real problem isn't that the event is too big, but that it's reading the wrong place and interpreting the wrong data as a length.
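Re-parsing the same bytes starting one byte late reproduces the bogus numbers (a sketch; only the first 17 bytes are decoded, matching the fields shown above):

```python
import struct

# The example header bytes with the first byte (0x12) gone missing:
shifted = bytes.fromhex('3ed25402010000005b000000839f3c3300')
timestamp, type_code, server_id, length, next_pos = \
    struct.unpack('<IBIII', shifted)

print(length)                # 2197815296, i.e. "over 2GB"
print(length > 2 * 1024**3)  # True: larger than any max_allowed_packet
```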

It would be nice if it would do a sanity check and notice that the current position plus the length doesn't equal the next position and produce a more helpful error message.

From MySQL 5.6.6 on, by default a checksum is written at the end of each event in the log, which would also catch most of these kinds of errors. But neither the master nor the slave examine those checksums by default. Consider enabling --master-verify-checksum on the master and --slave-sql-verify-checksum on the slave.

How do you fix it?

First check if the master's binary log is OK. Just run it through mysqlbinlog and redirect the output to NUL or /dev/null. If it makes it through the whole thing with no errors, then the problem is just with the slave's relay log or file position. If you're not using GTIDs, then use CHANGE MASTER TO and give it the Exec_Master_Log_Pos from SHOW SLAVE STATUS. It will discard the old relay logs and copy them fresh from the master. Hopefully this time it will get the whole event properly and the problem will go away. If you are using GTIDs, then a RESET SLAVE will discard the relay logs, and when you START SLAVE it will figure out on its own what it needs from the master. If you have to do this frequently, check your network.

If the master's binary log produces errors when you run it through mysqlbinlog, recovery is not so easy. There have been a few bugs where the master doesn't record the whole event properly, often having to do with binlogs that exceed 4GB in size. You (or the people you pay for MySQL support) might be able to examine the binary log file, figure out exactly what statement has been corrupted, run it directly on the slave, find the position where the next good event begins, and continue from there. If you can't recover that statement but you know the tables it affected, you might be able to just copy those few tables from the master to the slave, and have replication ignore them until it catches up to the time of the copy. Or you can use some utility to compare the two servers and fix any differences. Or you can copy all the data from the master and set up the slave again as if it was brand new.

Whatever you do, don't just set SQL_SLAVE_SKIP_COUNTER=1 or slave_skip_errors and hope the problem goes away. It will only make it worse.

Filed under: MySQL
6 Nov 2014

Undelete rows with the binary log

Can you use the binary logs to undo a DELETE? Maybe, if you're using ROW format logging. The difference between a delete and an insert event could be just one byte - the one that maps it as a DELETE_ROWS_EVENT or a WRITE_ROWS_EVENT. Let's try it.

I've already populated this table with a few rows:

CREATE TABLE `undo_test` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`t` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`v` varchar(20) DEFAULT NULL,
`b` blob,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;

Save a hash so we can see if the undo really works later:

mysql -e "SELECT * FROM test.undo_test" | md5sum > before.md5

Delete an unwanted row:

DELETE FROM undo_test;
Query OK, 1693 rows affected (0.14 sec)

Oops! Forgot the WHERE clause! And of course I don't have any recent backups. Is the data really gone?

mysql> select count(*) from undo_test;
+----------+
| count(*) |
+----------+
| 0 |
+----------+

But maybe there's hope. Find the delete event in the binary log.

mysql> SHOW BINLOG EVENTS;
+------------+--------+-------------+-----------+-------------+-----
| Log_name   | Pos    | Event_type  | Server_id | End_log_pos | Info
+------------+--------+-------------+-----------+-------------+-----
| bin.000001 |      4 | Format_desc |         1 |         120 | Server ver: 5.6.20-log, Binlog ver: 4
<snip>
| bin.000001 |    516 | Query       |         1 |         588 | BEGIN
| bin.000001 |    588 | Table_map   |         1 |         647 | table_id: 70 (test.undo_test)
| bin.000001 |    647 | Delete_rows |         1 |        8859 | table_id: 70
| bin.000001 |   8859 | Delete_rows |         1 |       16932 | table_id: 70
| bin.000001 |  16932 | Delete_rows |         1 |       25040 | table_id: 70
| bin.000001 |  25040 | Delete_rows |         1 |       33244 | table_id: 70
| bin.000001 |  33244 | Delete_rows |         1 |       41408 | table_id: 70
| bin.000001 |  41408 | Delete_rows |         1 |       49549 | table_id: 70
| bin.000001 |  49549 | Delete_rows |         1 |       57764 | table_id: 70
| bin.000001 |  57764 | Delete_rows |         1 |       65956 | table_id: 70
| bin.000001 |  65956 | Delete_rows |         1 |       74096 | table_id: 70
| bin.000001 |  74096 | Delete_rows |         1 |       82237 | table_id: 70
| bin.000001 |  82237 | Delete_rows |         1 |       90445 | table_id: 70
| bin.000001 |  90445 | Delete_rows |         1 |       98586 | table_id: 70
| bin.000001 |  98586 | Delete_rows |         1 |      106728 | table_id: 70
| bin.000001 | 106728 | Delete_rows |         1 |      114886 | table_id: 70
| bin.000001 | 114886 | Delete_rows |         1 |      123085 | table_id: 70
| bin.000001 | 123085 | Delete_rows |         1 |      131290 | table_id: 70
| bin.000001 | 131290 | Delete_rows |         1 |      139476 | table_id: 70
| bin.000001 | 139476 | Delete_rows |         1 |      146945 | table_id: 70 flags: STMT_END_F
| bin.000001 | 146945 | Xid         |         1 |      146976 | COMMIT /* xid=36 */

The delete starts at position 516 and ends at 146976. We can pass those positions to mysqlbinlog to get just this delete event, along with the necessary header so mysql knows how to interpret the event.

From sql/log_event.h we see that a delete is 0x20 and an insert is 0x1E. This gawk program will parse the mysqlbinlog output and make the necessary changes.

undelete.awk:

BEGIN {
	base64cmd = "xxd -r -p | base64"
	inHexSection = 0
	handledHexSection = 0
	inBinlogSection = 0 
}

/# Position  Timestamp/ && !handledHexSection { 
	inHexSection = 1 
}

!handledHexSection && inHexSection && ($7 == "20") { 
	hexString = $3 $4 $5 $6 $7 $8 $9 $10 $11
	print hexString |& base64cmd
	close(base64cmd, "to")
	base64cmd |& getline oldBase64String
	close(base64cmd)


	$7 = "1E" 
	hexString = $3 $4 $5 $6 $7 $8 $9 $10 $11
	print hexString |& base64cmd
	close(base64cmd, "to")
	base64cmd |& getline newBase64String
	close(base64cmd)

	inHexSection = 0
	handledHexSection = 1
}

/BINLOG '/,/'\/\*!\*\/;/ { 
	inBinlogSection = 1 
	handledHexSection = 0
}

/'\/\*!\*\/;/ { inBinlogSection = 0 }

inBinlogSection && substr($0, 1, 12) == oldBase64String { $0 = newBase64String substr($0, 13) }

{ print $0 }



Pass the event through that code, and then back into mysql:

mysqlbinlog --hexdump --start-position=516 --stop-position=146976 bin.000001 | gawk -f undelete.awk | mysql

Compare it to the original hash to see if it's really back:

mysql -e "SELECT * FROM test.undo_test" | md5sum -c before.md5
-: OK

Looks like it worked.
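The heart of that awk program, flipping the fifth header byte inside the base64-encoded event, can be illustrated in Python (the 9-byte header prefix here is made up):

```python
import base64

DELETE_ROWS_EVENT = 0x20  # from sql/log_event.h
WRITE_ROWS_EVENT = 0x1E

def delete_to_insert(b64_event):
    """Rewrite the event-type byte (offset 4 of the header) so a
    Delete_rows event replays as Write_rows."""
    raw = bytearray(base64.b64decode(b64_event))
    assert raw[4] == DELETE_ROWS_EVENT, 'not a delete event'
    raw[4] = WRITE_ROWS_EVENT
    return base64.b64encode(bytes(raw)).decode()

# Fake 9-byte header prefix: timestamp, type 0x20, server_id.
# 9 bytes encode to exactly 12 base64 characters, which is why the
# awk program swaps only the first 12 characters of the BINLOG line.
header = bytes([0x12, 0x3e, 0xd2, 0x54, 0x20, 0x01, 0x00, 0x00, 0x00])
encoded = base64.b64encode(header).decode()
flipped = base64.b64decode(delete_to_insert(encoded))
print(hex(flipped[4]))  # 0x1e
```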

Filed under: MySQL
1 Oct 2014

Ambiguous column names (not that kind)

Time for another quiz to impress your friends, if you have that kind of friends.

The manual page in question

mysql --skip-enable-disable-column-names=0 -e "select 1"
mysql --skip-disable-enable-column-names=0 -e "select 1"
mysql --loose-disable-enable-skip-column-names=0 -e "select 1"

What's the result? An error about unknown options? A warning? Column headers enabled? Disabled?


Filed under: MySQL Gotchas
25 Sep 2013

Pre-5.0.30 ADDTIME()

For when you want to add times greater than 839 hours.

CREATE FUNCTION ADDTIME_OLD(d datetime, t varchar(12))
RETURNS datetime
DETERMINISTIC
RETURN ADDTIME(
d + INTERVAL SUBSTRING_INDEX(t, ':', 1) HOUR,
CONCAT(IF(LEFT(t, 1) = '-', '-', ''), '00:', SUBSTRING_INDEX(t, ':', -2))
);

(Almost the same. I'm not going to figure out how to support the "you can use any delimiter you want" parsing that MySQL does.)
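The same decomposition, sketched in Python for comparison (a hypothetical helper that assumes an 'H:MM:SS' string, possibly signed):

```python
from datetime import datetime, timedelta

def addtime_old(d, t):
    """Add a (possibly signed) 'H:MM:SS' string to a datetime, where H
    may exceed the 838-hour limit of MySQL's TIME type."""
    sign = -1 if t.startswith('-') else 1
    hours, minutes, seconds = t.lstrip('-').split(':')
    return d + sign * timedelta(hours=int(hours),
                                minutes=int(minutes),
                                seconds=int(seconds))

# 900 hours is beyond ADDTIME()'s 838:59:59 range:
print(addtime_old(datetime(2013, 1, 1), '900:00:00'))
# 2013-02-07 12:00:00
```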

Filed under: MySQL
23 Jul 2013

mysqldump privileges required

"mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used. Certain options might require other privileges as noted in the option descriptions."
- http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html

Option Default Privileges Required
--add-drop-database Off  
--add-drop-table On in --opt  
--add-drop-trigger Off  
--add-locks On in --opt  
--all-databases Off SELECT, SHOW DATABASES ON *.*
--allow-keywords Off  
--apply-slave-statements Off  
--bind-address=ip_address Off  
--comments On  
--compact Off  
--compatible=name[,name,...] Off  
--complete-insert Off  
--create-options On in --opt  
--databases Off  
--debug[=debug_options] Off  
--debug-check Off  
--debug-info Off  
--default-auth=plugin Off  
--default-character-set=charset_name utf8/latin1  
--delayed-insert Off  
--delete-master-logs Off SUPER ON *.*
--disable-keys On in --opt  
--dump-date On in --comments  
--dump-slave[=value] Off SUPER or REPLICATION CLIENT ON *.*
--events Off EVENT
--extended-insert On in --opt  
--fields-enclosed-by=string '' in --tab  
--fields-escaped-by '\\' in --tab  
--fields-optionally-enclosed-by=string Off  
--fields-terminated-by=string '\t' in --tab  
--flush-logs Off RELOAD ON *.*
--flush-privileges Off  
--help Off  
--hex-blob Off  
--host localhost  
--ignore-table=db_name.tbl_name Off  
--include-master-host-port Off  
--insert-ignore Off  
--lines-terminated-by=string '\n' in --tab  
--lock-all-tables Off LOCK TABLES ON *.*
--lock-tables On in --opt LOCK TABLES
--log-error=file_name Off  
--login-path=name Off controlled at OS level
--master-data Off RELOAD ON *.*
SUPER or REPLICATION CLIENT ON *.*
--max_allowed_packet=value 24MB  
--net_buffer_length=value 1022KB  
--no-autocommit Off  
--no-create-db Off  
--no-create-info Off  
--no-data Off  
--no-set-names Off  
--no-tablespaces Off  
--opt On  
--order-by-primary Off  
--password[=password] Off  
--pipe Off  
--plugin-dir=path Off  
--port=port_num 3306  
--quick On in --opt  
--quote-names On  
--replace Off  
--result-file=file Off  
--routines Off SELECT ON mysql.proc
--set-charset On  
--set-gtid-purged=value Auto  
--single-transaction Off  
--skip-add-drop-table Off in --opt  
--skip-add-locks Off in --opt  
--skip-comments Off  
--skip-compact On  
--skip-disable-keys Off in --opt  
--skip-extended-insert Off in --opt  
--skip-opt Off  
--skip-quick Off in --opt  
--skip-quote-names Off  
--skip-set-charset Off  
--skip-triggers Off  
--skip-tz-utc Off  
--ssl-ca=file_name Off  
--ssl-capath=dir_name Off  
--ssl-cert=file_name Off  
--ssl-cipher=cipher_list Off  
--ssl-crl=file_name Off  
--ssl-crlpath=dir_name Off  
--ssl-key=file_name Off  
--ssl-verify-server-cert Off  
--tab=path Off  
--tables Off  
--triggers On TRIGGER since 5.1.23, SUPER ON *.* earlier
--tz-utc On  
--user=user_name system user on Linux, 'ODBC' on Windows  
--verbose Off  
--version Off  
--where='where_condition' Off  
--xml Off  
Filed under: MySQL FAQ
17 Jul 2013

Where’s my line?

mysql -e "select * from test.t where d < '2013-07-17 17:00:00'"
+---------------------+
| d                   |
+---------------------+
| 2013-07-17 15:34:19 |
+---------------------+

mysqldump -t --compact test t --where="d < '2013-07-17 17:00:00'"
(no output)

Where's my line?


Filed under: MySQL Gotchas
25 Apr 2013

Five Number Summary

From Freenode: how do you generate a five number summary in MySQL? There is no "median" aggregate function built in. You could do some clever things involving self joins or temporary tables, or build an aggregate UDF - see the comments section in the manual for those approaches.

Here's another way using a single query. Be sure to set group_concat_max_len high enough for your data, and since it relies on string manipulation, it's probably not a good choice if your data is millions of rows.

First, a helper function to get the Nth element of a comma-delimited string, just to make the query shorter:

CREATE FUNCTION LIST_ELEM(inString text, pos int) 
RETURNS TEXT DETERMINISTIC 
RETURN SUBSTRING_INDEX(SUBSTRING_INDEX(inString, ',', pos), ',', -1);


Now, fetching the min, max, median, and first and third quartiles for each group (quartile method 2: average the two elements straddling each cut point):

SELECT 

  groupId,

  GROUP_CONCAT(data ORDER BY data) AS dataSet,

  MIN(data) AS min,

  (
    LIST_ELEM(GROUP_CONCAT(data ORDER BY data), CEIL(COUNT(*)/4))
    + LIST_ELEM(GROUP_CONCAT(data ORDER BY data), FLOOR(COUNT(*)/4 + 1))
  ) / 2 AS q1,

  (
    LIST_ELEM(GROUP_CONCAT(data ORDER BY data), CEIL(COUNT(*)/2))
    + LIST_ELEM(GROUP_CONCAT(data ORDER BY data), FLOOR(COUNT(*)/2 + 1))
  ) / 2 AS median,

  (
    LIST_ELEM(GROUP_CONCAT(data ORDER BY data DESC), CEIL(COUNT(*)/4))
    + LIST_ELEM(GROUP_CONCAT(data ORDER BY data DESC), FLOOR(COUNT(*)/4 + 1))
  ) / 2 AS q3,

  MAX(data) AS max

FROM t 
GROUP BY groupId;
+---------+---------------------+------+------+--------+------+------+
| groupId | dataSet             | min  | q1   | median | q3   | max  |
+---------+---------------------+------+------+--------+------+------+
|       1 | 0,0,1,2,13,27,61,63 |    0 |  0.5 |    7.5 |   44 |   63 |
|       2 | 0,0,1,2,25          |    0 |    0 |      1 |    2 |   25 |
+---------+---------------------+------+------+--------+------+------+
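To sanity-check those numbers, here is the same method-2 computation in Python (the function name is mine):

```python
import math

def five_number_summary(data):
    """Min, Q1, median, Q3, max using the same 'method 2' quartiles as
    the query above: average the ceil(n/k)-th and floor(n/k + 1)-th
    elements (1-based) of the sorted list, with Q3 taken from the
    descending order, mirroring the GROUP_CONCAT(... DESC) trick."""
    s = sorted(data)
    d = sorted(data, reverse=True)
    n = len(s)

    def mid(seq, k):
        return (seq[math.ceil(n / k) - 1] + seq[math.floor(n / k + 1) - 1]) / 2

    return s[0], mid(s, 4), mid(s, 2), mid(d, 4), s[-1]

print(five_number_summary([0, 0, 1, 2, 13, 27, 61, 63]))  # (0, 0.5, 7.5, 44.0, 63)
print(five_number_summary([0, 0, 1, 2, 25]))              # (0, 0.0, 1.0, 2.0, 25)
```

Both results match the query output above.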
Filed under: MySQL
6 Feb 2013

Default user

It came up twice in two days: if you do not specify the user name when connecting, what gets picked?

The manual says:
"On Unix, most MySQL clients by default try to log in using the current Unix user name as the MySQL user name, but that is for convenience only."
http://dev.mysql.com/doc/refman/5.6/en/user-names.html

"The default user name is ODBC on Windows or your Unix login name on Unix."
http://dev.mysql.com/doc/refman/5.6/en/connecting.html

Let's be a little more specific. The relevant section of code is in libmysql/libmysql.c.

On Linux, we check the following in this order:
- if (geteuid() == 0), user is "root"
- getlogin()
- getpwuid(geteuid())
- environment variables $USER, $LOGNAME, $LOGIN
If none of those return non-NULL results, use "UNKNOWN_USER".

On Windows:
- environment variable $USER
If that's not set, use "ODBC".

I wondered why on Windows we check $USER but not $USERNAME. I gather that it's an ODBC thing.
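The Linux lookup order can be sketched in Python (an approximation of the C logic described above, not a translation):

```python
import os
import pwd

def default_mysql_user():
    """Approximate the Linux branch of libmysql's user-name lookup."""
    if os.geteuid() == 0:
        return 'root'
    try:
        return os.getlogin()
    except OSError:
        pass  # no controlling terminal, e.g. under cron
    try:
        return pwd.getpwuid(os.geteuid()).pw_name
    except KeyError:
        pass  # euid not in the password database
    for var in ('USER', 'LOGNAME', 'LOGIN'):
        if os.environ.get(var):
            return os.environ[var]
    return 'UNKNOWN_USER'

print(default_mysql_user())
```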

Filed under: MySQL
1 Oct 2012

Connection Conundrum

Define a user like this:

GRANT ALL ON *.* TO 'myuser'@'localhost' IDENTIFIED BY 'super$ecret';

Then try a PHP script like this:

<?php
mysqli_connect("localhost", "myuser", "super$ecret");
?>

What happens and why? How could you avoid it?

And for glory, what single line could you add that would prevent the error, without making any changes to the mysqli_connect line?