Saturday, December 27, 2014

Improvements for XA in MySQL 5.7

Today I was doing some tests with XA transactions in MySQL 5.6.

The output of the XA RECOVER command, which lists transactions, was hard to read because of the way the data column is represented.

The good news is that 5.7 has transaction information in performance_schema:
mysql> select trx_id, isolation_level, state, xid, xa_state, access_mode 
    -> from performance_schema.events_transactions_current;
+-----------------+-----------------+--------+--------+----------+-------------+
| trx_id          | isolation_level | state  | xid    | xa_state | access_mode |
+-----------------+-----------------+--------+--------+----------+-------------+
|            NULL | REPEATABLE READ | ACTIVE | x-1    | PREPARED | READ WRITE  |
| 421476507015704 | REPEATABLE READ | ACTIVE | NULL   | NULL     | READ WRITE  |
|            NULL | REPEATABLE READ | ACTIVE | foo-1  | ACTIVE   | READ WRITE  |
|            NULL | REPEATABLE READ | ACTIVE | NULL   | NULL     | READ ONLY   |
|            NULL | REPEATABLE READ | ACTIVE | 0x0101 | PREPARED | READ WRITE  |
+-----------------+-----------------+--------+--------+----------+-------------+
5 rows in set (0.00 sec)

This is not only more readable but allows you to use functions on the output. It is also much more complete as all transactions are listed here, not only XA transactions.
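
For example, to show only the transactions that are in the XA PREPARED state (a sketch against the column layout shown above, which is from a 5.7 development release):
mysql> SELECT xid, xa_state, access_mode
    ->   FROM performance_schema.events_transactions_current
    ->  WHERE xa_state = 'PREPARED';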
 
Update: performance_schema is not the only option. In 5.7 there is also a new clause to print the xid in hex for XA RECOVER:
mysql> XA RECOVER;
+----------+--------------+--------------+------+
| formatID | gtrid_length | bqual_length | data |
+----------+--------------+--------------+------+
|        1 |            2 |            0 |      |
+----------+--------------+--------------+------+
1 row in set (0.00 sec)

mysql> XA RECOVER CONVERT XID;
+----------+--------------+--------------+--------+
| formatID | gtrid_length | bqual_length | data   |
+----------+--------------+--------------+--------+
|        1 |            2 |            0 | 0x0001 |
+----------+--------------+--------------+--------+
1 row in set (0.00 sec)

Tuesday, November 4, 2014

Throttling MySQL Enterprise Backup with cgroups

Today I encountered a situation where MySQL Enterprise Backup caused so much load on the I/O subsystem of the server that the application became too slow to be usable. So I wanted to throttle the mysqlbackup process so it wouldn't cause any more issues.

The mysqlbackup command has settings for the number of read, write and process threads. The defaults are 1 read, 1 write and 6 process threads. So that isn't really useful for throttling, as I was already using the defaults.

Using the ionice utility wouldn't work as that requires the CFQ I/O scheduler.

I found a solution in this blog post: use cgroups on Linux. I had used cgroups before to test how a Galera setup behaves when one of the three servers has a much slower CPU. In the transcript below, the ls commands find the major and minor numbers (253:2) of the device holding the data, and the shell's PID ($$) is added to the cgroup so that mysqlbackup, started from that shell, inherits the I/O limits.

# mkdir /cgroup/blkio
# mount -t cgroup -o blkio non /cgroup/blkio
# cgcreate -g blkio:/mysqlbackup
# ls -lh /dev/mapper/vgdb01-lvdb01
lrwxrwxrwx 1 root root 7 Sep 26 14:22 /dev/mapper/vgdb01-lvdb01 -> ../dm-2
# ls -lh /dev/dm-2
brw-rw---- 1 root disk 253, 2 Sep 26 14:22 /dev/dm-2
# cgset -r blkio.throttle.read_iops_device="253:2 20" mysqlbackup
# cgset -r blkio.throttle.write_iops_device="253:2 20" mysqlbackup
# echo $$ > /cgroup/blkio/mysqlbackup/tasks
# cat /proc/$$/cgroup
1:blkio:/mysqlbackup
# mysqlbackup --user=root --password=xxxxxx --with-timestamp --backup-dir=/data/backup backup

This worked exactly as I had hoped and expected: both read and write operations were limited to 20 IOPS.
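
When the backup is done the cgroup can be removed again with the libcgroup tools; a minimal sketch, assuming nothing is left running in the group:
# cgdelete -g blkio:/mysqlbackup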

It turned out to be an issue with the storage system, so I won't have to use this in production, but I hope it will be of value to someone. It can also be useful in other situations.

Sunday, October 26, 2014

When your query is blocked, but there is no blocking query - Part 3

In the previous blog posts I talked about transactions which block other transactions while not doing anything, and about some possible solutions.

In this post I will show you how to get even more information about what is locked by a transaction.

As you might have noticed the information_schema.innodb_locks table doesn't show all locks. This is what the documentation says:
"The INNODB_LOCKS table contains information about each lock that an InnoDB transaction has requested but not yet acquired, and each lock that a transaction holds that is blocking another transaction."

So if you would like to know all the locks held by a transaction, this doesn't help.

There is a lot of information in the output of "SHOW ENGINE INNODB STATUS\G" in the section about TRANSACTIONS.


------------
TRANSACTIONS
------------
Trx id counter 8991
Purge done for trx's n:o < 8991 undo n:o < 0 state: running but idle
History list length 50
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 3, OS thread handle 0x7ff0c8727700, query id 202 localhost msandbox init
show engine innodb status
---TRANSACTION 8972, ACTIVE 1313 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 360, 1 row lock(s)
MySQL thread id 2, OS thread handle 0x7ff0c8768700, query id 106 localhost msandbox update
insert into t1 values(4)
------- TRX HAS BEEN WAITING 1313 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 18 page no 3 n bits 80 index `GEN_CLUST_INDEX` of table `test`.`t1` trx id 8972 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

------------------
---TRANSACTION 8970, ACTIVE 1652 sec
4 lock struct(s), heap size 1184, 10 row lock(s), undo log entries 9
MySQL thread id 1, OS thread handle 0x7ff0c87a9700, query id 21 localhost msandbox cleaning up

So there are 10 row locks held by transaction 8970, and transaction 8972 is waiting for a record lock on test.t1. That's nothing new; that information is also in INNODB_LOCKS and INNODB_LOCK_WAITS.

But if we set innodb_status_output_locks to ON, this changes.

mysql [information_schema] > set global innodb_status_output_locks=ON;
Query OK, 0 rows affected (0.00 sec)

mysql [information_schema] > show engine innodb status\G
...some output removed...
------------
TRANSACTIONS
------------
Trx id counter 8991
Purge done for trx's n:o < 8991 undo n:o < 0 state: running but idle
History list length 50
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 3, OS thread handle 0x7ff0c8727700, query id 204 localhost msandbox init
show engine innodb status
---TRANSACTION 8972, ACTIVE 1544 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 360, 1 row lock(s)
MySQL thread id 2, OS thread handle 0x7ff0c8768700, query id 106 localhost msandbox update
insert into t1 values(4)
------- TRX HAS BEEN WAITING 1544 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 18 page no 3 n bits 80 index `GEN_CLUST_INDEX` of table `test`.`t1` trx id 8972 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

------------------
TABLE LOCK table `test`.`t1` trx id 8972 lock mode IX
RECORD LOCKS space id 18 page no 3 n bits 80 index `GEN_CLUST_INDEX` of table `test`.`t1` trx id 8972 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

---TRANSACTION 8970, ACTIVE 1883 sec
4 lock struct(s), heap size 1184, 10 row lock(s), undo log entries 9
MySQL thread id 1, OS thread handle 0x7ff0c87a9700, query id 21 localhost msandbox cleaning up
TABLE LOCK table `test`.`t1` trx id 8970 lock mode IX
RECORD LOCKS space id 18 page no 3 n bits 80 index `GEN_CLUST_INDEX` of table `test`.`t1` trx id 8970 lock mode S
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Record lock, heap no 2 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000313; asc       ;;
 1: len 6; hex 000000001f4e; asc      N;;
 2: len 7; hex bd000001750110; asc     u  ;;
 3: len 4; hex 80000001; asc     ;;

Record lock, heap no 3 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000314; asc       ;;
 1: len 6; hex 000000001f4e; asc      N;;
 2: len 7; hex bd00000175011e; asc     u  ;;
 3: len 4; hex 80000002; asc     ;;

Record lock, heap no 4 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000315; asc       ;;
 1: len 6; hex 000000001f4e; asc      N;;
 2: len 7; hex bd00000175012c; asc     u ,;;
 3: len 4; hex 80000003; asc     ;;

Record lock, heap no 5 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000406; asc       ;;
 1: len 6; hex 000000002113; asc     ! ;;
 2: len 7; hex 92000001800110; asc        ;;
 3: len 4; hex 80000004; asc     ;;

Record lock, heap no 6 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 00000000040c; asc       ;;
 1: len 6; hex 000000002116; asc     ! ;;
 2: len 7; hex 95000001590110; asc     Y  ;;
 3: len 4; hex 80000004; asc     ;;

Record lock, heap no 7 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000412; asc       ;;
 1: len 6; hex 00000000211c; asc     ! ;;
 2: len 7; hex 990000015c0110; asc     \  ;;
 3: len 4; hex 80000004; asc     ;;

Record lock, heap no 8 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000419; asc       ;;
 1: len 6; hex 00000000211e; asc     ! ;;
 2: len 7; hex 9b0000014d0110; asc     M  ;;
 3: len 4; hex 80000004; asc     ;;

Record lock, heap no 9 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000500; asc       ;;
 1: len 6; hex 00000000230a; asc     # ;;
 2: len 7; hex 8a0000013c0110; asc     <  ;;
 3: len 4; hex 80000004; asc     ;;

TABLE LOCK table `tmp`.`#sql2cfe_1_0` trx id 8970 lock mode IX
RECORD LOCKS space id 18 page no 3 n bits 80 index `GEN_CLUST_INDEX` of table `test`.`t1` trx id 8970 lock_mode X locks rec but not gap
Record lock, heap no 9 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 6; hex 000000000500; asc       ;;
 1: len 6; hex 00000000230a; asc     # ;;
 2: len 7; hex 8a0000013c0110; asc     <  ;;
 3: len 4; hex 80000004; asc     ;;
...some output removed...

Now we get to see the full information about the InnoDB locks.

There is one more thing in the SHOW ENGINE INNODB STATUS output you should pay attention to: deadlocks.

As you can see, the record locks use the GEN_CLUST_INDEX index, which is the clustered index InnoDB creates when no primary key is specified. For some locking issues adding one or more indexes can fix the problem, as InnoDB then doesn't need to take as many row locks.
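
For the t1 in this example that could be as simple as promoting id to a primary key (a sketch, assuming the id values are unique):
mysql [test] > ALTER TABLE t1 ADD PRIMARY KEY (id);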

Saturday, October 25, 2014

When your query is blocked, but there is no blocking query - Part 2

In my previous post I talked about a transaction which blocked other transactions without doing anything. I talked about finding data from the blocking transaction using SYS and performance_schema.

But what are the possible solutions?

The first solution is to (automatically) kill the blocking transactions. Long-running transactions can also stall purging in InnoDB. See this blog post by Mark Leith about a possible solution.
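
A minimal sketch of the manual variant, using the information_schema tables from the previous post to find the processlist ID of the blocker and then killing that connection:
mysql [information_schema] > SELECT b.trx_mysql_thread_id
    ->   FROM innodb_lock_waits w
    ->   JOIN innodb_trx b ON b.trx_id = w.blocking_trx_id;
mysql [information_schema] > KILL 1; -- the ID returned by the query above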

The second solution would be to make the application end the transaction sooner and/or commit more often. Depending on your application this might or might not work. I consider this the best solution.

The third solution is to change the transaction isolation level of the blocking transaction to READ COMMITTED.

mysql [test] > set transaction isolation level read committed;
Query OK, 0 rows affected (0.00 sec)

mysql [test] > start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql [test] > insert into t2 select * from t1;
Query OK, 6 rows affected (0.00 sec)
Records: 6  Duplicates: 0  Warnings: 0

This works. The transaction won't need to lock as much as it did with REPEATABLE READ, and the INSERT/DELETE on t1 from another transaction won't be blocked. Be careful to set binlog_format to ROW before trying this, as READ COMMITTED requires row-based logging to work.

mysql [test] > set session binlog_format=STATEMENT;
ERROR 1559 (HY000): Cannot switch out of the row-based binary log 
format when the session has open temporary tables
mysql [test] > drop table t2;
Query OK, 0 rows affected (0.01 sec)

mysql [test] > set session binlog_format=STATEMENT;
Query OK, 0 rows affected (0.00 sec)

mysql [test] > create temporary table t2 (id int);                              
Query OK, 0 rows affected (0.01 sec)

mysql [test] > set transaction isolation level read committed;                  
Query OK, 0 rows affected (0.00 sec)

mysql [test] > insert into t2 select * from t1;
ERROR 1665 (HY000): Cannot execute statement: impossible to write to 
binary log since BINLOG_FORMAT = STATEMENT and at least one table 
uses a storage engine limited to row-based logging. InnoDB is limited 
to row-logging when transaction isolation level is READ COMMITTED or 
READ UNCOMMITTED.

Setting the transaction isolation level can also be done in my.cnf.
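
A minimal my.cnf sketch for that; as shown above, READ COMMITTED needs row-based binary logging:
[mysqld]
transaction-isolation = READ-COMMITTED
binlog-format         = ROW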

When your query is blocked, but there is no blocking query

When I queried information_schema.innodb_trx (introduced in 5.1 with the InnoDB Plugin) I noticed there were a few transactions in LOCK WAIT state.

Example:
mysql [information_schema] > select trx_id,trx_state 
    -> from information_schema.innodb_trx;
+--------+-----------+
| trx_id | trx_state |
+--------+-----------+
| 7974   | LOCK WAIT |
| 7962   | RUNNING   |
+--------+-----------+
2 rows in set (0.00 sec)

Then I made a query joining a few information_schema and performance_schema tables to find out which query was blocking my transactions. It turned out that the blocking transaction had trx_query=NULL. So my query was blocked by a transaction doing nothing. That's not really helpful.

Let's try to recreate the situation and see exactly what happened. I'll use two sessions for the transactions and a third to monitor the server.

The first session:
mysql [test] > create table t1 (id int);
Query OK, 0 rows affected (0.01 sec)

mysql [test] > insert into t1 values(1),(2),(3);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql [test] > create temporary table t2 (id int);
Query OK, 0 rows affected (0.01 sec)

mysql [test] > start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql [test] > insert into t2 select * from t1;
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

The second session:
mysql [test] > insert into t1 values(4);
Now the second session is waiting.

Let's start a third session to see what's happening. If the second session hits a wait timeout, you can just restart it.

mysql [information_schema] > select * from innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 8022
                 trx_state: LOCK WAIT
               trx_started: 2014-10-25 11:38:25
     trx_requested_lock_id: 8022:18:3:1
          trx_wait_started: 2014-10-25 11:38:25
                trx_weight: 2
       trx_mysql_thread_id: 19
                 trx_query: insert into t1 values(4)
       trx_operation_state: inserting
         trx_tables_in_use: 1
         trx_tables_locked: 1
          trx_lock_structs: 2
     trx_lock_memory_bytes: 360
           trx_rows_locked: 1
         trx_rows_modified: 0
   trx_concurrency_tickets: 0
       trx_isolation_level: REPEATABLE READ
         trx_unique_checks: 1
    trx_foreign_key_checks: 1
trx_last_foreign_key_error: NULL
 trx_adaptive_hash_latched: 0
 trx_adaptive_hash_timeout: 10000
          trx_is_read_only: 0
trx_autocommit_non_locking: 0
*************************** 2. row ***************************
                    trx_id: 8020
                 trx_state: RUNNING
               trx_started: 2014-10-25 11:35:07
     trx_requested_lock_id: NULL
          trx_wait_started: NULL
                trx_weight: 6
       trx_mysql_thread_id: 1
                 trx_query: NULL
       trx_operation_state: NULL
         trx_tables_in_use: 0
         trx_tables_locked: 0
          trx_lock_structs: 3
     trx_lock_memory_bytes: 360
           trx_rows_locked: 4
         trx_rows_modified: 3
   trx_concurrency_tickets: 0
       trx_isolation_level: REPEATABLE READ
         trx_unique_checks: 1
    trx_foreign_key_checks: 1
trx_last_foreign_key_error: NULL
 trx_adaptive_hash_latched: 0
 trx_adaptive_hash_timeout: 10000
          trx_is_read_only: 0
trx_autocommit_non_locking: 0
2 rows in set (0.00 sec)

mysql [information_schema] > select * from innodb_lock_waits\G
*************************** 1. row ***************************
requesting_trx_id: 8022
requested_lock_id: 8022:18:3:1
  blocking_trx_id: 8020
 blocking_lock_id: 8020:18:3:1
1 row in set (0.00 sec)

mysql [information_schema] > select * from innodb_locks\G
*************************** 1. row ***************************
    lock_id: 8022:18:3:1
lock_trx_id: 8022
  lock_mode: X
  lock_type: RECORD
 lock_table: `test`.`t1`
 lock_index: GEN_CLUST_INDEX
 lock_space: 18
  lock_page: 3
   lock_rec: 1
  lock_data: supremum pseudo-record
*************************** 2. row ***************************
    lock_id: 8020:18:3:1
lock_trx_id: 8020
  lock_mode: S
  lock_type: RECORD
 lock_table: `test`.`t1`
 lock_index: GEN_CLUST_INDEX
 lock_space: 18
  lock_page: 3
   lock_rec: 1
  lock_data: supremum pseudo-record
2 rows in set (0.00 sec)

mysql [information_schema] > use sys
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql [sys] > select * from innodb_lock_waits\G
*************************** 1. row ***************************
     waiting_trx_id: 8022
     waiting_thread: 19
      waiting_query: insert into t1 values(4)
    waiting_lock_id: 8022:18:3:1
  waiting_lock_mode: X
  waiting_lock_type: RECORD
 waiting_lock_table: `test`.`t1`
 waiting_lock_index: GEN_CLUST_INDEX
    blocking_trx_id: 8020
    blocking_thread: 1
     blocking_query: NULL
   blocking_lock_id: 8020:18:3:1
 blocking_lock_mode: S
 blocking_lock_type: RECORD
blocking_lock_table: `test`.`t1`
blocking_lock_index: GEN_CLUST_INDEX
1 row in set (0.00 sec)

The last query is from the SYS schema. You'll need version 1.3.0 of SYS for this to work.

So we now know which transaction is blocking which, but we don't yet know what that transaction is doing, as blocking_query is NULL.

But we have blocking_thread=1. With performance_schema we can find the statements executed in that thread:
mysql [performance_schema] > SELECT SQL_TEXT FROM events_statements_history_long 
    -> WHERE thread_id=(SELECT thread_id FROM threads WHERE processlist_id=1) 
    -> order by TIMER_START;
+---------------------------------+
| SQL_TEXT                        |
+---------------------------------+
| rollback                        |
| start transaction               |
| insert into t2 select * from t1 |
+---------------------------------+
3 rows in set (0.00 sec)

You should be aware that there are three IDs we're dealing with here:
  • The InnoDB transaction ID (trx_id)
  • The MySQL processlist ID (information_schema.processlist.ID, used in blocking_thread)
  • The MySQL thread ID (performance_schema.threads.thread_id)
This can help you identify what the application in the idle connection was doing and what might happen to it if you just killed the query.

It might also be possible to find the OS pid of the client if the application uses a connector which supports connection attributes. The host on which this pid lives can be found in the processlist.
mysql [performance_schema] > select * from session_connect_attrs where PROCESSLIST_ID=2;
+----------------+-----------------+----------------+------------------+
| PROCESSLIST_ID | ATTR_NAME       | ATTR_VALUE     | ORDINAL_POSITION |
+----------------+-----------------+----------------+------------------+
|              2 | _os             | linux-glibc2.5 |                0 |
|              2 | _client_name    | libmysql       |                1 |
|              2 | _pid            | 25415          |                2 |
|              2 | _client_version | 5.6.21         |                3 |
|              2 | _platform       | x86_64         |                4 |
|              2 | program_name    | mysql          |                5 |
+----------------+-----------------+----------------+------------------+
6 rows in set (0.00 sec)

For this procedure to succeed you'll need the following:
  1. MySQL 5.6 or newer.
  2. performance_schema must be enabled.
  3. In setup_consumers, events_statements_history_long must be enabled (see the statement below).
  4. The performance_schema_events_statements_history_long_size variable must be set large enough. Changing it requires a restart of the server. This variable specifies how many statements are kept in the events_statements_history_long table; you probably need to raise it on a busy server to still catch the statements of the blocking transaction, as that transaction might have been idle for quite some time.
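
Enabling the consumer from item 3 can be done at runtime:
mysql [performance_schema] > UPDATE setup_consumers SET ENABLED = 'YES'
    -> WHERE NAME = 'events_statements_history_long';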

Sunday, October 12, 2014

Disabling old_passwords=1

It is possible to disallow users from using old_passwords=1. This can be done by adding 'maximum-old_passwords=0' to your my.cnf.

This prevents users from generating password hashes in the pre-4.1 format. In MySQL 5.7 old_passwords=1 is already disabled, so this is only useful on 5.6 and earlier.

Be aware that this also restricts old_passwords=2, which is needed for SHA-256 password hashes.
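
The my.cnf side of this is a single line. With it in place, attempts to set old_passwords=1 are silently clamped back to 0, as the session below shows:
[mysqld]
maximum-old_passwords = 0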

mysql> select @@old_passwords;
+-----------------+
| @@old_passwords |
+-----------------+
|               0 |
+-----------------+
1 row in set (0.00 sec)

mysql> set old_passwords=1;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+----------------------------------------------+
| Level   | Code | Message                                      |
+---------+------+----------------------------------------------+
| Warning | 1292 | Truncated incorrect old_passwords value: '1' |
+---------+------+----------------------------------------------+
1 row in set (0.00 sec)

mysql> select @@old_passwords;
+-----------------+
| @@old_passwords |
+-----------------+
|               0 |
+-----------------+
1 row in set (0.00 sec)

Sunday, September 21, 2014

Downgrading from MySQL 5.6 to MySQL 5.5

Last week I had to downgrade from MySQL 5.6 to 5.5. The reason for this was that the application used a very old Connector/J, which is incompatible with MySQL 5.6 because of the removal of the SET OPTION syntax.

We're now planning to upgrade Connector/J to be able to upgrade to 5.6 again.

There are two methods of downgrading:
  • Dump/Restore with mysqldump. This is easy and reliable, but can take more time.
  • In place (replace binaries, don't change data). This is fast, but won't work if file formats have changed.
As expected this is documented in the MySQL Reference Manual.

I went for the in-place method. I expected this to work without many issues, as this database was not using fancy new features like fulltext indexes for InnoDB. All tables used the Antelope format, and as both MySQL versions support Antelope and Barracuda this should be fine. I don't know why Oracle didn't introduce a new Cheetah format with the introduction of the fulltext indexes.

The issues I encountered were:
  • A different innodb_log_file_size default (had to set it in my.cnf)
  • Missing mysql.host table.  Bug #73634 (Fixed by copying it from another instance)
  • Various changes in the privilege tables. (Fixed by dropping them and running mysql_upgrade)
  • Changes in mysql.user (set plugin to '' where plugin='mysql_native_password'; see the sketch after this list)
  • Warnings about unknown flag 50. Bug #72371 (Not a Bug)
This server didn't use replication, so that wasn't an issue.
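
For the mysql.user change, a statement along these lines should do it (a sketch; back up the grant tables first and check the affected row count):
mysql> UPDATE mysql.user SET plugin = '' WHERE plugin = 'mysql_native_password';
mysql> FLUSH PRIVILEGES;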

CIS Oracle MySQL 5.6 Security Benchmark

I've been working on the CIS security benchmark for Oracle MySQL 5.6. There already is a benchmark document for earlier versions of MySQL.

The benchmark document helps you to ensure you have a secure configuration for MySQL.

Center for Internet Security (CIS) is a nonprofit organization.

The Call For Participation is now open.

If you are already a CIS member then you can download the document here. If you don't have an account you can register for free.

Sunday, August 31, 2014

Putting MySQL Cluster in a container

To get more familiar with Docker and to create a test setup for MySQL Cluster, I created Docker images for the various components of MySQL Cluster (a.k.a. NDB Cluster).

At first I created a Fedora 20 container and ran all components in one container. That worked and is quite easy to set up, but that's not how one is supposed to use Docker.

So I created Dockerfiles for all components and one base image.

The base image:
  • contains the MySQL Cluster software
  • has libaio installed
  • has a mysql user and group 
  • serves as a base for the other images
The management node (ndb_mgmd) image:
  • Has ndb_mgmd as entrypoint
  • Has a config.ini for the cluster config
  • Should be started with "--name=mymgm01"
The data node (ndbmtd) image:
  • Has ndbmtd as entrypoint
  • Uses the connect string: "host=${MGM01_PORT_1186_TCP_ADDR}:1186"
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • You should create 2 containers of this type to create a nodegroup of 2 nodes.
The API node (mysqld) image:
  • has a my.cnf
  • Runs mysqld_safe
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • The ndb-connectstring is given as a parameter to mysqld_safe as it comes from an environment variable; it's not possible to use environment variables from within my.cnf. Docker is supposed to also update /etc/hosts, but that didn't work for me.
  • You should expose port 3306 for your application
The management client (ndb_mgm) image:
  • Runs ndb_mgm as entrypoint
  • Should be started with "--link mymgm01:mgm01" to allow it to connect to the management node.
  • Running the ndb_mgm in a container removes the need to publish port 1186 on the management server. More info here.
  • You can override the entrypoint to run other NDB utilities like ndb_desc or ndb_select_all
The images can be found on https://registry.hub.docker.com/u/dveeden/mysqlcluster72/
The Dockerfiles can be found on https://github.com/dveeden/dve-docker  
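
Wiring the containers together then looks roughly like this (a sketch; the image tags are hypothetical, check the registry for the actual names):
# start the management node first
docker run -d --name=mymgm01 dveeden/mysqlcluster72:mgmd
# two data nodes linked to the management node (together they form a nodegroup)
docker run -d --name=myndb01 --link mymgm01:mgm01 dveeden/mysqlcluster72:ndbmtd
docker run -d --name=myndb02 --link mymgm01:mgm01 dveeden/mysqlcluster72:ndbmtd
# an API node (mysqld) with port 3306 published for the application
docker run -d --link mymgm01:mgm01 -p 3306:3306 dveeden/mysqlcluster72:mysqld
# an interactive management client
docker run -it --link mymgm01:mgm01 dveeden/mysqlcluster72:ndb_mgm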

Possible improvements
  • Use hostnames in the config.ini instead of IPv4 addresses. This makes it more dynamic, but it means updating /etc/hosts or fiddling with DNS.
  • Using VOLUMES in the Dockerfiles to make working with data easier.

Sunday, August 17, 2014

The new cloud backup option of MySQL Enterprise Backup

MySQL Enterprise Backup 3.10 supports backups to the cloud. The only supported cloud service is Amazon S3.

When the cloud destination is used, mysqlbackup will upload the backup as an image file.

You can specify all options on the command line:
mysqlbackup --cloud-service=s3 --cloud-aws-region=eu-west-1 \
--cloud-access-key-id=AKIAJLGCPXEGVHCQD27B \
--cloud-secret-access-key=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 \
--cloud-bucket=meb_myserver --cloud-object-key=firstbackup --cloud-trace=0 \
--backup-dir=/tmp/firstbackup --backup-image=- --with-timestamp backup-to-image

But you can also put the settings in my.cnf:
[mysqlbackup_cloud]
cloud-service=s3
cloud-aws-region=eu-west-1
cloud-access-key-id=AKIAJLGCPXEGVHCQD27B
cloud-secret-access-key=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
cloud-bucket=meb_myserver
cloud-trace=0
backup-dir=/data/cloudbackup
backup-image=-
with-timestamp

The with-timestamp option is important, as the backup won't start if the backup-dir already exists. This is because mysqlbackup leaves the backup directory in place after uploading the backup. The backup directory will only contain meta information and the log file, not the actual backup.

By using a group suffix like _cloud you can put settings for multiple types of backups in one cnf file.

mysqlbackup --defaults-group-suffix='_cloud' \
--cloud-object-key=backup_2014081701 backup-to-image

The account you're using should have this policy to be allowed to read from and write to the S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1408302840000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket", 
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::meb_myserver/*"
      ]
    }
  ]
}

This looks like a good option to me if you're already using mysqlbackup and Amazon. It would be nice if the next version would support other cloud providers (e.g. OpenStack Swift, Ceph). Implementing this should be easy for those with an S3 compatibility layer, but will probably take more time for others.

I did find some bugs (just search for tag=cloud on http://bugs.mysql.com if you're interested).

Thursday, August 7, 2014

MySQL User Group Meetup in Amsterdam

This Tuesday Markus Winand will talk at the MySQL User Group NL meetup about "Indexes: The neglected performance all-rounder".
Markus is known for the http://use-the-index-luke.com website and the SQL Performance Explained book.

Date: Tuesday August 12
Location: Marktplaats/eBay Office Amsterdam
Registration: http://www.meetup.com/MySQL-User-Group-NL/events/196440532/

Sunday, July 20, 2014

Decoding (encrypted) MySQL traffic with Wireshark

In a comment on my post about Using SSL with MySQL, xiaochong zhang asked if it is possible to decode SSL/TLS encrypted MySQL traffic. The short answer is: it depends.

To test this we need a MySQL server which is SSL enabled. I used MySQL Sandbox to create a sandboxed 5.6.19 server. Then I used mysslgen to create the config and the certificates.

$ make_sandbox 5.6.19
$ ./mysslgen.py --config=sandboxes/msb_5_6_19/my.sandbox.cnf --ssldir=sandboxes/msb_5_6_19/ssl

This assumes there already is an extracted tarball of MySQL 5.6.19 in ~/mysql/5.6.19.

The mysslgen.py script will return a message with the changes you should make in your mysqld and client sections of the my.sandbox.cnf file. Then restart the server to make it active.

For SSL to work we need to connect using TCP/IP instead of over a UNIX socket. So we connect with "./my sql -h 127.0.0.1". Now execute "\s" or "status" to see if we're indeed using SSL.

It probably looks like this:

mysql [127.0.0.1] {msandbox} ((none)) > \s
--------------
/home/dveeden/opt/mysql/5.6.19/bin/mysql  Ver 14.14 Distrib 5.6.19, for linux-glibc2.5 (x86_64) using  EditLine wrapper

Connection id:  3
Current database: 
Current user:  msandbox@localhost
SSL:   Cipher in use is DHE-RSA-AES256-SHA
Current pager:  stdout
Using outfile:  ''
Using delimiter: ;
Server version:  5.6.19 MySQL Community Server (GPL)
Protocol version: 10
Connection:  127.0.0.1 via TCP/IP
Server characterset: latin1
Db     characterset: latin1
Client characterset: utf8
Conn.  characterset: utf8
TCP port:  5619
Uptime:   1 hour 32 min 48 sec

Threads: 1  Questions: 18  Slow queries: 0  Opens: 67  Flush tables: 1  Open tables: 60  Queries per second avg: 0.003
--------------

Now disconnect and start the trace.

sudo tcpdump -i lo -s 65535 port 5619 -w /tmp/mysql.pcap

First connect w/o SSL: "./my sql -h 127.0.0.1 --skip-ssl". And then with SSL: "./my sql -h 127.0.0.1"
Stop the tcpdump session, then start Wireshark and open the mysql.pcap file. Now we can inspect the protocol. If MySQL were using the default port (3306) Wireshark would decode the traffic automatically, but here we have to use 'Decode As...' to tell Wireshark this is MySQL traffic. The server greeting packet and the login request should now be visible. In the login request there are client capability flags; one of the flags indicates 'Switch to SSL after handshake' and should be set for the SSL session.



Both SSL and non-SSL sessions use the same port and start with an unencrypted handshake; the SSL session switches to SSL after the handshake. This is a bit like STARTTLS for IMAP. The current version of the MySQL protocol dissector is not (yet) aware of some of the new information in the initial handshake, so the information for the authentication plugins and connection attributes is not decoded yet. The documentation about the protocol can be found in the MySQL Internals manual.

So that's the plaintext part. Now we get to the SSL part. In my setup the default cipher suite used for SSL is DHE-RSA-AES256-SHA. With OpenSSL's ciphers command we can get some more details:

$ openssl ciphers -v 'DHE-RSA-AES256-SHA'
DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1

This means that SHA-1 is used for the MAC part, AES-256 is used for encryption, and the key exchange is done with DH (Diffie-Hellman). This poses a problem: DH generates a session key, and we don't have that key in the traffic dump as it's never sent over the network. We could use gdb (and maybe a debug trace?) to get the DH keys out, but for now there is an easier solution: use a different cipher suite.

So start tcpdump again and run "./my sql -h 127.0.0.1 --ssl-cipher=AES256-SHA". This cipher suite uses RSA for the key exchange instead of DH. This means everything we need is sent over the network or is present in the SSL certificate and/or key.

Now start Wireshark again, use 'Decode As...' and choose SSL. Then go to Edit→Preferences→Protocols→SSL→'RSA keys list' and add the server's SSL key. Now you should see the decoded traffic.
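
The same decryption can also be done non-interactively with tshark; a sketch, assuming a Wireshark release from this era (which still uses the ssl.keys_list preference) and the server key file as written by mysslgen (the file name here is a guess, adjust it to your setup):
$ tshark -r /tmp/mysql.pcap \
    -d tcp.port==5619,mysql \
    -o "ssl.keys_list:127.0.0.1,5619,mysql,sandboxes/msb_5_6_19/ssl/server-key.pem" \
    -V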


So decoding SSL/TLS encrypted MySQL traffic is possible. But you need to have:
  • All traffic since the beginning of the TCP/IP connection
  • The server's SSL key
  • The DH session key if DH is used. (you might want to read about Forward secrecy (PFS) if you're interested in the details).

Wednesday, July 16, 2014

Oracle Critical Patch Update for MySQL

Oracle has released the Critical Patch Update (CPU) for July 2014. The Oracle MySQL Risk Matrix lists 10 security fixes.

It took me some time to understand the subcomponent names, so here is the list with the full name of each subcomponent:

Subcomponent   Full name
SRFTS          Server: Full Text Search
SRINFOSC       Server: INFORMATION_SCHEMA
SRCHAR         Server: Character sets
ENARC          Engine: Archive
SROPTZR        Server: Optimizer
SRREP          Server: Replication
SRSP           Server: Stored Procedure
ENFED          Engine: Federated

I don't think there is anything really important in the list, but it might be a good trigger to update to the latest release.

Upgrading should be easy, especially if you're using the APT or YUM repositories from Oracle. If you're upgrading to a new major release (e.g. from 5.5 to 5.6) then you should read the instructions in the reference manual.

Sunday, July 6, 2014

The MySQL 6.0 goodybag

After MySQL 5.1 was released, work started on MySQL 5.2, but this was then renamed to MySQL 6.0. Many new features were introduced in 6.0, but stabilizing this branch became a huge task. Eventually the decision was made to start off with a stable branch and backport the new features from 6.0. This is how many of the 6.0 features landed in 5.5 and 5.6.

So let's see which features 6.0 brought and where they landed. I'll use the What Is New in MySQL 6.0 section of the MySQL 6.0 Reference Manual for this.

  • The Falcon storage engine. This never landed anywhere as far as I know. It's not even included in the list of storage engines in the MariaDB knowledgebase. As both InnoDB and MySQL are now part of Oracle, I don't see any reason for Falcon to exist anymore.
  • 4-byte utf8 and support for utf16 and utf32. This is included in MySQL 5.5 together with many other Unicode enhancements.
  • Database backup with SQL. This allows you to make backups by executing 'BACKUP DATABASE' SQL statements. This has not landed anywhere as far as I know, but some of the code might have made it into MySQL Enterprise Backup (both use backup_history and backup_progress tables in the mysql database). This might be an interesting thing to have, but with MEB there is no real need for it.
  • Subquery enhancements, BKA and MRR. This all made it into MySQL 5.6.
  • LOAD XML. This made it into MySQL 5.5, but I don't think it is used often.
  • The Maria storage engine. This is a transactional version of the MyISAM storage engine. It is not used in Oracle MySQL. It is included in MariaDB, but renamed to Aria as the original name might otherwise cause confusion. As far as I know there is not much development being done on Aria.
  • Foreign Keys. Of course MySQL supports foreign keys (now even with NDB!), but those are implemented in the storage engine, not in the server. Server-level foreign key support was on the roadmap for 6.x, but I don't know if it was actually implemented. Implementing this in the server could make writing storage engines easier and would probably also make it easier to combine partitioning and foreign keys.


Did I forget any 6.x features? Did you ever try 6.x? Let me know in the comments!

 

Some comments from Federico Razzoli on this:

- In 2024, it would be safe to say that Aria will never be transactional. With TRANSACTIONAL=1 the flushed rows can't be lost, but it's also slow. It's used for MariaDB internal temporary tables because it's faster for GROUP BY... but with some remarkable exceptions.
- I seem to recall that the BACKUP SQL statement was implemented, and then removed because it proved unsafe.


Wednesday, July 2, 2014

Single database backup and restore with MEB

I was recently asked whether MySQL Enterprise Backup is able to restore single databases.

My initial answer was that this was complicated, but might be doable with the Transportable Table Space (TTS) option.

But first let's go back to the basics. A common way of working with mysqldump is to get a list of databases and then loop through them, dumping the data and schema of each to a SQL file. But both backups and restores will take a lot of time as the size of the database grows, and it's a lukewarm backup at best instead of a hot backup. So that's why we have MySQL Enterprise Backup.

MySQL Enterprise Backup allows you to make a hot backup of InnoDB tables by copying the datafiles while watching the InnoDB redo log files.

On disk the data from the InnoDB storage engine consists of a system tablespace (one or more ibdataX files), the redo log files (ib_logfileX) and zero or more table-specific tablespace files (*.ibd).

The data dictionary is located in the system tablespace. This is where the tablespace ID is stored for each ibd file. The redo log files are where changes are written before they are written to the data files.

This all works fine if you use MEB to backup and restore a whole instance.

Even if you stop MySQL you can't just copy an ibd file to another server and expect it to work. This is because there might still be changes for that file in the redo log, and the tablespace ID in the system tablespace might not match.

But you can copy an ibd file to another server if you follow the right procedure. This (partly) works with 5.5, and is greatly enhanced in 5.6.

This is how it works: a FLUSH TABLES ... FOR EXPORT statement is issued for certain tables. The tables are then read-only. The changes in the redo log are written to the ibd file and the information from the system tablespace is written to a .cfg file. Then the .ibd and .cfg files can be copied to some other location, after which the tables can be unlocked.
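
In SQL the 5.6 procedure looks like this (a minimal sketch for a single table t1; the destination server must already have an identical CREATE TABLE):
-- on the source server:
mysql> FLUSH TABLES t1 FOR EXPORT;
-- copy t1.ibd and t1.cfg out of the database directory, then:
mysql> UNLOCK TABLES;
-- on the destination server:
mysql> ALTER TABLE t1 DISCARD TABLESPACE;
-- copy t1.ibd and t1.cfg into the database directory, then:
mysql> ALTER TABLE t1 IMPORT TABLESPACE;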

You could create a per-database backup with MEB using the --include-tables option. It will then copy the specified tables' .ibd files, the system tablespace and the redo logs.

This works, but the system tablespace might be big for many reasons. This is where TTS comes into play: it allows you to make a backup of only the tables, without copying the system tablespace.

With the --use-tts and --include-tables options I can back up all tables of one database and then restore one or more of these tables on another instance, without stopping the destination instance.
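
Such an invocation could look like this ('mydb' is a hypothetical database name; --include-tables takes a regular expression matched against db.table names):
mysqlbackup --user=root --password=xxxxxx --use-tts \
  --include-tables='^mydb\..*' \
  --with-timestamp --backup-dir=/data/backup backup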

Some per-database defaults like the default character set are stored in the db.opt file, but this file is not copied. So be aware!

But what if we want to do a point-in-time recovery for just one database? That should work: first restore the database you're concerned about, and then use mysqlbinlog with the --database option to replay the binlog entries for that specific database.
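
The replay step could look like this (the binlog file name, database name and start time are illustrative; pick them to match the restored backup):
mysqlbinlog --database=mydb --start-datetime='2014-07-02 12:00:00' \
    binlog.000042 | mysql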

I haven't used per-database (point-in-time) restores with MEB in production, but it all seems to work fine in a test setup.

I've always used physical backups on a per-instance basis and mysqldump for single-table and single-database backups. But this might be faster, and the data only needs to be backed up once. Also, the mysqldumps I took were not prepared for point-in-time restores, as the locking required for that would block the server for too long.

Please let me know your experiences in the comments.

Monday, June 23, 2014

On Dolphins, Panda's and Bugs


Like any good open source project, the MySQL bugs website is open for anyone to search through. This of course doesn't include the security bugs.

There is a second collection of bugs in My Oracle Support, and these bugs are only accessible to customers with a support contract. Even though I have access to MOS I still prefer to use the community bugs site. For service requests etc. I would use MOS.

The openness of the bugs database is one of the topics the IOUG MySQL Council discusses with Oracle.

The bugs database has more to offer than just information about individual bugs:

  • Bugs Statistics: This has a big matrix with components and states, some per-developer stats and some totals (did you know the fastest bug closure was 9 seconds?).
  • Bugs Tide: This gives you per-month statistics.

For both there are some filter options for version and whether to include feature requests.

You might want to read the life cycle of a MySQL bug by Valeriy Kravchuk if you're not familiar with the different states of MySQL bugs.

The bugs website offers search results not only as the HTML reports shown on the website, but also in RSS and CSV format, so they can be consumed by computer programs.

To demonstrate this I wrote this script. It fetches the CSV for my bugs and generates some graphs. As you might (or might not) notice this doesn't include my older bug reports, as I have two different reporter IDs: I had different email addresses for my MySQL account and my My Oracle account before MySQL/Sun was merged with Oracle.

This page is made with IPython Notebook, which is a tool which allows you to combine code, output, graphs and text. You can also view this notebook here.

In [1]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib

In [3]:
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from StringIO import StringIO
from datetime import datetime

This is the URL as found in the bugs website. Feel free to modify the parameters.

In [4]:
bugs_url = "http://bugs.mysql.com/search-csv.php?status=all&severity=all&mine=9242646"
  1. Fetch the data from the URL with the Requests library
  2. Convert the CSV data to a Pandas DataFrame (a DataFrame is a special kind of table)
In [5]:
bugs_req = requests.get(bugs_url)
csv_file = StringIO()
csv_file.write(bugs_req.text)
csv_file.seek(0)
bugsdf = pd.read_csv(csv_file)

Let's have a look at the data.

In [6]:
bugsdf.head()
Out[6]:
ID Entered Modified Type Status Severity Version OS Summary Assign First Assign Last
0 68132 2013-01-21 13:53:28 2013-03-26 16:14:20 Server: Docs Closed S3 5.6, 5.5.29 Any Documentation about verification with GnuPG is... Philip Olson
1 68199 2013-01-28 08:23:59 2013-05-13 13:10:00 bugs.mysql.com Closed S3 5.7 Any Month-by-month (tide) stats for 5.7 absent Sveta Smirnova
2 68200 2013-01-28 08:54:53 2013-03-10 16:03:34 Server: Privileges Closed S3 5.6.9-rc Any Password logging doesn't work as documented NaN NaN
3 68536 2013-03-01 10:29:46 2013-07-31 13:24:26 MySQL Workbench Closed S3 5.2.47 Any Better support IPv6 addresses for new connections NaN NaN
4 68571 2013-03-05 09:08:23 2014-02-26 09:41:24 Monitoring: Advisors/Rules Verified S3 NaN Microsoft Windows (Win7 SP1) False positive for 32-binary check on win64 NaN NaN

Now that we have the data, let's make a horizontal barplot of the number of bugs per category (the Type column).

We change the size of the figure as the default is too small to be readable.

In [7]:
fig = plt.figure(figsize=(8,10), dpi=100)
bugsdf.Type.value_counts(ascending=True).plot(kind='barh')
Out[7]:
<matplotlib.axes.AxesSubplot at 0x394de90>

The Version column contains the version as text. Let's grab the first three characters to get the major version for the bug. This is not really perfect as it will only return 5.1 if the string is '5.1.30, 5.5.16', but it's good enough for now.

The function will be mapped to the Version column of the dataframe. And we will save the result in a new column called major_version.

In [8]:
def getversion(inputver):
    if isinstance(inputver, float):
        return inputver
    return inputver[:3]
bugsdf['major_version'] = bugsdf.Version.map(getversion)

Besides Pandas we can use matplotlib's pyplot, which is a bit like MATLAB.

Let's build a matrix of bug category per major version and show it as a heatmap.

In [9]:
compstat = bugsdf.groupby('major_version').Type.value_counts().unstack().T
fig = plt.figure(figsize=(15,10), dpi=100)
plt.pcolor(compstat, vmin=0, vmax=5, cmap='Blues')
plt.yticks(np.arange(0.5, len(compstat.index), 1), compstat.index)
plt.xticks(np.arange(0.5, len(compstat.columns), 1), compstat.columns)
Out[9]:
([<matplotlib.axis.XTick at 0x3c067d0>,
  <matplotlib.axis.XTick at 0x3d87410>,
  <matplotlib.axis.XTick at 0x3f2d550>,
  <matplotlib.axis.XTick at 0x3f2da50>,
  <matplotlib.axis.XTick at 0x3f2df50>,
  <matplotlib.axis.XTick at 0x3f15490>,
  <matplotlib.axis.XTick at 0x3f15990>,
  <matplotlib.axis.XTick at 0x3f15e90>,
  <matplotlib.axis.XTick at 0x40983d0>,
  <matplotlib.axis.XTick at 0x4098b10>,
  <matplotlib.axis.XTick at 0x3d8f2d0>],
 <a list of 11 Text xticklabel objects>)

The same heatmap comparing major versions and components again, now with a different figure size and color scale.

In [10]:
fig = plt.figure(figsize=(8,10), dpi=100)
plt.pcolor(compstat, vmin=0, vmax=10, cmap='Blues')
plt.yticks(np.arange(0.5, len(compstat.index), 1), compstat.index)
plt.xticks(np.arange(0.5, len(compstat.columns), 1), compstat.columns)
Out[10]:
([<matplotlib.axis.XTick at 0x3f15bd0>,
  <matplotlib.axis.XTick at 0x3f176d0>,
  <matplotlib.axis.XTick at 0x42bcad0>,
  <matplotlib.axis.XTick at 0x42bcfd0>,
  <matplotlib.axis.XTick at 0x4459d50>,
  <matplotlib.axis.XTick at 0x4455710>,
  <matplotlib.axis.XTick at 0x4451310>,
  <matplotlib.axis.XTick at 0x42cdc50>,
  <matplotlib.axis.XTick at 0x42c9810>,
  <matplotlib.axis.XTick at 0x42c51d0>,
  <matplotlib.axis.XTick at 0x42bdb10>],
 <a list of 11 Text xticklabel objects>)

Now that we have the major version, let's filter on MySQL 5.6 and then graph the different values of the Status field.

In [11]:
bugsdf[bugsdf.major_version == '5.6'].Status.value_counts().plot(kind='bar')
Out[11]:
<matplotlib.axes.AxesSubplot at 0x4451e10>

Or maybe check the status for all versions.

In [12]:
bugsdf.groupby('major_version').Status.value_counts().unstack().plot(kind='barh', stacked=True)
Out[12]:
<matplotlib.axes.AxesSubplot at 0x446f790>

That's not really helpful, let's remove the Closed ones.

In [13]:
bugsdf[bugsdf.Status != 'Closed'].groupby('major_version').Status.value_counts().unstack().plot(kind='bar', stacked=True)
Out[13]:
<matplotlib.axes.AxesSubplot at 0x4464650>

The Entered and Modified fields are not yet in a 'real' date format, so they must be converted before we can use them.

In [14]:
bugsdf['Entered'] = pd.to_datetime(bugsdf.Entered)
bugsdf['Modified'] = pd.to_datetime(bugsdf.Modified)

Which are the oldest open bugs?

The - before bugsdf.Status.isin reverses the result, so it behaves like 'is not in'.

In [15]:
bugsdf[-bugsdf.Status.isin(['Closed', 'Duplicate', 'Won\'t fix', 'Can\'t repeat'])].sort(columns='Entered')[:5]
Out[15]:
ID Entered Modified Type Status Severity Version OS Summary Assign First Assign Last major_version
4 68571 2013-03-05 09:08:23 2014-02-26 09:41:24 Monitoring: Advisors/Rules Verified S3 NaN Microsoft Windows (Win7 SP1) False positive for 32-binary check on win64 NaN NaN NaN
5 68574 2013-03-05 13:42:26 2014-02-23 11:25:48 Server: InnoDB Verified S3 5.5.30, 5.6.16 Microsoft Windows (win7 sp1 ent) No instrumentation for InnoDB files in P_S on ... Marc Alff 5.5
19 68925 2013-04-11 05:57:29 2013-04-11 06:15:40 Client Verified S3 5.6.10, 5.5.29 Any Compatibility issue with mysql history ("\040"... NaN NaN 5.6
22 69147 2013-05-05 10:07:16 2013-07-28 14:43:57 Server: I_S Verified S4 5.7 Any Variable defaults in information_schema NaN NaN 5.7
33 69223 2013-05-14 06:22:48 2013-06-24 20:03:16 Server: DDL Verified S4 5.6.10 Any Give a warning on CREATE TABLE without PRIMARY... NaN NaN 5.6

Now let's find bugs which are open and haven't been modified in some time.

In [17]:
bugsdf[-bugsdf.Status.isin(['Closed', 'Duplicate', 'Won\'t fix', 'Can\'t repeat'])] \
      [bugsdf.Modified < datetime(2013, 6, 1)] \
      [bugsdf.Modified > datetime(1970, 1, 1)] 
Out[17]:
ID Entered Modified Type Status Severity Version OS Summary Assign First Assign Last major_version
19 68925 2013-04-11 05:57:29 2013-04-11 06:15:40 Client Verified S3 5.6.10, 5.5.29 Any Compatibility issue with mysql history ("\040"... NaN NaN 5.6
35 69226 2013-05-14 10:31:09 2013-05-14 16:09:09 Server: Options Verified S3 5.6.11 Any Status variable for SSL/TLS implementation NaN NaN 5.6
36 69314 2013-05-24 21:33:48 2013-05-25 07:55:24 Server: Options Verified S3 5.7.1 Any ignore-db-dir option doesn't work for database... NaN NaN 5.7

Besides searching for bugs by reporter you can also use this to search on other conditions.