Thursday, April 14, 2011

More Stable Query Execution Times by Improving InnoDB Statistics

As part of ensuring that changes to the MySQL Optimizer do not introduce performance regressions, we have started a project to create a new performance regression test suite. One component of this test suite will be the DBT-3 benchmark. However, we observed that the execution times for DBT-3 varied so much that, in its present form, it was not usable for detecting performance regressions.

In order to get a better understanding of what was going on, I looked closer at one of the queries that were run, Query 8. For this particular query, which contains an 8-table join, the execution times varied from 1 minute to 5 hours! Looking at 100 runs of this query, I detected 8 different query execution plans. Four of these plans represented differences in the sequence in which the tables were joined, while the remaining four differed from the first four only in which indexes were used.

Since the MySQL Optimizer is cost based, when the execution plans vary, this is usually because the underlying statistics reported by the storage engine vary. One way to investigate the statistics that the optimizer bases its decisions on is to use the SHOW INDEX command:

mysql> show index from customer;
+----------+------------+---------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table    | Non_unique | Key_name      | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------+------------+---------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| customer |          0 | PRIMARY       |            1 | c_custkey   | A         |      150000 |     NULL | NULL   |      | BTREE      |         |               |
| customer |          1 | i_c_nationkey |            1 | c_nationkey | A         |          47 |     NULL | NULL   | YES  | BTREE      |         |               |
+----------+------------+---------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

In the above example, the customer table has two indexes: a primary index on c_custkey and a secondary index on c_nationkey. The most important thing to note in this output is the Cardinality column, the estimated number of distinct key values. For the two indexed columns of the customer table, the cardinality is 150,000 and 47, respectively.
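
To see why this matters: the optimizer estimates the number of rows returned by an index lookup as, roughly, the number of rows in the table divided by the cardinality of the index. With 150,000 rows and a cardinality of 47 for i_c_nationkey, a lookup on c_nationkey is estimated to return about 150000 / 47 ≈ 3200 rows. If the cardinality estimate fluctuates, this row estimate, and with it the relative cost of the candidate plans, fluctuates too.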

The InnoDB statistics are calculated on the fly the first time a table is used after the server has been started. The statistics may be automatically recalculated at various times, and ANALYZE TABLE can be used to force a recalculation. For a description of how InnoDB calculates its statistics, see the MySQL 5.5 Reference Manual. The important thing to note is that a system variable, innodb_stats_sample_pages, controls the accuracy of the statistics: it determines the number of index pages that are sampled when the statistics are calculated. The default value is 8.
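
The current value can be checked directly from the mysql client:

mysql> show variables like 'innodb_stats_sample_pages';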

Continuing with the example above, I ran ANALYZE on the customer table, and here is what I got:

mysql> analyze table customer;
+---------------+---------+----------+----------+
| Table         | Op      | Msg_type | Msg_text |
+---------------+---------+----------+----------+
| dbt3.customer | analyze | status   | OK       |
+---------------+---------+----------+----------+
1 row in set (0.03 sec)

mysql> show index from customer;
+----------+------------+---------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table    | Non_unique | Key_name      | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------+------------+---------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| customer |          0 | PRIMARY       |            1 | c_custkey   | A         |      150000 |     NULL | NULL   |      | BTREE      |         |               |
| customer |          1 | i_c_nationkey |            1 | c_nationkey | A         |         134 |     NULL | NULL   | YES  | BTREE      |         |               |
+----------+------------+---------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0.01 sec)

As the observant reader has already noticed, the estimated cardinality of the column c_nationkey has changed significantly. Running ANALYZE several times, I saw numbers as low as 5 and as high as 135 for this column. (For reference, the nation table in DBT-3 has 25 rows, so the true number of distinct values in c_nationkey is at most 25.)

The question is then whether we can get better and more stable statistics by increasing the setting of innodb_stats_sample_pages. I increased the value to 100, and ran ANALYZE 100 times. This time, the estimated cardinality was between 22 and 84, and 80% of the time it was between 30 and 60.
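
Since innodb_stats_sample_pages is a dynamic global variable in MySQL 5.5, the whole experiment can be run from the mysql client without restarting the server; one round boils down to the following statements (the repetition itself is not shown here):

mysql> set global innodb_stats_sample_pages = 100;
mysql> analyze table customer;
mysql> show index from customer;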

So how did this affect the execution plan for query 8? Running the query another 100 times with this new setting of innodb_stats_sample_pages gave only three different execution plans, all with the same join order. The remaining difference was in which index was used to access one of the tables. I will discuss how to further reduce the number of plans in a future blog post.
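
Such plan differences can be observed with EXPLAIN: the order of the rows in its output gives the join order, and the key column shows which index is used for each table. The full Query 8 is too long to reproduce here, so as a simple stand-in on the same schema:

mysql> explain select c_name, n_name
    ->     from customer join nation on c_nationkey = n_nationkey;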

Note that increasing innodb_stats_sample_pages will cause the calculation of statistics to take longer. In my case, sampling 100 pages did not seem to take a noticeable amount of time, and this should normally not cause any problems when ANALYZE TABLE is run explicitly. However, if InnoDB decides to do an automatic recalculation, the increased time may have some unwanted impact. Hence, this variable should be used with care, and its value should not be set higher than necessary to get reasonably accurate estimates.
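
Also remember that a value set with SET GLOBAL does not survive a server restart; to make the setting permanent, put it in the configuration file (shown here with the value used above):

[mysqld]
innodb_stats_sample_pages = 100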

5 comments:

  1. Did the column with the variable rows per key have a lot of NULL values? If so, we currently get around this by using innodb_stats_method=nulls_ignored in XtraDB. The other approach we take is to increase the number of samples taken and persist the stats between restarts with innodb_use_sys_stats_table (also an XtraDB feature).

  2. Prior to the MDL changes in MySQL 5.5, sampling more pages could be a disaster on mysqld restart, as sampling is done on table open and that is serialized by LOCK_open.

    Sampling after too many updates can still cause stalls for that table in 5.5+. The Facebook patch has a change to let other queries on the table proceed while the updated stats are collected.

  3. @rshuddleston: No, there were no NULL values.
    Persistent InnoDB stats are available in the new MySQL 5.6.2 milestone release. See http://blogs.innodb.com/wp/2011/04/innodb-persistent-statistics-at-last/

    @Mark Callaghan: Thanks for sharing your experience with InnoDB statistics. It is true that increasing the sampling size may give some unwanted side effects. I think your issues are solved with the new persistent statistics in 5.6, since that gives the user full control over when the statistics are updated.

  4. Øystein, I was able to repeat your findings using DBT-3 scale 1. Even with innodb_stats_sample_pages = 512, three query plans remain, out of which two have a 60/40 chance of being picked.

    What is your solution for further reducing the number of plans?

  5. @Philip Stoev: In order to further reduce the number of plans, I switched to using InnoDB persistent statistics, which is a new feature in the recent 5.6 milestone release. See http://oysteing.blogspot.com/2011/05/innodb-persistent-statistics-save-day.html for more details.
