Author: 陈江
While fixing a large number of bugs in which GROUP BY + ROLLUP produced incorrect result sets for the next PolarDB release, the author took the opportunity to abstract and summarize the execution flow of non-SPJ operators. Without understanding the underlying design, reverse-engineering the logic purely from the code costs a great deal of effort; hopefully this article helps readers diving into MySQL internals. The code discussed is based on MySQL 8.4.2.
Let's start with a question. When we look at the execution plan of a common GROUP BY + ORDER BY query:
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 (a, b) VALUES (1,1), (1,2), (1,3);
INSERT INTO t1 SELECT a + 1, b FROM t1;
mysql> desc format=tree SELECT a*a, sum(b)+1, count(1)+1 as cnt FROM t1 GROUP BY a ORDER BY cnt DESC;
+----------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+----------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Sort: cnt DESC
-> Table scan on <temporary>
-> Aggregate using temporary table
-> Table scan on t1 (cost=0.85 rows=6)
|
+----------------------------------------------------------------------------------------------------------------------------------------------------+
First, a quick refresher on relational algebra: the output of an upstream operator is the input of the downstream operator, and once an operator finishes it passes its output further downstream, so execution can be understood as a sequence of stages. Next, let's look at how the key non-SPJ operators execute in MySQL.
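This pull-based pipeline is the volcano (iterator) model that MySQL's executor uses. A minimal self-contained sketch of the idea (all class names below are hypothetical illustrations, not MySQL's actual iterators):

```cpp
#include <memory>
#include <optional>
#include <vector>

// Minimal volcano-model sketch: each operator pulls rows from its child.
// All class names here are hypothetical, not MySQL's real iterator types.
struct RowIterator {
  virtual ~RowIterator() = default;
  virtual std::optional<int> Read() = 0;  // one column per row, for brevity
};

struct ScanIterator : RowIterator {
  std::vector<int> rows;
  size_t pos = 0;
  explicit ScanIterator(std::vector<int> r) : rows(std::move(r)) {}
  std::optional<int> Read() override {
    if (pos >= rows.size()) return std::nullopt;  // EOF
    return rows[pos++];
  }
};

// A filter operator: consumes its child's output, produces its own output.
struct FilterIterator : RowIterator {
  std::unique_ptr<RowIterator> child;
  explicit FilterIterator(std::unique_ptr<RowIterator> c) : child(std::move(c)) {}
  std::optional<int> Read() override {
    while (auto row = child->Read())
      if (*row % 2 == 0) return row;  // keep even values only
    return std::nullopt;
  }
};

std::vector<int> Drain(RowIterator &it) {
  std::vector<int> out;
  while (auto row = it.Read()) out.push_back(*row);
  return out;
}
```

Each `Read()` call propagates down the tree and one row bubbles back up, which is exactly the "multiple stages" view above.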
Filesort(THD *thd, Mem_root_array<TABLE *> tables, bool keep_buffers,
ORDER *order, ha_rows limit_arg, bool remove_duplicates,
bool force_sort_rowids, bool unwrap_rollup);
Oddly, in version 8.0.18 the Filesort constructor took a single TABLE object, which required the upstream operator to guarantee that its output stream lived in a single table. As a result, in 8.0.18 even a simple join + sort went through one extra temporary-table conversion:
mysql> desc format=tree select * from t1,t2 order by a;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Sort: <temporary>.a
-> Stream results
-> Inner hash join (no condition) (cost=0.80 rows=2)
-> Table scan on t1 (cost=0.45 rows=2)
-> Hash
-> Table scan on t2 (cost=0.35 rows=1)
The hypergraph optimizer in 8.4.2 does not have this problem, but the old optimizer has seen essentially no refactoring upstream and still behaves as in the old versions.
set optimizer_switch='hypergraph_optimizer=on';
mysql> desc format=tree select * from t1,t2 order by t1.a,t2.a;
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Sort: t1.a, t2.a (cost=10.3..10.3 rows=18)
-> Nested loop inner join (cost=0.0556..1 rows=18)
-> Table scan on t2 (cost=0.0833..0.25 rows=3)
-> Table scan on t1 (cost=0.0417..0.25 rows=6)
MySQL needs a temporary table in two kinds of scenarios.
So take a common GROUP BY + sort example. Assuming GROUP BY is implemented with filesort + AGG, the operator tree is:
FileSort
|->Agg          // GROUP BY is complete at this point
   |->Filesort
      |->Scan table
Because of the restriction that Filesort must be bound to a table, and because the Agg operator's output is not bound to a TABLE object [2], a temporary table is needed to bridge them. In that case a streaming operator is added on top of the AGG operator to convert its output into a temporary table, satisfying filesort's input requirement. If the operator upstream of filesort already provides a temporary table, no extra bridging operator is needed; for example with window + sort, the window operator supplies its own tmp table, so the sort needs no additional temporary table.
[2]: See AggregateIterator::Read — aggregate function results are written into a separate group buffer, not into record[0] of the scanned table.
mysql> desc format=tree select c, count(1) as cnt from t2 force index(idx_c) group by c order by cnt ;
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Sort: cnt
    -> Stream results (cost=1.6 rows=1.73) // streaming operator added: the new temporary table
-> Group aggregate: count(1) (cost=1.6 rows=1.73)
            -> Covering index scan on t2 using idx_c (cost=1.3 rows=3) // the index already provides the order, so no filesort is needed
Above we introduced how all the non-SPJ operators execute; in this section we introduce the bridging operators.
A bridging operator binds the previous operator's output to a TABLE object for downstream operators to consume; in practice this means the rows are written into a new temporary table.
Code: StreamingIterator. The implementation simply evaluates the expressions and writes the results into a new tuple buffer.
Code: MaterializeIterator. Much like the streaming case, except that the temporary table's data must be fully materialized (and may spill to disk) instead of being streamed. For example, if the downstream filesort needs to sort on a BLOB column, the bridging operator becomes a materialize.
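The difference between the two bridges boils down to whether rows are forwarded one at a time or fully buffered before the downstream reads. A rough sketch under hypothetical types (not the real StreamingIterator/MaterializeIterator interfaces):

```cpp
#include <functional>
#include <optional>
#include <vector>

// Hypothetical sketch of the two bridge flavors. A "row" here is just an int;
// Eval stands for "evaluate the upstream expressions into a new tuple buffer".
using Source = std::function<std::optional<int>()>;
using Eval = std::function<int(int)>;

// Streaming bridge: evaluate and hand over each row as it arrives.
struct StreamBridge {
  Source src;
  Eval eval;
  std::optional<int> Read() {
    if (auto row = src()) return eval(*row);
    return std::nullopt;
  }
};

// Materializing bridge: drain the child into a buffer first (in MySQL this
// may spill to disk), then serve reads from the buffer.
struct MaterializeBridge {
  Source src;
  Eval eval;
  std::vector<int> buffer;
  size_t pos = 0;
  bool done = false;
  std::optional<int> Read() {
    if (!done) {  // the first Read() materializes everything
      while (auto row = src()) buffer.push_back(eval(*row));
      done = true;
    }
    if (pos >= buffer.size()) return std::nullopt;
    return buffer[pos++];
  }
};
```

Streaming keeps the pipeline lazy; materialization pays the full buffering cost up front, which is why fewer bridges (and streaming ones where possible) is better.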
This is not closely related to the non-SPJ execution flow itself; we just record the in-memory state beforehand. We use the following SQL as the debugging example:
mysql> desc format=tree SELECT a*a, sum(b)+1, count(1)+1 as cnt FROM t1
-> GROUP BY a ORDER BY cnt DESC;
+----------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+----------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Sort: cnt DESC
-> Table scan on <temporary>
-> Aggregate using temporary table
-> Table scan on t1 (cost=0.85 rows=6)
After optimize groupby / optimize order, the in-memory state of the query block's fields and base_ref_items looks like this:
Printing every item object in fields with gdb: the ORDER BY and GROUP BY items are double pointers that point at slots of base_ref_items. What base_ref_items currently holds is the initial slice 0, i.e. fields.
`--$m1 (Query_block *) 0x7fdf0817f398 tables: ($n0)t1 t1 <-
(gdb) p $m1->fields
$11 = mem_root_deque<Item*> = {$o0 = (Item *) 0x7fdf081897d0, $o1 = (Item *) 0x7fdf08180760, $o2 = (Item *) 0x7fdf08180118, $o3 = (Item *) 0x7fdf0817f928, $o4 = (Item *) 0x7fdf081803f0, $o5 = (Item *) 0x7fdf08180990}
(gdb) my e $o0
$p0 (Item_field *) 0x7fdf081897d0 field = t1.a
(gdb) my e $o1
$q0 (Item_sum_count *) 0x7fdf08180760
`--$q1 (Item_int *) 0x7fdf081805d8 value = 1
(gdb) my e $o2
$r0 (Item_sum_sum *) 0x7fdf08180118
`--$r1 (Item_field *) 0x7fdf081887a0 field = t1.b
(gdb) my e $o3
$s0 (Item_func_mul *) 0x7fdf0817f928
|--$s1 (Item_field *) 0x7fdf081884b0 field = t1.a
`--$s2 (Item_field *) 0x7fdf08188620 field = t1.a
(gdb) my e $o4
$t0 (Item_func_plus *) 0x7fdf081803f0
|--$t1 (Item_aggregate_ref *) 0x7fdf0818ac48 field =
| `--$t2 (Item_sum_sum *) 0x7fdf08180118
| `--$t3 (Item_field *) 0x7fdf081887a0 field = t1.b
`--$t4 (Item_int *) 0x7fdf08180328 value = 1
(gdb) my e $o5
$u0 (Item_func_plus *) 0x7fdf08180990
|--$u1 (Item_aggregate_ref *) 0x7fdf0818ad68 field =
| `--$u2 (Item_sum_count *) 0x7fdf08180760
| `--$u3 (Item_int *) 0x7fdf081805d8 value = 1
`--$u4 (Item_int *) 0x7fdf081808c8 value = 1
//groupby
(gdb) my e $v1->group_list.first->item
$x0 (Item_field *) 0x7fdf081897d0 field = t1.a
//orderby
(gdb) my e * $v1->order_list.first->item
$y0 (Item_func_plus *) 0x7fdf08180990
|--$y1 (Item_aggregate_ref *) 0x7fdf0818ad68 field =
| `--$y2 (Item_sum_count *) 0x7fdf08180760
| `--$y3 (Item_int *) 0x7fdf081805d8 value = 1
`--$y4 (Item_int *) 0x7fdf081808c8 value = 1
Because of the filesort operator's requirement on its input (bound to a TABLE object) and the limitation of the streaming AGG operator's output (aggregate values not bound to a table), we add a bridging temporary table as needed. BTW, the logic of MySQL's native make_tmp_tables_info is appallingly tangled, completely at odds with low-coupling/high-cohesion design; reverse-engineering the original design intent from it is extremely hard — it is a textbook pile of legacy spaghetti code.
The fewer of these bridging operators the better. MySQL currently adds at most one or two extra temporary tables, which brings costs like the following:
mysql> desc format=tree select distinct sum(b) from t2 group by a with rollup having count(1)+1 > 10;
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Table scan on <temporary> (cost=5..5 rows=0)
    -> Temporary table with deduplication (cost=2.5..2.5 rows=0) # materialized deduplication implements DISTINCT
-> Table scan on <temporary> (cost=2.5..2.5 rows=0)
            -> Temporary table (cost=0..0 rows=0) # bridging materialization, binding output to a TABLE object
-> Filter: ((rollup_sum_switcher(count(1)) + 1) > 10)
-> Group aggregate with rollup: count(1), sum(t2.b) (cost=0.85 rows=2.73)
-> Sort: a (cost=0.55 rows=3)
-> Table scan on t2 (cost=0.55 rows=3)
By contrast, the hypergraph optimizer adds a streaming operator, avoiding materialization to disk; its actual execution is also faster than the old optimizer's:
mysql> set optimizer_switch='hypergraph_optimizer=on';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> desc select distinct sum(b) from t2 group by a with rollup having count(1)+1 > 10;
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Sort with duplicate removal: `sum(b)` (cost=2.24..2.24 rows=1.65) # filesort implements DISTINCT
    -> Stream results (cost=0.576..1.57 rows=2.73) # streaming bridge, binding output to a TABLE object
-> Filter: ((rollup_sum_switcher(count(1)) + 1) > 10) (cost=0.576..1.57 rows=2.73)
-> Group aggregate with rollup: count(1), sum(t2.b) (cost=0.476..1.3 rows=2.73)
-> Index scan on t2 using idx_a (cost=0.333..1 rows=3)
The ideal plan needs only a single deduplicating materialization to implement DISTINCT; with a modified MySQL this is achievable:
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Table scan on <temporary> (cost=5..5 rows=0)
    -> Temporary table with deduplication (cost=2.5..2.5 rows=0) # materialized dedup implements DISTINCT; aggregate results in the group buffer are materialized directly
-> Filter: ((rollup_sum_switcher(count(1)) + 1) > 10)
-> Group aggregate with rollup: count(1), sum(t2.b) (cost=0.85 rows=2.73)
-> Index scan on t2 using idx_a (cost=0.333..1 rows=3)
In the example SQL, the GROUP BY operator chose tmptable grouping, and ORDER BY is physically implemented with filesort. tmptable grouping creates an extra temporary table. Each temporary table represents a new stage and allocates a new slice holding that stage's fields — in the example, the REF_SLICE_TMP1 slice (for the first temporary table). The items stored in that slice are the temporary table's fields plus the expressions the previous stage has not yet computed. The memory state of REF_SLICE_TMP1 in this example is as follows (Tmp denotes the temporary table):
REF_SLICE_ACTIVE always points at the base_ref_items array, which at this moment holds the fields printed with gdb above. The REF_SLICE_TMP1 slice is built from the previous stage. This temporary table completes the aggregate computation, storing t1.a, count(1), sum(t1.b), and t1.a*t1.a into the temporary table. Building REF_SLICE_TMP1 follows a fixed set of rules.
Every newly created temporary table starts a new stage. The join operator is stage 0, i.e. the initial fields, which are saved to REF_SLICE_SAVED_BASE right at the beginning. Non-SPJ operators may build multiple temporary tables, so the stages are chained one after another.
This is done in JOIN::create_root_access_path_for_join.
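The stage bookkeeping can be sketched as follows (a hypothetical model; MySQL keeps the real slices in JOIN::ref_items, indexed by constants such as REF_SLICE_ACTIVE, REF_SLICE_SAVED_BASE, and REF_SLICE_TMP1):

```cpp
#include <string>
#include <vector>

// Hypothetical stage bookkeeping. Slice 0 is always the active one; the
// initial fields are saved to SAVED_BASE before any temp table is created,
// and each new temporary table registers one more slice.
enum { SLICE_ACTIVE = 0, SLICE_SAVED_BASE = 1 };

struct Slices {
  std::vector<std::vector<std::string>> ref_items;
  explicit Slices(std::vector<std::string> fields)
      : ref_items{fields, fields} {}  // active and saved base start identical

  // Creating a temp table starts a new stage: its slice holds the temp
  // table's fields plus expressions not yet computed by earlier stages.
  size_t new_stage(std::vector<std::string> stage_fields) {
    ref_items.push_back(std::move(stage_fields));
    return ref_items.size() - 1;
  }

  // Switching to a stage's slice makes downstream operators see its items.
  void set_ref_item_slice(size_t n) { ref_items[SLICE_ACTIVE] = ref_items[n]; }
};
```

The saved base slice is what lets MySQL restore the original fields later, while each temp-table slice presents the rewritten expressions of its stage.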
The slice-creation step above also showed how the previous stage's data is written into the current temporary table's tuple buffer, and how the slice is switched after writing so that the next stage reads it and sees the correct expressions. Now let's walk through the debug output for the example. TemptableAggregateIterator: the input slice is "saved" (the original fields), the output slice is 1.
(gdb) p *param->items_to_copy
$205 = Func_ptr_array = {$ih0 = (Func_ptr) {m_func = 0x7fdef80c6028, m_result_field = 0x7fdef80cc130, m_result_item = 0x0, m_func_bits = 33}, $ih1 = (Func_ptr) {m_func = 0x7fdef80c9028, m_result_field = 0x7fdef80cbe68, m_result_item = 0x0, m_func_bits = 289}}
(gdb) my e $ih0.m_func
$ii0 (Item_func_mul *) 0x7fdef80c6028
|--$ii1 (Item_field *) 0x7fdef80c7d08 field = t1.a
`--$ii2 (Item_field *) 0x7fdef80c7e78 field = t1.a
(gdb) my e $ih0.m_result_field
$ik0 (Field_longlong *) 0x7fdef80cc130 field = <temporary>.a*a // a*a is written into tmp.`a*a`
(gdb) my e $ih1.m_func
$il0 (Item_field *) 0x7fdef80c9028 field = t1.a
(gdb) my e $ih1.m_result_field
$ij0 (Field_long *) 0x7fdef80cbe68 field = <temporary>.a // t1.a is written into tmp.a
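Each entry in items_to_copy pairs a source expression with a destination column: for every produced row, evaluate m_func and store the result into m_result_field of the temp table. A minimal analog of this copy step (hypothetical names, not MySQL's actual Func_ptr class):

```cpp
#include <functional>
#include <vector>

// Hypothetical Func_ptr analog: evaluate func, store into result_field,
// i.e. into the temporary table's tuple buffer.
struct FuncPtr {
  std::function<int()> func;  // source expression, e.g. t1.a * t1.a
  int *result_field;          // destination column in the temp-table record
};

// copy_funcs analog: run once per produced temp-table row.
void copy_funcs(const std::vector<FuncPtr> &items) {
  for (const auto &f : items) *f.result_field = f.func();
}
```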
Because this is the volcano model, we read one row from the temporary table and switch to the current slice, after which the not-yet-evaluated expressions are ready to be computed:
int TemptableAggregateIterator<Profiler>::Read() {
/*
Enable the items which one should use if one wants to evaluate
anything (e.g. functions in WHERE, HAVING) involving columns of this
table.
*/
if (m_join != nullptr && m_ref_slice != -1) {
if (!m_join->ref_items[m_ref_slice].is_null()) {
m_join->set_ref_item_slice(m_ref_slice);
}
}
  int err = m_table_iterator->Read();
  // ... (error handling elided)
  return err;
}
(gdb) p m_join->current_ref_item_slice
$210 = 1 // current slice = 1
(gdb) my e m_join->ref_items[m_ref_slice].m_array[1]
$jc0 (Item_func_plus *) 0x7fdef80c6af0 // this unevaluated expr has become <temporary>.`sum(t1.b)` + 1
|--$jc1 (Item_aggregate_ref *) 0x7fdef80ca6c8 field =
| `--$jc2 (Item_field *) 0x7fdef80ccd08 field = <temporary>.sum(t1.b)
`--$jc3 (Item_int *) 0x7fdef80c6a28 value = 1
(gdb) my e m_join->ref_items[m_ref_slice].m_array[2] // this unevaluated expr has become <temporary>.`count(1)` + 1
$jd0 (Item_func_plus *) 0x7fdef80c7090
|--$jd1 (Item_aggregate_ref *) 0x7fdef80ca7e8 field =
| `--$jd2 (Item_field *) 0x7fdef80ccb98 field = <temporary>.count(1)
`--$jd3 (Item_int *) 0x7fdef80c6fc8 value = 1
By the time execution reaches the filesort operator, the cnt expression can be evaluated:
$jm0 (Item_func_plus *) 0x7fdef80c7090
|--$jm1 (Item_aggregate_ref *) 0x7fdef80ca7e8 field =
| `--$jm2 (Item_field *) 0x7fdef80ccb98 field = <temporary>.count(1)
`--$jm3 (Item_int *) 0x7fdef80c6fc8 value = 1
(gdb) p item->val_int()
$215 = 4
(gdb) f
#0 (anonymous namespace)::make_sortkey_from_item (item=0x7fdef80c7090, result_type=INT_RESULT, dst_length=std::optional<unsigned long> = {...}, tmp_buffer=0x7fdef80cd088, to=0x7fdef80d0d80 '\217' <repeats 200 times>...,
to_end=0x7fdef80d8d80 "\217\217\217\217\217\217\217\217\021\210", maybe_null=0x7fdfe45f0347, hash=0x7fdfe45f0348) at /home/jacob.cj/mysql-bld/sql/filesort.cc:1314
Due to length constraints, we won't expand further here; readers can debug with the SQL provided above. Note that MySQL's GROUP BY + ROLLUP is always streaming aggregation — rollup is only possible when the input is ordered.
mysql> desc format=tree SELECT a*a, sum(b)+1, count(1)+1 as cnt FROM t1 GROUP BY a with rollup ORDER BY cnt DESC;
| -> Sort: cnt DESC
-> Stream results (cost=1.45 rows=3.45)
-> Group aggregate with rollup: count(1), sum(t1.b) (cost=1.45 rows=3.45)
-> Sort: a (cost=0.85 rows=6)
-> Table scan on t1 (cost=0.85 rows=6)
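To see why ordered input is mandatory, consider a streaming rollup: it closes a group exactly when the group key changes and emits the super-aggregate row at the end, so unsorted input would split groups apart. A minimal sketch with a single grouping column and COUNT only (hypothetical code, not MySQL's rollup implementation):

```cpp
#include <optional>
#include <utility>
#include <vector>

// Hypothetical streaming GROUP BY ... WITH ROLLUP over a pre-sorted column:
// emits (key, count) per group plus a final (nullopt, total) rollup row.
// Requires sorted input -- a key change means the previous group is closed.
std::vector<std::pair<std::optional<int>, int>>
rollup_count(const std::vector<int> &sorted_keys) {
  std::vector<std::pair<std::optional<int>, int>> out;
  int group_cnt = 0, total = 0;
  for (size_t i = 0; i < sorted_keys.size(); ++i) {
    ++group_cnt;
    ++total;
    bool last = (i + 1 == sorted_keys.size());
    if (last || sorted_keys[i + 1] != sorted_keys[i]) {
      out.push_back({sorted_keys[i], group_cnt});  // close the current group
      group_cnt = 0;
    }
  }
  if (!sorted_keys.empty()) out.push_back({std::nullopt, total});  // rollup row
  return out;
}
```

This is why the plan above sorts on `a` below the rollup aggregate: the Sort supplies the order that streaming rollup depends on.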
This article focused on the execution flow of non-SPJ operators, covering the auxiliary structures built in JOIN::optimize, iterator construction, and the concrete operator implementations. The code involved is extensive and the modules are tightly coupled; it has to be read end to end as a whole — studying individual modules in isolation is very confusing, seeing the trees but missing the forest. Once the overall design is understood, the source becomes much easier to follow. PolarDB-MySQL also supports multi-stage parallel execution for these complex operators, which can greatly accelerate queries — you are welcome to try it.