Updating or deleting massive amounts of historical data in a large table has always been a headache for DBAs. If the table is already partitioned, we can still speed up the UPDATE or DELETE with parallel execution or TRUNCATE PARTITION, but for an ordinary non-partitioned heap table there does not seem to be a good way to accelerate the operation: neither nologging nor parallel helps on a non-partitioned table.
I have previously described a method that uses rowid to split a non-partitioned table into a specified number of chunks; see <Script: partition table into rowid extent chunks>. That script returns the starting and ending rowid of each chunk, which we can then use in BETWEEN start_rowid AND end_rowid predicates to build multiple DML statements. Because the rows touched by these statements lie in non-overlapping ranges, the statements can be run concurrently from several terminals without causing lock contention or the overhead introduced by the Oracle Parallel Execution coordinator.
To make this concrete, let's try out the method described in <Script: partition table into rowid extent chunks>:
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
www.askmac.cn & www.askmac.cn

SQL> select count(*) from order_history;

  COUNT(*)
----------
   1137523

-- order_history is a non-partitioned table holding just over a million rows

SQL> select count(*) from order_history where order_id<1999999;

  COUNT(*)
----------
    499999
Suppose we want to delete all rows in order_history with order_id < 1999999, which amounts to nearly half of the rows in the table. Since order_history is not partitioned, Oracle Parallel Execution cannot be used to speed this up, so we use the script above to build multiple DML statements:
SQL> @rowid_chunk
Enter value for rowid_ranges: 24              ==> number of rowid chunks to generate
Enter value for segment_name: ORDER_HISTORY   ==> table name
Enter value for owner: SYS                    ==> owner name

where rowid between 'AAANJFAAEAAEXlBAAA' and 'AAANJFAAEAAEYjdCcP';
where rowid between 'AAANJFAAEAAEYjeAAA' and 'AAANJFAAEAAEZDdCcP';
where rowid between 'AAANJFAAEAAEZDeAAA' and 'AAANJFAAEAAEZhdCcP';
where rowid between 'AAANJFAAEAAEZheAAA' and 'AAANJFAAEAAEaBdCcP';
where rowid between 'AAANJFAAEAAEaBeAAA' and 'AAANJFAAEAAEahdCcP';
where rowid between 'AAANJFAAEAAEaheAAA' and 'AAANJFAAEAAEa3dCcP';
where rowid between 'AAANJFAAEAAEa3eAAA' and 'AAANJFAAEAAEbfdCcP';
where rowid between 'AAANJFAAEAAEbfeAAA' and 'AAANJFAAEAAEbzdCcP';
where rowid between 'AAANJFAAEAAEbzeAAA' and 'AAANJFAAEAAEcbdCcP';
where rowid between 'AAANJFAAEAAEcbeAAA' and 'AAANJFAAEAAEcvdCcP';
where rowid between 'AAANJFAAEAAEcveAAA' and 'AAANJFAAEAAEdXdCcP';
where rowid between 'AAANJFAAEAAEdXeAAA' and 'AAANJFAAEAAEdrdCcP';
where rowid between 'AAANJFAAEAAEdreAAA' and 'AAANJFAAEAAEeTdCcP';
where rowid between 'AAANJFAAEAAEeTeAAA' and 'AAANJFAAEAAEe5dCcP';
where rowid between 'AAANJFAAEAAEe5eAAA' and 'AAANJFAAEAAEfNdCcP';
where rowid between 'AAANJFAAEAAEfNeAAA' and 'AAANJFAAEAAEf1dCcP';
where rowid between 'AAANJFAAEAAEf1eAAA' and 'AAANJFAAEAAEgJdCcP';
where rowid between 'AAANJFAAEAAEgJeAAA' and 'AAANJFAAEAAEgxdCcP';
where rowid between 'AAANJFAAEAAEgxeAAA' and 'AAANJFAAEAAEhZdCcP';
where rowid between 'AAANJFAAEAAEhZeAAA' and 'AAANJFAAEAAEhtdCcP';
where rowid between 'AAANJFAAEAAEhteAAA' and 'AAANJFAAEAAEiVdCcP';
where rowid between 'AAANJFAAEAAEiVeAAA' and 'AAANJFAAEAAEi1dCcP';
where rowid between 'AAANJFAAEAAEi1eAAA' and 'AAANJFAAEAAEjVdCcP';
where rowid between 'AAANJFAAEAAEjVeAAA' and 'AAANJFAAEAAEjldCcP';
The script above produced 24 chunks delimited by rowid; in effect we have manually divided the non-partitioned table into 24 non-overlapping regions, with rowids as the boundaries. We can now use these conditions to build DML of the form:
UPDATE or DELETE ...
where rowid between 'start_rowid' and 'end_rowid' AND <other conditions>
For example, to delete all rows with order_id < 1999999 from the non-partitioned table, the unoptimized approach is a single statement:
DELETE FROM ORDER_HISTORY where order_id<1999999;
COMMIT;
In practice, deleting data this way on a very large table is neither ideal nor feasible, for several reasons:
1. A single SQL statement executes serially and is slow
2. A long-running statement may hit well-known errors such as ORA-1555
3. If it fails, the rollback can be a disaster
With the method described here, we can instead build multiple DML statements that delete in parallel, each one removing only a small portion of the data:
DELETE FROM ORDER_HISTORY where rowid between 'AAANJFAAEAAEXlBAAA' and 'AAANJFAAEAAEYjdCcP' and order_id<1999999;
COMMIT;
DELETE FROM ORDER_HISTORY where rowid between 'AAANJFAAEAAEYjeAAA' and 'AAANJFAAEAAEZDdCcP' and order_id<1999999;
COMMIT;
DELETE FROM ORDER_HISTORY where rowid between 'AAANJFAAEAAEZDeAAA' and 'AAANJFAAEAAEZhdCcP' and order_id<1999999;
COMMIT;
...
Depending on the degree of parallelism you want, split the DML statements above into several groups and run them simultaneously in multiple terminals (or submit them as background jobs, as sketched below).
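If you would rather not keep several terminals open, the same chunks can be submitted as background jobs. The block below is only a rough sketch of that idea using DBMS_JOB (available in 10.2) and is not part of the original script: the two rowid ranges are copied from the output above purely for illustration, and in practice you would generate one job per chunk and make sure job_queue_processes is set high enough.

DECLARE
  l_job   BINARY_INTEGER;
  TYPE t_stmts IS TABLE OF VARCHAR2(4000);
  -- two of the chunk DELETEs from above, hard-coded purely for illustration
  l_stmts t_stmts := t_stmts(
    'delete from order_history where rowid between ''AAANJFAAEAAEXlBAAA'' and ''AAANJFAAEAAEYjdCcP'' and order_id < 1999999; commit;',
    'delete from order_history where rowid between ''AAANJFAAEAAEYjeAAA'' and ''AAANJFAAEAAEZDdCcP'' and order_id < 1999999; commit;');
BEGIN
  FOR i IN 1 .. l_stmts.COUNT LOOP
    -- each job runs its own DELETE + COMMIT in a separate session
    DBMS_JOB.submit(job => l_job, what => l_stmts(i));
  END LOOP;
  COMMIT;   -- the submitted jobs only start after this commit
END;
/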
This approach has several advantages:
1. Parallelism is controlled manually by the user, avoiding the coordination overhead of Oracle Parallel Execution; used properly, it can be more efficient than adding a parallel hint or setting a degree of parallelism on the table.
2. Processing the data in small chunks avoids ORA-1555 errors
3. The user can increase or decrease the degree of parallelism dynamically according to host load and I/O
SQL> select count(*) from order_history where rowid between 'AAANJFAAEAAEXlBAAA' and 'AAANJFAAEAAEYjdCcP';

  COUNT(*)
----------
     84342

SQL> select count(*) from order_history where rowid between 'AAANJFAAEAAEYjeAAA' and 'AAANJFAAEAAEZDdCcP';

  COUNT(*)
----------
     49959

SQL> select count(*)
  2    from order_history
  3   where rowid between 'AAANJFAAEAAEXlBAAA' and 'AAANJFAAEAAEYjdCcP'
  4     and order_id < 1999999;

  COUNT(*)
----------
     84342

SQL> select count(*)
  2    from order_history
  3   where rowid between 'AAANJFAAEAAEYjeAAA' and 'AAANJFAAEAAEZDdCcP'
  4     and order_id < 1999999;

  COUNT(*)
----------
     49959

SQL> delete order_history
  2   where rowid between 'AAANJFAAEAAEXlBAAA' and 'AAANJFAAEAAEYjdCcP'
  3     and order_id < 1999999;

84342 rows deleted.

SQL> delete order_history
  2   where rowid between 'AAANJFAAEAAEYjeAAA' and 'AAANJFAAEAAEZDdCcP'
  3     and order_id < 1999999;

49959 rows deleted.

SQL> commit;

Commit complete.
However, this method still has a few shortcomings:
1. The <Script: partition table into rowid extent chunks> script does not currently support partitioned tables
2. Because that script splits the segment evenly by size into the specified number of ranges, consider what happens when the historical data to be updated or deleted is concentrated in certain parts of the segment (for example, all the rows to be deleted live in the first 200 extents of the table): since the ranges are carved purely by size, some of them will contain none of the data we care about, and the DML statements built from those ranges do nothing useful.
With these considerations in mind, I rewrote the SQL script that produces the rowid chunks, as follows:
REM put it in a GUI tool! otherwise it may raise ORA-00933
REM control commits yourself, to avoid ORA-1555

select 'and rowid between ''' || ora_rowid || ''' and ''' ||
       lead(ora_rowid, 1) over(order by rn asc) || '''' || ';'
  from (with cnt as (select count(*) from order_history)            -- replace here!!
         select rn, ora_rowid
           from (select rownum rn, ora_rowid
                   from (select rowid ora_rowid
                           from order_history                        -- replace here!!
                          order by rowid))
          where rn in
                (select (rownum - 1) * trunc((select * from cnt) / &row_range) + 1
                   from dba_tables
                  where rownum < &row_range                          -- enter the number of chunks
                 union
                 select * from cnt));

and rowid between AAANJFAAEAAEZELAAB and AAANJFAAEAAEaRaABm
and rowid between AAANJFAAEAAEaRaABm and AAANJFAAEAAEbi+ABu
and rowid between AAANJFAAEAAEbi+ABu and AAANJFAAEAAEc0iAB2
and rowid between AAANJFAAEAAEc0iAB2 and AAANJFAAEAAEeGHAAG
and rowid between AAANJFAAEAAEeGHAAG and AAANJFAAEAAEfVrAAO
and rowid between AAANJFAAEAAEfVrAAO and AAANJFAAEAAEgnPAAW
and rowid between AAANJFAAEAAEgnPAAW and AAANJFAAEAAEjQIAB2
and rowid between AAANJFAAEAAEjQIAB2 and
This script achieves the same rowid-chunking goal, but because the rowids are taken directly from a SELECT query, it has no trouble with partitioned tables or other complex objects. And since the rowids come from a SELECT, we can also restrict the chunking to only those rowid ranges that actually contain rows matching our condition.
For example, if we want rowid chunks covering only the rows that satisfy order_id < 1999999, simply substitute your own condition at the places marked "replace here":
select 'and rowid between ''' || ora_rowid || ''' and ''' ||
       lead(ora_rowid, 1) over(order by rn asc) || '''' || ';'
  from (with cnt as (select count(*)
                       from order_history
                      where order_id < 1999999)                      -- replace here
         select rn, ora_rowid
           from (select rownum rn, ora_rowid
                   from (select rowid ora_rowid
                           from order_history
                          where order_id < 1999999                   -- replace here
                          order by rowid))
          where rn in
                (select (rownum - 1) * trunc((select * from cnt) / &row_range) + 1
                   from dba_tables
                  where rownum < &row_range
                 union
                 select * from cnt));

and rowid between 'AAANJFAAEAAEZELAAB' and 'AAANJFAAEAAEZf7ABd';
and rowid between 'AAANJFAAEAAEZf7ABd' and 'AAANJFAAEAAEZ9uABB';
and rowid between 'AAANJFAAEAAEZ9uABB' and 'AAANJFAAEAAEaXhAB2';
and rowid between 'AAANJFAAEAAEaXhAB2' and 'AAANJFAAEAAEa3dABW';
and rowid between 'AAANJFAAEAAEa3dABW' and 'AAANJFAAEAAEbTVAA2';
and rowid between 'AAANJFAAEAAEbTVAA2' and 'AAANJFAAEAAEbzRAAW';
and rowid between 'AAANJFAAEAAEbzRAAW' and 'AAANJFAAEAAEcyIAB2';
and rowid between 'AAANJFAAEAAEcyIAB2' and '';

SQL> select order_id from order_history where rowid='AAANJFAAEAAEZELAAB';

  ORDER_ID
----------
    538672

SQL> select order_id from order_history where rowid='AAANJFAAEAAEcyIAB2';

  ORDER_ID
----------
   1994752

DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEZELAAB' and 'AAANJFAAEAAEZf7ABd';
DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEZf7ABd' and 'AAANJFAAEAAEZ9uABB';
DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEZ9uABB' and 'AAANJFAAEAAEaXhAB2';
DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEaXhAB2' and 'AAANJFAAEAAEa3dABW';
DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEa3dABW' and 'AAANJFAAEAAEbTVAA2';
DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEbTVAA2' and 'AAANJFAAEAAEbzRAAW';
DELETE ORDER_HISTORY WHERE ORDER_ID<1999999 and rowid between 'AAANJFAAEAAEbzRAAW' and 'AAANJFAAEAAEcyIAB2';

45713 rows deleted.
SQL>
45712 rows deleted.
SQL>
45712 rows deleted.
SQL>
45712 rows deleted.
SQL>
45712 rows deleted.
SQL>
45712 rows deleted.
SQL>
91425 rows deleted.

SQL> commit;

Commit complete.

SQL> select count(*) from ORDER_HISTORY WHERE ORDER_ID<1999999;

  COUNT(*)
----------
         0
A few notes:
1. Run the script from a tool such as PL/SQL Developer or Toad; running it in SQL*Plus may raise ORA-00933
2. Do not forget to substitute your own conditions at the places marked "replace here"
3. Control the commits yourself to avoid ORA-1555 errors
One remaining weakness of this script is that collecting the rowid chunks needs a suitable index on the large table; without one, the query can be very slow because of the full table scan plus sort, whereas with a suitable index it can use an INDEX FAST FULL SCAN. By "suitable index" I mean an ordinary B*tree index with at least one NOT NULL column; the best cases are a primary key index or a bitmap index.
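If you want to verify in advance that such an index exists, a quick look at the data dictionary is enough. The query below is only a sketch of that check (it is not part of the original script), assuming the SYS.ORDER_HISTORY table used throughout this article; adjust the owner and table name to match your own case.

-- List normal B*tree indexes on ORDER_HISTORY that include at least one NOT NULL column;
-- per the note above, any of these should let the rowid-collecting query use an INDEX FAST FULL SCAN.
SELECT i.index_name, c.column_name
  FROM dba_indexes i
  JOIN dba_ind_columns c
    ON c.index_owner = i.owner
   AND c.index_name  = i.index_name
  JOIN dba_tab_columns t
    ON t.owner       = i.table_owner
   AND t.table_name  = i.table_name
   AND t.column_name = c.column_name
 WHERE i.table_owner = 'SYS'
   AND i.table_name  = 'ORDER_HISTORY'
   AND i.index_type  = 'NORMAL'
   AND t.nullable    = 'N';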
In release 11.2 Oracle introduced the new DBMS_PARALLEL_EXECUTE feature to help update very large tables; the note <11.2 New Feature : Using DBMS_PARALLEL_EXECUTE to Update Large Tables in Parallel [ID 1066555.1]> describes it.
A highlight of this DBMS_PARALLEL_EXECUTE feature is that it can group the rows of a table into smaller chunks and then update those chunks in parallel:
dbmspexe.sql – DBMS Parallel EXEcute package
This package contains APIs to chunk a table into smaller units and execute those chunks in parallel.
The DBMS_PARALLEL_EXECUTE package enables you to incrementally update the data in a large table in parallel, in two high-level steps:
- Group sets of rows in the table into smaller chunks.
- Apply the desired UPDATE statement to the chunks in parallel, committing each time you have finished processing a chunk.
This technique is recommended whenever you are updating a lot of data. Its advantages are:
- You lock only one set of rows at a time, for a relatively short time, instead of locking the entire table.
- You do not lose work that has been done if something fails before the entire operation finishes.
- You reduce rollback space consumption.
- You improve performance.
DBMS_PARALLEL_EXECUTE can split the data into chunks in three ways:
Different Ways to Split Workload
- CREATE_CHUNKS_BY_NUMBER_COL : Chunks the table associated with the given task by the specified column.
- CREATE_CHUNKS_BY_ROWID : Chunks the table associated with the given task by ROWID
- CREATE_CHUNKS_BY_SQL : Chunks the table associated with the given task by means of a user-provided SELECT statement
Among them is CREATE_CHUNKS_BY_ROWID, which chunks the table by ROWID, very much like the manual approach above; a minimal sketch of its use follows.
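The anonymous block below is a minimal sketch of that workflow on 11.2, written against the SYS.ORDER_HISTORY example from this article; the task name, chunk size, and parallel level are arbitrary values chosen for illustration, not anything prescribed by the package.

DECLARE
  l_task VARCHAR2(30) := 'purge_order_history';   -- hypothetical task name
  l_sql  VARCHAR2(1000);
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => l_task);

  -- chunk the table by rowid, roughly 10000 rows per chunk
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name   => l_task,
                                               table_owner => 'SYS',
                                               table_name  => 'ORDER_HISTORY',
                                               by_row      => TRUE,
                                               chunk_size  => 10000);

  -- :start_id and :end_id are bound to each chunk's rowid range by the package
  l_sql := 'DELETE FROM order_history
             WHERE rowid BETWEEN :start_id AND :end_id
               AND order_id < 1999999';

  -- run the chunks with 4 scheduler jobs; each chunk is committed as it completes
  DBMS_PARALLEL_EXECUTE.run_task(task_name      => l_task,
                                 sql_stmt       => l_sql,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.drop_task(task_name => l_task);
END;
/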
:) This is a feature that arrived a little late (only in 11.2), so late that we had already worked out several solutions of our own based on the same principle. Looked at from another angle, user requirements are always right, and it is the mass of users that keeps driving the giant that is Oracle!
A great method, helpful in practice.
Nice. I happened to run into exactly this kind of large-table update problem today and will try the method you describe; before this all I could think of was parallel and nologging. Thanks!
Hello
It’s great! The new feature is very useful for huge database system.
Please look through the following link with details about using the DBMS_PARALLEL_EXECUTE package.
http://www.oracle-base.com/articles/11g/dbms_parallel_execute_11gR2.php
Regards, chengwill.
A question for maclean:
In the SQL script that produces the rowid chunks, what is the cnt in "select * from cnt"??
Also, can this script actually run?? Heh, why does the output afterwards consist only of a bunch of "and rowid between" lines?
My friend, clearly you did not read this post carefully.