
SPA Cursor Capture: Removing Duplicates

Published 2025/5/31 15:25:15
When we run a database upgrade project, we normally perform performance regression testing. Put simply, we take the SQL statements from the 10g production database and run them on the 11g environment; any statement whose execution plan changes because of optimizer or instance-parameter differences, and whose performance consequently regresses, has to be pulled out for separate analysis and verification. To do this, we first need to capture the statements from the 10g system, and there are several capture methods.
The capture sources are: the cursor cache, AWR snapshots, AWR baselines, another SQL tuning set, and (on 11g+) a 10046 trace file. For a large production database we usually capture from the cursor cache and from AWR snapshots. To catch every SQL statement, we often have to sample the cursor cache several times a day. Most recommendations are to capture during peak hours, mainly to prevent some SQL from being purged out of the shared pool before it has been captured into the sqlset. One nagging problem during this capture is literal SQL. For example:
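The walkthrough below uses the cursor-cache path; for reference, loading from AWR snapshots follows the same pattern via dbms_sqltune.select_workload_repository. A minimal sketch, assuming a sqlset named 'sqlset1' already exists; the snapshot range (100, 110) is a placeholder to be replaced with real snap_ids from dba_hist_snapshot:

```sql
-- Sketch: load statements recorded between two AWR snapshots into the
-- sqlset. Snapshot ids 100/110 are placeholders.
declare
  mycur dbms_sqltune.sqlset_cursor;
begin
  open mycur for
    select value(p)
      from table(dbms_sqltune.select_workload_repository(
                   100, 110,               -- begin_snap, end_snap (placeholders)
                   'parsing_schema_name in (''SCOTT'')',
                   null, null, null, null, 1, null, 'all')) p;
  dbms_sqltune.load_sqlset(sqlset_name     => 'sqlset1',
                           populate_cursor => mycur,
                           load_option     => 'merge');
  close mycur;
end;
/
```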
select * from emp where empno=1456;
select * from emp where empno=1457;
select * from emp where empno=1458;
These three statements all get captured, one after another. Capturing like this every day makes the sqlset result set grow and grow. Normally a large production database has only a few tens of thousands of distinct SQL statements, but if you have a lot of hard parsing, the captured statements can exceed one million within just a few days. The subsequent step of converting the sqlset into a staging table then runs for a very long time and can easily fail with ORA-01555, so after running for a while the conversion never succeeds. I have been burned by this several times. You might think Oracle should convert a mere one million rows quickly, but let me explain: the staging table actually contains several LOB columns and special object types, and once the data volume grows, the speed is simply unacceptable. For exactly this reason, we need a way to remove duplicates during capture.
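The "conversion into a staging table" mentioned above is done with the DBMS_SQLTUNE staging APIs. A minimal sketch of that step, with illustrative table and schema names (the staging table must live in a non-SYS schema):

```sql
-- Sketch: pack the sqlset into a staging table so it can be moved
-- (e.g. via datapump) to the 11g system. Names are illustrative.
begin
  dbms_sqltune.create_stgtab_sqlset(table_name  => 'SQLSET_STG',
                                    schema_name => 'SCOTT');
  dbms_sqltune.pack_stgtab_sqlset(sqlset_name          => 'sqlset1',
                                  sqlset_owner         => 'SYS',
                                  staging_table_name   => 'SQLSET_STG',
                                  staging_schema_owner => 'SCOTT');
end;
/
```

On the target system the reverse step is dbms_sqltune.unpack_stgtab_sqlset against the imported staging table.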
Let's walk through an example.
1. Create the sqlset
sql> exec dbms_sqltune.create_sqlset('sqlset1');

pl/sql procedure successfully completed.

sql> select * from dba_sqlset;

        id name            owner           description                    created   last_modi statement_count
---------- --------------- --------------- ------------------------------ --------- --------- ---------------
         1 sqlset1         sys                                            11-may-14 11-may-14               0
2. As the scott user, run a few statements, flushing the shared pool first
sql> alter system flush shared_pool;

system altered.

connect scott/tiger
select * from emp;
select * from emp where empno=1456;
select * from emp where empno=1457;
3. As the sys user, capture the statements
declare
  mycur dbms_sqltune.sqlset_cursor;
begin
  open mycur for
    select value(p)
      from table(dbms_sqltune.select_cursor_cache(
                   'parsing_schema_name in (''scott'')',
                   null, null, null, null, 1, null, 'all')) p;
  dbms_sqltune.load_sqlset(sqlset_name     => 'sqlset1',
                           populate_cursor => mycur,
                           load_option     => 'merge');
  close mycur;
end;
/

sql> select * from dba_sqlset;

        id name            owner           description                    created   last_modi statement_count
---------- --------------- --------------- ------------------------------ --------- --------- ---------------
         1 sqlset1         sys                                            11-may-14 11-may-14               9
4. Check the capture results
sql> select sql_id, sql_text from dba_sqlset_statements;

sql_id        sql_text
------------- --------------------------------------------------------------------------------
1srhq04p4x0zz select /* opt_dyn_samp */ /*+ all_rows ignore_where_clause no_parallel(samplesub
38mhtu5pc7d07 select * from emp where empno=1456
7hys3h7ysgf9m select attribute,scope,numeric_value,char_value,date_value from system.product_p
a2dk8bdn0ujx7 select * from emp
bc26hcc8td76f select * from emp where empno=1457
cw6vxf0kbz3v1 select char_value from system.product_privs where (upper('sql*plus') like uppe
d6vwqbw6r2ffk select user from dual
dyk4dprp70d74 select decode('a','a','1','2') from dual
g4y6nw3tts7cc begin dbms_application_info.set_module(:1,null); end;
Here we can see that all three of our statements were captured, including the literal SQL. If we capture the cursor cache several times a day, more and more literal SQL accumulates and the sqlset result set becomes very large; once the SQL count reaches the millions, the conversion becomes extremely slow. So how do we deduplicate? Let's look at the structure of dba_sqlset_statements.
sql> desc dba_sqlset_statements
 name                                        null?    type
 ------------------------------------------- -------- -----------------------------
 sqlset_name                                 not null varchar2(30)
 sqlset_owner                                         varchar2(30)
 sqlset_id                                   not null number
 sql_id                                      not null varchar2(13)
 force_matching_signature                    not null number
 sql_text                                             clob
 parsing_schema_name                                  varchar2(30)
 parsing_schema_id                                    number
 plan_hash_value                             not null number
 bind_data                                            raw(2000)
 binds_captured                                       char(1)
 module                                               varchar2(64)
 action                                               varchar2(64)
 elapsed_time                                         number
 cpu_time                                             number
 buffer_gets                                          number
 disk_reads                                           number
 direct_writes                                        number
 rows_processed                                       number
 fetches                                              number
 executions                                           number
 end_of_fetch_count                                   number
 optimizer_cost                                       number
 optimizer_env                                        raw(2000)
 priority                                             number
 command_type                                         number
 first_load_time                                      varchar2(19)
 stat_period                                          number
 active_stat_period                                   number
 other                                                clob
 plan_timestamp                                       date
 sql_seq                                     not null number

sql> select sql_id, sql_text, force_matching_signature from dba_sqlset_statements;

sql_id        sql_text                                                                             force_matching_signature
------------- -------------------------------------------------------------------------------- ---------------------------
1srhq04p4x0zz select /* opt_dyn_samp */ /*+ all_rows ignore_where_clause no_parallel(samplesub         4094562552765466770
38mhtu5pc7d07 select * from emp where empno=1456                                                      16946033956547040230
7hys3h7ysgf9m select attribute,scope,numeric_value,char_value,date_value from system.product_p        10967007256268736959
a2dk8bdn0ujx7 select * from emp                                                                        7001777653489406494
bc26hcc8td76f select * from emp where empno=1457                                                      16946033956547040230
cw6vxf0kbz3v1 select char_value from system.product_privs where (upper('sql*plus') like uppe          18201431879876406267
d6vwqbw6r2ffk select user from dual                                                                   17376422952071979402
dyk4dprp70d74 select decode('a','a','1','2') from dual                                                 1846728577492307645
g4y6nw3tts7cc begin dbms_application_info.set_module(:1,null); end;                                                      0
The key here is the force_matching_signature column. Notice that our literal SQL statements share the same force_matching_signature value, here 16946033956547040230. So we take the distinct values of this column and store them in a custom table.
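force_matching_signature is the same signature exposed live in v$sql: statements that become identical once literals are forced to binds (as under cursor_sharing=force) share one value. As a side note, a query along these lines can be used to spot the worst literal-SQL families in the cursor cache before capturing; this is a sketch, not part of the original procedure:

```sql
-- Sketch: group the cursor cache by force_matching_signature to find
-- literal-SQL families with many distinct sql_ids.
select force_matching_signature,
       count(distinct sql_id) as versions,
       min(sql_text)          as sample_text
  from v$sql
 where force_matching_signature <> 0
 group by force_matching_signature
having count(distinct sql_id) > 10
 order by versions desc;
```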
5. Capture with deduplication
sql> create table spaqc as
  2  select distinct force_matching_signature from dba_sqlset_statements;

table created.

sql> select * from spaqc;

   force_matching_signature
---------------------------
       18201431879876406267
        1846728577492307645
        4094562552765466770
       17376422952071979402
       10967007256268736959
        7001777653489406494
       16946033956547040230
                          0

8 rows selected.
Note the case where force_matching_signature is 0: this generally comes from running PL/SQL blocks, jobs and the like, and we must not filter those out of the capture. So we delete the 0 row from our table.
sql> delete from spaqc where force_matching_signature=0;

1 row deleted.

sql> commit;

commit complete.
6. Test again to see whether literal SQL still gets captured
select * from emp where empno=1458;
select * from emp where empno=1459;
select * from emp where empno=1460;
select * from emp where empno=1460 and ename='scott';

declare
  mycur dbms_sqltune.sqlset_cursor;
begin
  open mycur for
    select value(p)
      from table(dbms_sqltune.select_cursor_cache(
                   'parsing_schema_name in (''scott'') and force_matching_signature not in (select force_matching_signature from spaqc)',
                   null, null, null, null, 1, null, 'all')) p;
  dbms_sqltune.load_sqlset(sqlset_name     => 'sqlset1',
                           populate_cursor => mycur,
                           load_option     => 'merge');
  close mycur;
end;
/

sql> select sql_id, sql_text, force_matching_signature from dba_sqlset_statements;

sql_id        sql_text                                                                             force_matching_signature
------------- -------------------------------------------------------------------------------- ---------------------------
1srhq04p4x0zz select /* opt_dyn_samp */ /*+ all_rows ignore_where_clause no_parallel(samplesub         4094562552765466770
38mhtu5pc7d07 select * from emp where empno=1456                                                      16946033956547040230
7hys3h7ysgf9m select attribute,scope,numeric_value,char_value,date_value from system.product_p        10967007256268736959
a2dk8bdn0ujx7 select * from emp                                                                        7001777653489406494
bc26hcc8td76f select * from emp where empno=1457                                                      16946033956547040230
cw6vxf0kbz3v1 select char_value from system.product_privs where (upper('sql*plus') like uppe          18201431879876406267
d6vwqbw6r2ffk select user from dual                                                                   17376422952071979402
d8fw5smyjva0b select * from emp where empno=1460 and ename='scott'                                    17445701640293030006
dyk4dprp70d74 select decode('a','a','1','2') from dual                                                 1846728577492307645
g4y6nw3tts7cc begin dbms_application_info.set_module(:1,null); end;                                                      0

10 rows selected.
Here we can see that the new literal SQL was not captured: we have achieved filtered cursor capture. (The statement with the extra ename predicate has a different force_matching_signature, so it is correctly captured as a new statement.)
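One detail to keep in mind for repeated runs: after each capture, spaqc needs to pick up the newly seen signatures, or the next run will capture the same literal-SQL families again. A sketch of that refresh, still excluding the 0 signature as above:

```sql
-- Sketch: after each capture, add any new non-zero signatures from the
-- sqlset into spaqc so the next capture skips them.
insert into spaqc (force_matching_signature)
  select distinct s.force_matching_signature
    from dba_sqlset_statements s
   where s.force_matching_signature <> 0
     and not exists (select 1
                       from spaqc q
                      where q.force_matching_signature = s.force_matching_signature);
commit;
```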
Original article: SPA游标采集之去除重复; thanks to the original author for sharing.