Suppose we have two tables: T_HUGE with 1'000'000 rows and T_TINY with only 100 rows:
recreate table t_huge(id int primary key using index t_huge_pk);
recreate table t_tiny(id int primary key using index t_tiny_pk);
commit;
set term ^;
execute block as
declare n int = 1000000;
declare n0 int;
declare q int = 100;
begin
n0=n;
while (n>0) do insert into t_huge values(:n) returning :n-1 into :n;
while (q>0) do insert into t_tiny values(rand()*:n0) returning :q-1 into :q;
end^
set term ;^
commit;
set statistics index t_huge_pk;
set statistics index t_tiny_pk;
commit;
If we have to delete all records from T_HUGE that are also present in T_TINY (i.e. they have equal IDs), we could use a statement like this:
delete from t_huge t where exists(select * from t_tiny s where t.id=s.id)
But this variant has a poor plan:
PLAN (S INDEX (T_TINY_PK))
PLAN (T NATURAL)
It also shows bad performance data:
100 record(s) was(were) deleted from T_HUGE
Execute time = 6s 94ms
...
Memory buffers = 1 024
Reads from disk to cache = 12 351
Writes from cache to disk = 89
Fetches from cache = 4 025 225
Table T_HUGE is *fully* scanned (NIR = 1'000'000). The optimizer ignores the cardinality of T_TINY and selects T_HUGE as the driving table. I think this is an obvious mistake.
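For comparison, the same logic can also be written with an IN subquery. This is only a sketch; whether the optimizer picks T_TINY as the driving table here depends on the engine version, but with the statistics above a plan driven by T_TINY would need roughly 100 indexed lookups via T_HUGE_PK instead of a 1'000'000-row natural scan:

delete from t_huge
where id in (select id from t_tiny);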
Compare with another approach, using PSQL instead of a single SQL statement:
execute block as
declare v_id type of column t_tiny.id;
begin
for
select id
from t_tiny
into v_id
do
delete from t_huge where id=:v_id;
end
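If this block is executed in isql, it needs the same SET TERM wrapping as the setup script above; a minimal sketch:

set term ^;
execute block as
declare v_id type of column t_tiny.id;
begin
  for select id from t_tiny into v_id do
    delete from t_huge where id = :v_id;
end^
set term ;^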
Performance data for this approach:
Execute time = 125ms
...
Memory buffers = 1 024
Reads from disk to cache = 212
Writes from cache to disk = 0
Fetches from cache = 833
In this case we have only 100 non-indexed reads (NIR) for T_TINY and 100 indexed reads (IR) for T_HUGE.
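To reproduce comparable counters (reads, writes, fetches) without a GUI tool, isql's SET STATS can be used; a minimal sketch, assuming the test tables from the setup script above:

set stats on;
delete from t_huge t where exists(select * from t_tiny s where t.id = s.id);
rollback; -- keep the test data so the comparison can be re-run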
Submitted by: @pavel-zotov
Is related to CORE-2142
Duplicates CORE-1086