Big Delete? Use Insert instead. #JoelKallmanDay

The client I'm contracting for has a transform schema in which data from multiple governmental sources is transformed into one data model. The result is that any single table contains data that originated from multiple sources. Therefore each record has a source_type_id to indicate where the data came from.
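To make the examples below a bit more concrete, picture a (hypothetical, heavily simplified) table in that schema; the real tables are a lot wider, but the source_type_id column is the part that matters here:

create table transform.some_source_tb (
   id             number         primary key,
   source_type_id number         not null,      -- which source the record came from
   payload        varchar2(200),                -- stand-in for the real attribute columns
   modified_by    varchar2(30),                 -- maintained by a trigger (see further down)
   modified_on    date                          -- maintained by a trigger (see further down)
);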

Recently the client opted to no longer fetch and process all those sources themselves, but to buy the same data from a commercial vendor that does all the cleansing and aligning among the official sources for us (and for their other clients).

The result is that the tables in the transform schema will have to keep only the data from the new source (it is already in there, so that its quality could be checked against the other sources), while the data from all those other sources needs to be removed.

delete from transform.some_source_tb
where source_type_id not in (20/*new commercial source*/,
                             1/*private data*/,
                             9/*test data*/);

That will do the trick, but since about 50% of the rows in those (pretty large!) tables will be removed, it is not very efficient. It will also leave a lot of allocated storage space wasted (and the IT crowd is already complaining about how much we use...).

Instead, we came up with a much more complex way of working, but with a much better result: the original table is truncated (dropping the old storage) and only the data that needs to be kept gets written back into it.

The scenario in short:

  • Create a new table with the same layout as the original one

  • Fill the new table with only the data you want to keep

  • Truncate the original table

  • Add the data from the temporary table back to the original one

  • Drop the temporary table

In reality, there are some extra steps in between those to make sure things flow efficiently (read: fast) and data remains correct.

Create a table for temporary use

One would assume that the first two steps can be combined into one single create table as select (CTAS). The reality is that we often run into a shortage of PGA while attempting this ...
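For the record, the single-statement CTAS one would assume to write (and which ran into those PGA limits for us) would look something like this:

create table temp_tb as
select * from some_source_tb
where source_type_id in (20/*new commercial source*/,
                         1/*private data*/,
                         9/*test data*/);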

But a CTAS that does not select anything does work:

create table temp_tb as select * from some_source_tb where 1=2;

Fill it with the needed data

Instead of deleting data that can be thrown out, this step only copies (i.e. selects) the data that must remain. This is how a delete is turned into the first of a pair of inserts.

The benefit of using CTAS would be that it performs a direct path insert to write the data. To achieve the same performance we use the /*+ append*/ hint.

insert /*+ append*/ into temp_tb
select * from some_source_tb
where source_type_id in (20/*new commercial source*/,
                         1/*private data*/,
                         9/*test data*/);
commit;

Prepare for downtime

As stated above, we'll truncate the original table. This implies that for some time no data will be available to end-users: downtime. To make this time as short as possible, prepare some things in advance. To make the insertion quick (and correct) we'll

  • disable constraints
    You're inserting data that came out of the table, so it will still comply with ordinary constraints. (Only table-level constraints might get violated by a delete like this.)

  • disable indexes
    It's a big insert; it's better (quicker) to rebuild the indexes after it has completed.

  • disable triggers
    Not only to avoid spending time running the trigger code, but also to keep the data intact: we have the classic last-modified-by and last-modified-on fields, and triggers to maintain them. It should be clear that since we are mimicking a delete, the last-modification (meta)data should remain untouched.

  • Allocate space: an extent large enough for all kept data
    This prevents having to extend the table over and over again while blocks of records are being written.

Disable constraints

Run

select constraint_name, constraint_type, status, validated,
       search_condition_vc, index_name,
       'alter table some_source_tb disable constraint '||constraint_name||';' disable_stmt,
       'alter table some_source_tb enable constraint '||constraint_name||';' enable_stmt
from user_constraints
where table_name='SOME_SOURCE_TB'
order by constraint_type;

to list all constraints on the table, along with generated statements to disable and later re-enable them.
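If you prefer to have those statements ready as runnable scripts before the downtime window opens, you can spool them from SQL*Plus (the file name is just a suggestion); the same trick works for the index and trigger statements below:

set heading off feedback off pagesize 0 linesize 200 trimspool on
spool disable_constraints.sql
select 'alter table some_source_tb disable constraint '||constraint_name||';'
from user_constraints
where table_name='SOME_SOURCE_TB';
spool off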

Disable indexes

Run

select index_name, index_type, status,
       domidx_status, domidx_opstatus, funcidx_status,
       'alter index '||index_name||' unusable;' disable_stmt,
       'alter index '||index_name||' rebuild;' enable_stmt
from user_indexes
where table_name='SOME_SOURCE_TB'
order by index_type;

to list all indexes on the table, along with generated statements to disable and later re-enable them.

Disable triggers

Run

select trigger_name, description, status, trigger_body,
       'alter trigger '||trigger_name||' disable;' disable_stmt,
       'alter trigger '||trigger_name||' enable;' enable_stmt
from user_triggers
where table_name='SOME_SOURCE_TB';

to list all triggers on the table, along with generated statements to disable and later re-enable them.

Allocate Space

Since the temp_tb now contains all data that will eventually end up back in some_source_tb, we can use the size of temp_tb to pre-allocate space for some_source_tb right before the inserting starts.

select segment_name, segment_type, dbms_xplan.format_size(bytes) tab_size
from user_segments
where segment_name='TEMP_TB'
  and segment_type='TABLE';

will return a tab_size like 123M or 234G. Round it up a bit and note that value down. It will later be used to set the next_extent storage attribute of some_source_tb. But once this operation is finished, you should set it back to its original value, so make sure to also note down the orig_next_extent before changing it:

select table_name, tablespace_name
     , dbms_xplan.format_size(initial_extent) initial_extent
     , dbms_xplan.format_size(next_extent) orig_next_extent
from user_tables where table_name='SOME_SOURCE_TB';

Truncate

After having prepared the statements above, it is time to get started. Once the table is truncated, there is no way back. To reduce downtime, the following steps should be run in one uninterrupted go.

truncate table some_source_tb DROP STORAGE;

The drop storage makes sure that all the space allocated for the big table (of which only a part will remain) is released. The table is now back to just the initial_extent seen above. The end-users have lost their data!

Next, set the next_extent. This way, once the initial_extent amount of space is filled, there is one resize operation that will add enough space for the entire dataset.

alter table some_source_tb storage (next <TAB_SIZE>);

Disable triggers, indexes and constraints

Right before inserting, it is time to disable all the functionality that could slow down the inserting (and/or corrupt the data).

alter table some_source_tb disable constraint some_constraint;
alter index some_index unusable;
alter trigger some_trigger disable;

Insert

insert /*+ append parallel(4) */ into some_source_tb
select * from temp_tb;
commit;

The append hint makes sure a direct path load is performed. Depending on your CPU and storage solution, the parallel hint might speed things up a bit further. (Adapt the value (4) to your situation, or leave it out altogether and Oracle will choose the degree of parallelism for you.)
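One thing to keep in mind if you go the parallel route: the insert itself only runs as parallel DML when that is enabled in the session (the select part can run in parallel regardless), so enable it before running the statement above:

-- without this, the write itself stays serial even with the parallel hint
alter session enable parallel dml;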

If you are working on an instance that is not in force logging mode, you might also want to make the load nologging (note that nologging is a table attribute, not a hint). But make sure to ask your DBA about that.
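For completeness, a minimal sketch of what that could look like (again: only with your DBA's blessing, and keep in mind that a nologging direct-path load cannot be recovered from the redo logs):

alter table some_source_tb nologging;   -- before the insert /*+ append */ above

-- ... run the insert and commit ...

alter table some_source_tb logging;     -- switch regular logging back on afterwards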

Wrap up

After the commit above, the data is back, but the table cannot be used yet: the triggers and constraints need to be re-enabled, and to make things perform as usual again, the indexes need to be rebuilt and statistics gathered. Oh, and don't forget to reset the next_extent!

alter table some_source_tb storage (next <ORIG_NEXT_EXTENT>);

alter table some_source_tb enable constraint some_constraint;
alter trigger some_trigger enable;
alter index some_index rebuild;

begin
   dbms_stats.gather_table_stats (
      ownname    => user,
      tabname    => 'SOME_SOURCE_TB',
      method_opt => 'for all indexed columns',
      cascade    => TRUE );
end;
/

After this, the downtime is over. End users have their data back (well, only the new source). The only thing left to do is to drop the temp_tb:

drop table temp_tb purge;

Hey! What about partition exchange?

Yes, instead of doing an insert ... select, swapping partitions between the two tables would be a much quicker operation, and there would be no need to change the extent size etc.

To be able to use alter table tab1 exchange partition p1 with table tab2; tab1 needs to have at least one partition while tab2 must not be partitioned.

If some_source_tb is partitioned, you could use exchange partition as shown above, but you'll need one temp_tb per partition: the selection of the data to keep becomes a little bit more complex. To create the various temp_tb tables, you can use create table temp_tb1 for exchange with table some_source_tb;.

Alternatively, if some_source_tb is not partitioned, then you'll need to create temp_tb as a partitioned table.

create table temp_tb
partition by range (source_type_id) (
   partition p_empty values less than (0),
   partition p_keep values less than (100)
)
as
select * from some_source_tb
where 1=2;

and the swap becomes

alter table temp_tb exchange partition p_keep with table some_source_tb;

Exchange partition has features to take along indexes as well. You might want to use those features to save downtime: you can build the indexes on temp_tb long before the some_source_tb table is truncated.
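Based on the documented exchange syntax, taking the indexes along would look something like this (the indexes on some_source_tb then need matching local indexes on the temp_tb partition):

alter table temp_tb exchange partition p_keep with table some_source_tb
   including indexes without validation;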

But I didn't try any of this, because there are no partitions in the DB I generally work with. The reason for that lies in a small but important caveat of Oracle Spatial: the find-nearest-neighbour function (SDO_NN(), to find, for example, the 5 restaurants closest to your current location) will return 5 objects from each partition of the table you are searching in. (see https://docs.oracle.com/en/database/oracle/oracle-database/23/spatl/sdo_nn.html)
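To illustrate that caveat: a typical nearest-neighbour query (table and column names made up) looks like the one below. Run it against a table with, say, 4 partitions and sdo_num_res=5 can hand you up to 20 rows instead of 5.

select r.name
from restaurants r
where sdo_nn(r.geom, :my_location, 'sdo_num_res=5') = 'TRUE';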

