Oracle to Google AlloyDB PostgreSQL - 1B rows

Wirekite

Google AlloyDB is a fully managed, cluster-based, PostgreSQL-compatible database engine that uses Google's distributed cloud storage as its storage layer. A more in-depth overview of AlloyDB can be found here.

For our benchmark, we used a single-node AlloyDB "cluster", with the node's compute instance sized as a Google N2-Standard-64 host. We used the same instance type, N2-Standard-64, on the Oracle extraction side.

After adding support for "local" (client-side) file loading to the Wirekite PostgreSQL data loader, a change needed to load any managed-cloud PostgreSQL target, we were able to load AlloyDB without further changes.
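To illustrate why that change was needed: a server-side COPY reads from the database host's filesystem, which managed services like AlloyDB don't expose, so the file must instead be streamed over the client connection. A minimal sketch of the distinction using psql (the file name is illustrative, and Wirekite's actual loading mechanism may differ):

```sql
-- Server-side COPY reads a file on the database host itself; managed
-- services like AlloyDB do not allow filesystem access, so this fails:
--   COPY firenibble FROM '/data/firenibble.csv' WITH (FORMAT csv);

-- Client-side \copy streams a local file over the client connection,
-- which works against any PostgreSQL-compatible target:
\copy firenibble FROM 'firenibble.csv' WITH (FORMAT csv)
```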

Benchmark Details

This is a pure “how fast can Wirekite load AlloyDB” benchmark. We used our 1 billion row “firenibble” table, extracted from Oracle and loaded to AlloyDB, as the basis for this benchmark.

Table

We used our standard firenibble table, with the Oracle table definition shown below.

The table has 16 columns: a numeric primary key, several numeric columns of various shapes and sizes, a few strings, and a few dates. This makes the table easy for most databases to absorb.

The table also has 1 billion rows.

CREATE TABLE app_db.firenibble (
    f0    number(38)    NOT NULL,
    f1    number(38),
    f2    number(38),
    f3    int,
    f4    float,
    f5    float,
    f6    float,
    f7    float,
    f8    varchar2(128),
    f9    varchar2(128),
    f10    date,
    f11    date,
    f12    date,
    f13    varchar2(128),
    f14    varchar2(128),
    f15    varchar2(128),
    PRIMARY KEY (f0)
);
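For reference, a straightforward PostgreSQL-side equivalent of this table might look like the following. The type mapping here is our assumption, not necessarily what Wirekite emits: Oracle number(38) to numeric(38), float to double precision, varchar2 to varchar, and Oracle's date (which carries a time component) to timestamp.

```sql
-- Assumed PostgreSQL equivalent of the Oracle firenibble table
CREATE TABLE firenibble (
    f0    numeric(38)      NOT NULL,
    f1    numeric(38),
    f2    numeric(38),
    f3    int,
    f4    double precision,
    f5    double precision,
    f6    double precision,
    f7    double precision,
    f8    varchar(128),
    f9    varchar(128),
    f10   timestamp,       -- Oracle date includes time of day
    f11   timestamp,
    f12   timestamp,
    f13   varchar(128),
    f14   varchar(128),
    f15   varchar(128),
    PRIMARY KEY (f0)
);
```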

And confirming the 1 billion rows:

SQL> select count(*) from app_db.firenibble;

  COUNT(*)
----------
1000000000

1 row selected.

SQL>

Oracle

Our Oracle environment is a GCP N2-Standard-64 host in the us-central1 region. The host OS is Red Hat Enterprise Linux 8, and the database is Oracle 19c.

Benchmark Run

For AlloyDB, we created a single AlloyDB node configured as an N2-Standard-64 host. The cluster and node are in the GCP us-central1 region, the same region as our Oracle host, to minimize network latency.

We ran with 64 extraction threads.

One observation: in earlier test runs we reached 100% CPU utilization, but against AlloyDB at 64 threads we reached only 75%. We believe we saturated the network between our Oracle compute instance and the AlloyDB node at about 48 threads.

And this was the result. It took 24 minutes 35 seconds to move 1 billion rows from Oracle to AlloyDB. The Oracle extract alone took 21 minutes 25 seconds, so the load to AlloyDB finished about 3 minutes 10 seconds behind the extract. The transfer rate works out to 677,966 rows per second.
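The rate falls out of the elapsed time directly: 24 minutes 35 seconds is 1,475 seconds, and 1,000,000,000 rows / 1,475 seconds is about 677,966 rows per second. As a quick check in SQL:

```sql
-- 24 min 35 sec = 24*60 + 35 = 1,475 seconds of elapsed time;
-- integer division gives the rounded-down rows-per-second figure
SELECT 1000000000 / (24*60 + 35) AS rows_per_second;  -- 677966
```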

Here are the confirmation queries run on AlloyDB:

wirekite=> select now();
             now
------------------------------
 2025-08-10 20:10:29.26418+00
(1 row)

wirekite=> select sum(mark::int) from wirekite_progress where mark::int <> 0;
    sum
------------
 1000000000
(1 row)

wirekite=> select count(*) from firenibble;
   count
------------
 1000000000
(1 row)