Hands-On: Integrating IBM Storage Ceph with PoINT Archival Gateway


Installing PoINT Archival Gateway on RHEL 9.3
PoINT Archival Gateway (PAG) can be installed on several servers (multi-node installation) in the Enterprise Edition or on one server in the Compact Edition. The following section describes the deployment of the Compact Edition.
Installing the PAG Compact Edition on RHEL 9.3 begins by transferring the installation tarball to your server and extracting its contents. The next step is to install the required .NET runtimes and configure systemd services so PAG can run in the background.
Here is an example:
# scp PagCompactInstall-4.1.228.tar.gz root@linux1:/root/PAG/
# ssh root@linux1
# cd /root/PAG/
# tar -zxvf PagCompactInstall-4.1.228.tar.gz
# tar -zxvf PAG-CGN-FULL-4.1.228.tar.gz -C /
# tar -zxvf PAG-GUI-FULL-4.1.228.tar.gz -C /
# dnf install dotnet-runtime-8.0 aspnetcore-runtime-8.0 -y
# cp -pr /opt/PoINT/PAG/CGN/PagCgnSvc.service /etc/systemd/system
# cp -pr /opt/PoINT/PAG/CGN/pag-cgn.conf /etc/opt/PoINT/PAG/CGN/pag-cgn.conf
# cp -pr /opt/PoINT/PAG/GUI/PagGuiSvc.service /etc/systemd/system
# cp -pr /opt/PoINT/PAG/GUI/pag-gui.conf /etc/opt/PoINT/PAG/GUI/pag-gui.conf
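Because the unit files were copied into /etc/systemd/system by hand, reload systemd so it picks them up before enabling the services:
# systemctl daemon-reload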
After installing the files and dependencies, update the PAG configuration files to reflect the correct IP addresses, ports, and license key. The primary changes are made in /etc/opt/PoINT/PAG/CGN/pag-cgn.conf for the S3 REST API and in /etc/opt/PoINT/PAG/GUI/pag-gui.conf for the administrative GUI. An example edit might look like this:
# vi /etc/opt/PoINT/PAG/CGN/pag-cgn.conf
[Administration Address]
CGN-GUI-MY-IP=10.251.0.35
[S3 REST API Addresses]
CGN-HTTP-S3-FQDN=linux1.cephlabs.com
CGN-HTTP-S3-IP=10.251.0.35
CGN-HTTP-S3-PORT-NOSSL=4080
CGN-HTTP-S3-PORT-SSL=4443
CGN-HTTP-S3-SSL-CERT-NAME=FILE:PAG.pfx
CGN-HTTP-S3-SSL-CERT-PWD=
[License]
CGN-Configuration-Key=QWYHM-W1787-5SD3X
Likewise, editing the GUI configuration file might involve similar IP updates:
# vi /etc/opt/PoINT/PAG/GUI/pag-gui.conf
[Administration Address]
GUI-DB-IP=10.251.0.35
GUI-DB-PORT=4000
Once the configurations are in place, the services can be enabled and started:
# systemctl enable --now PagCgnSvc
# systemctl enable --now PagGuiSvc
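Before opening the GUI, it is worth verifying that both services came up cleanly and are listening; the ports below are the S3 REST API ports taken from the pag-cgn.conf sample above:
# systemctl status PagCgnSvc PagGuiSvc
# ss -tlnp | grep -E '4080|4443'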
Once both services are running, you can access the PAG GUI over HTTPS on the configured IP address and port. Log in with the default admin credentials, enter your license key, and activate the software through the “System Management” → “Information” section of the PAG GUI. After licensing, create a Storage Partition and an Object Repository in the PAG interface to prepare the backend for storing objects on tape.
Under the menu command “Storage Management” → “Storage Partitions” you get an overview of all created Storage Partitions.
To create a new Object Repository (bucket), click “Create Object Repository” and fill out the dialog.
Under the menu command “Storage Management” → “Object Repositories” you get an overview of all created Object Repositories (buckets).
Setting up a user with HMAC credentials will allow Ceph to authenticate against PAG’s S3 endpoint.
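Before pointing Ceph at PAG, it is worth testing the PAG S3 endpoint directly. A minimal sketch, assuming the HMAC credentials created above are stored in an AWS CLI profile named points3 (the same profile used for validation later in this article):
# aws configure --profile points3 set aws_access_key_id 9FD33A27642C45480260
# aws configure --profile points3 set aws_secret_access_key "YvFLFqQD+fZF+2gwVD4hbbgYzNoo4QeUiprhh0Tv"
# aws --profile points3 --endpoint http://linux1.cephlabs.com:4080 s3 ls
If the endpoint is healthy, the listing should include the Object Repository created in the PAG GUI.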
Integrating PAG as a Storage Class within Ceph
Integrating PAG as a Storage Class within Ceph RGW involves configuring a cloud-tier placement for tape using the standard Ceph CLI. Adding a new point-tape storage class to the default placement looks like this:
# radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=point-tape --tier-type=cloud-s3
# radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class point-tape \
--tier-config=endpoint=http://linux1.cephlabs.com:4080,access_key=9FD33A27642C45480260,secret="YvFLFqQD+fZF+2gwVD4hbbgYzNoo4QeUiprhh0Tv",target_path="cephs3tape",multipart_sync_threshold=44432,multipart_min_part_size=44432,retain_head_object=true,region=default,allow_read_through=true
See the Ceph documentation on cloud transition for a complete description of all the available tier configuration parameters.
We can list our new zonegroup placement configuration with the following command:
# radosgw-admin zonegroup placement list
NOTE: If you have not done any previous multisite configuration, a default zone and zonegroup are created for you, and changes to the zone/zonegroup will not take effect until the Ceph Object Gateways are restarted. If you have created a realm for multisite, the zone/zonegroup changes take effect once they are committed with radosgw-admin period update --commit.
# ceph orch restart rgw.default
Bucket Creation and Lifecycle Policy
Next comes the creation of a bucket and the assignment of a lifecycle policy. This policy will automatically transition objects from the STANDARD tier to the point-tape tier after a specified number of days. We first create a bucket called dataset:
# aws --profile tiering --endpoint https://s3.cephlabs.com s3 mb s3://dataset --region default
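The tiering profile used throughout these examples is assumed to hold the S3 keys of a Ceph RGW user, with the RGW endpoint set as the profile's default endpoint_url (supported in recent AWS CLI v2 releases); a minimal sketch with placeholder keys:
# aws configure --profile tiering set aws_access_key_id <RGW_ACCESS_KEY>
# aws configure --profile tiering set aws_secret_access_key <RGW_SECRET_KEY>
# aws configure --profile tiering set endpoint_url https://s3.cephlabs.com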
The contents of point-tape-lc.json might resemble the following:
# cat point-tape-lc.json
{
    "Rules": [
        {
            "ID": "Testing LC. move to tape after 1 day",
            "Prefix": "",
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 1,
                    "StorageClass": "point-tape"
                }
            ]
        }
    ]
}
To apply the lifecycle configuration to the dataset bucket, use the AWS CLI:
# aws --profile tiering s3api put-bucket-lifecycle-configuration --lifecycle-configuration file://point-tape-lc.json --bucket dataset
# aws --profile tiering s3api get-bucket-lifecycle-configuration --bucket dataset
{
    "Rules": [
        {
            "ID": "Testing LC. move to tape after 1 day",
            "Prefix": "",
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 1,
                    "StorageClass": "point-tape"
                }
            ]
        }
    ]
}
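By default, lifecycle transitions are processed on a daily schedule, so in a lab you may not want to wait a full day. The rgw_lc_debug_interval option (a debug setting, not for production) treats each lifecycle "day" as the given number of seconds, and radosgw-admin lc process triggers a lifecycle run immediately:
# ceph config set client.rgw rgw_lc_debug_interval 60
# ceph orch restart rgw.default
# radosgw-admin lc process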
Testing the Integrated Setup
Uploading a file to the bucket and confirming its presence:
# aws --profile tiering s3 cp 10mb_file s3://dataset/
upload: ./10mb_file to s3://dataset/10mb_file
# aws --profile tiering s3api list-objects-v2 --bucket dataset
{
    "Contents": [
        {
            "Key": "10mb_file",
            "LastModified": "2025-03-24T15:40:55.879000+00:00",
            "ETag": "\"75821af1e9df6bbc5e8816f5b2065899-2\"",
            "Size": 10000000,
            "StorageClass": "STANDARD"
        }
    ]
}
The Ceph lifecycle daemon runs at scheduled intervals. After it completes, you can check whether objects have transitioned: the size of the object in the Ceph bucket will now be 0, and its StorageClass becomes point-tape:
# radosgw-admin lc list
# aws --profile tiering s3api list-objects-v2 --bucket dataset
{
    "Contents": [
        {
            "Key": "10mb_file",
            "LastModified": "2025-03-24T15:43:02.891000+00:00",
            "ETag": "\"75821af1e9df6bbc5e8816f5b2065899-2\"",
            "Size": 0,
            "StorageClass": "point-tape"
        }
    ]
}
In the output, StorageClass changes to point-tape for objects migrated to the PAG tier. You can validate the actual data in the PAG backend by querying the bucket path in PAG via its S3 REST API:
# aws --profile points3 --endpoint http://linux1.cephlabs.com:4080 s3api head-object --bucket cephs3tape --key dataset/10mb_file
{
    "AcceptRanges": "bytes",
    "LastModified": "2025-03-24T15:42:23+00:00",
    "ContentLength": 10000000,
    "ETag": "\"e46b7c402bc788a8ea9c4fd02268b744-2\"",
    "ContentType": "application/octet-stream",
    "Metadata": {
        "Rgwx-Source": "rgw",
        "Rgwx-Source-Etag": "75821af1e9df6bbc5e8816f5b2065899-2",
        "Rgwx-Source-Key": "10mb_file",
        "Rgwx-Source-Mtime": "1742830855.879976223",
        "Rgwx-Versioned-Epoch": "0"
    }
}
Object Retrieval Workflow
A restore request can be made with the restore-object API call. First, a temporary restore of the 10mb_file object:
# aws --profile tiering s3api restore-object --bucket dataset --key 10mb_file --restore-request Days=3
You can later confirm the restored object is accessible and listed in Ceph:
# aws --profile tiering s3 ls s3://dataset
2025-03-24 11:43:02 10000000 10mb_file
# aws --profile tiering s3api head-object --bucket dataset --key 10mb_file
{
    "AcceptRanges": "bytes",
    "Restore": "ongoing-request=\"false\", expiry-date=\"Thu, 27 Mar 2025 15:45:25 GMT\"",
    "LastModified": "2025-03-24T15:43:02+00:00",
    "ContentLength": 10000000,
    "ETag": "\"e46b7c402bc788a8ea9c4fd02268b744-2\"",
    "ContentType": "application/octet-stream",
    "Metadata": {},
    "StorageClass": "point-tape"
}
If we don’t specify Days in the restore request, it triggers a permanent restore. For example:
# aws --profile tiering s3 cp 20mb_file s3://dataset/
upload: ./20mb_file to s3://dataset/20mb_file
Once the LC policy transitions the object:
# aws --profile tiering s3api head-object --bucket dataset --key 20mb_file | grep StorageClass
"StorageClass": "point-tape"
Then, a permanent restore:
# aws --profile tiering s3api restore-object --bucket dataset --key 20mb_file --restore-request {}
After this, the storage class is back to STANDARD:
# aws --profile tiering s3api head-object --bucket dataset --key 20mb_file
{
    "AcceptRanges": "bytes",
    "LastModified": "2025-03-24T15:55:10+00:00",
    "ContentLength": 20000000,
    "ETag": "\"ab1a8bc4a7d7dd90231ae582e7dd35fa-4\"",
    "ContentType": "application/octet-stream",
    "Metadata": {},
    "StorageClass": "STANDARD"
}
Conclusion
By following the above approach, you can deploy PoINT Archival Gateway, integrate it into IBM Storage Ceph as a new tape storage tier, and validate the entire lifecycle workflow: from upload and automatic migration to restore and verification. This combined solution reduces storage costs, enhances data protection and compliance, and provides on-premises tape capabilities through a familiar S3 interface.