On-Prem, On Borrowed Time: Securing Offline Trials

Sachin Sankar

Introduction

When you’re building a SaaS product, enforcing a trial period is easy—just ping a licensing server, validate the subscription, and you’re good to go. But what happens when your software is deployed on-prem in a tightly controlled enterprise environment? And what if it can’t even talk to the internet, except for a handful of SOC 2-compliant servers?

That’s exactly the problem I ran into while working on a client project. The software was packaged as a Docker image with Python, meant to run inside corporate infrastructure with strict network restrictions. We needed to implement a time-based trial that would:

  1. Persist trial data even if the container was restarted or reinstalled.

  2. Prevent tampering (like resetting system time or wiping stored data).

  3. Not require a dedicated online licensing server.

At first, it seemed like there were no good options. Most traditional licensing solutions assume cloud connectivity or rely on manual license activation keys—neither of which was viable here. So, we had to get creative.

In this post, we’ll explore different approaches to implementing an offline time-limited trial in a Dockerized Python application, without relying on a dedicated licensing server. We’ll break down the challenges, potential pitfalls, and creative solutions that can make this work in a secure and reliable way. If you’ve ever had to deal with on-prem licensing and found yourself wondering, “Surely, I’m not the only one dealing with this?”—you’re in the right place.

Understanding the Constraints

Before we dive into solutions, let’s take a step back and map out what makes this problem so tricky. On the surface, a time-based trial sounds simple—just store a timestamp and check it later, right? Well, not quite. In an on-prem environment with limited connectivity, things get complicated fast.

Limited Connectivity: No Easy Cloud Licensing

The first and biggest challenge? No reliable internet access. The software could only communicate with SOC 2-compliant servers, meaning we couldn’t just rely on a cloud-based licensing system. No simple API calls to verify trial status. No easy revocation mechanisms. Whatever we built had to work fully offline while still being secure.

Running in Docker: Persistence Problems

The app itself was packaged as a Docker image, which introduced another headache:

  • Containers are ephemeral. If a user wipes the container and restarts it, we can’t rely on in-container storage for tracking the trial period.

  • Volumes could be manipulated. Even if we store trial data in a mounted volume, a savvy user could just delete or modify it.

  • Environment variables are a non-starter. Anything set at runtime can be changed just as easily.

Python-Based Stack: Need a Secure, Lightweight Solution

Since the app was built with Python, we needed a trial mechanism that was:

  • Lightweight: No heavyweight database dependencies just for tracking a trial.

  • Secure: It shouldn’t be easy to manipulate timestamps or reset the trial.

  • Cross-platform: Since the Docker image could run on different OS environments, we couldn’t rely on platform-specific tricks.

Security Concerns: Preventing Easy Workarounds

If users are motivated enough, they’ll try to game the system. Here are the key threats we had to account for:

  1. Clock rollback attacks – A user manually setting their system clock back to extend the trial.

  2. Trial resets via data deletion – Wiping stored trial information to start fresh.

  3. Container cloning exploits – Duplicating a container before the trial expires and restoring it later.

Each of these issues meant we needed a more resilient, multi-layered approach—something stronger than just saving a timestamp in a config file.

With these constraints in mind, let’s start exploring ways to design a secure, time-limited trial that actually works. Up next: How to store trial data securely in a Dockerized environment. 🚀

Designing a Secure Time-Limited Trial Without Cloud Licensing

Storing Trial Data Securely in a Dockerized Environment

Alright, now that we know the constraints, let’s start tackling the first problem: where do we store the trial data so it actually persists, but isn’t trivially wiped or manipulated?

A naïve approach would be to just drop a timestamp in a file somewhere inside the container. Bad idea. Containers are ephemeral—restarting or rebuilding them wipes any internal files. Even if we use a mounted volume, a user could just delete the file and reset their trial. So, what’s a better approach?

Option 1: Using Docker Volumes (With a Twist)

Docker volumes are persistent storage that survives container restarts, making them a tempting option. But they come with a glaring weakness: users with access to the host machine can just wipe or modify the volume.

So, how do we make this more secure?

  • Encrypt trial data before storing it → A deleted volume is a problem, but at least we prevent easy tampering.

  • Store multiple verification timestamps → Instead of a single "trial_start" value, we can track time progression across multiple checkpoints.

  • Use a hidden or non-obvious path → Not bulletproof, but it reduces the chance of casual tampering.

Pros: Works across platforms, easy to implement.
Cons: If a user deletes the volume, trial data is gone.
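
Here’s a minimal sketch of the encrypt-and-checkpoint idea above, assuming a volume mounted at /data and the cryptography package. The path is a placeholder, and the key handling is deliberately simplified (we’ll come back to where keys should live later).

import json
import time
from pathlib import Path

from cryptography.fernet import Fernet, InvalidToken

TRIAL_PATH = Path("/data/.trial_state")  # hypothetical path on a mounted volume

def save_trial_state(fernet: Fernet) -> None:
    # Record the trial start plus a rolling list of checkpoints, encrypted at rest.
    now = int(time.time())
    state = {"trial_start": now, "checkpoints": [now]}
    TRIAL_PATH.write_bytes(fernet.encrypt(json.dumps(state).encode()))

def load_trial_state(fernet: Fernet):
    if not TRIAL_PATH.exists():
        return None  # volume wiped -> handled as tampering elsewhere
    try:
        return json.loads(fernet.decrypt(TRIAL_PATH.read_bytes()))
    except InvalidToken:
        return None  # file was edited or encrypted with a different key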

Option 2: SQLite or an Encrypted Local Database

A lightweight SQLite database inside the container (or in a mounted volume) can help with data integrity. With an added layer of encryption (like SQLCipher), this prevents users from just opening and editing the trial data manually.

But again, this is still vulnerable to the "nuke the volume and restart the trial" exploit.

Pros: Harder to tamper with than raw files, supports encryption.
Cons: Database files can still be deleted.
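
As a rough sketch, here’s the checkpoint idea with the standard-library sqlite3 module (SQLCipher needs separate bindings, so the encryption layer is left out here). The database path is a placeholder.

import sqlite3
import time

DB_PATH = "/data/trial.db"  # hypothetical path on a mounted volume

def record_checkpoint() -> None:
    # Append a timestamp each run; gaps or regressions hint at tampering.
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS checkpoints (ts INTEGER NOT NULL)")
        conn.execute("INSERT INTO checkpoints (ts) VALUES (?)", (int(time.time()),))

def latest_checkpoint():
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS checkpoints (ts INTEGER NOT NULL)")
        row = conn.execute("SELECT MAX(ts) FROM checkpoints").fetchone()
        return row[0]  # None if nothing has been recorded yet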

Option 3: Storing Data Outside the Container (Host Machine Fingerprinting)

Instead of keeping trial data inside Docker, what if we write it to the host machine itself? Some ideas:

  • Write trial data to a system log (Linux/macOS) or the registry (Windows); see the syslog sketch after this list.

  • Fingerprint the machine (using CPU ID, MAC address, etc.) and tie the trial to it.

  • Store an encrypted license file in a hard-to-find location outside the container.

Now, even if a user nukes the container, the trial data still exists on the host.

Pros: Survives container resets, harder to bypass.
Cons: Requires system-level access, could introduce compatibility headaches.
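
For instance, on Linux the standard-library syslog module can drop a marker into the host’s logs. This assumes the container is given access to the host’s logging socket; by default it only sees its own /dev/log, if one exists at all.

import syslog
import time

def log_trial_marker() -> None:
    # Leaves a timestamped breadcrumb in syslog/journald that outlives the container.
    syslog.openlog(ident="myapp-trial", facility=syslog.LOG_USER)
    syslog.syslog(syslog.LOG_INFO, f"trial-marker {int(time.time())}")
    syslog.closelog()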

Option 4: Combining Multiple Approaches for a More Resilient Trial System

No single method is bulletproof, so the best approach is a hybrid strategy:

  • Store encrypted trial data in a Docker volume (easy persistence).

  • Fingerprint the host machine to detect resets.

  • Keep secondary timestamps in logs or system files (redundancy).

  • Use cryptographic integrity checks to detect tampering.

By layering these methods together, we make it significantly harder for users to cheat the system—without overcomplicating things.

Now that we have a secure(ish) way to store trial data, the next challenge is preventing tampering, especially clock rollbacks. That’s where things get even trickier…

Protecting Against Tampering (Because Users Will Try)

Alright, we’ve got a way to store trial data that isn’t wiped the moment a user restarts a container. But that’s just the first battle—because you know some users are going to try and mess with the system.

And let’s be real—most IT guys in enterprises are nerds just like us. If they get wind of a time-based trial, they’ll immediately start poking around. “Let’s check the file system... Maybe it’s in the registry... Oh wait, what if I just set the system clock back?” Next thing you know, they’re disassembling the Docker image for fun on a Friday night. (We’d probably do the same.)

So, let’s make sure we stay ahead of them.

The Classic Cheat: Rolling Back the System Clock

One of the easiest ways to bypass a time-based trial is to set the system clock back. This is especially dangerous in offline environments, where there’s no NTP (Network Time Protocol) server keeping things in sync. If our system just compares trial_start to current_time, rolling back the clock effectively rewinds the trial period.

So, how do we fight this?

Cross-Checking Multiple Timestamps

Instead of relying on a single timestamp, we can store multiple checkpoints:

  • Trial start time → The initial timestamp when the trial begins.

  • Last seen time → The most recent time the app was run.

  • Expected progression → A derived time expectation based on previous runs.

How this helps:

  • If last_seen_time is later than current_time, the user rolled back the clock → Trial is invalidated.

  • If current_time is more than X days ahead of last_seen_time, the user might have manipulated time forward and back → Also suspicious.
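
In code, that cross-check can be as simple as the sketch below, assuming the stored state carries trial_start and last_seen as epoch seconds. The forward-jump threshold is our own assumption, not a magic number.

import time

MAX_FORWARD_JUMP = 30 * 24 * 3600  # assumption: flag jumps larger than ~30 days

def check_clock(state: dict) -> str:
    now = int(time.time())
    if now < state["last_seen"]:
        return "rolled_back"      # clock is behind the last recorded run
    if now - state["last_seen"] > MAX_FORWARD_JUMP:
        return "suspicious"       # unusually large forward jump
    state["last_seen"] = now      # caller persists this back to encrypted storage
    return "ok"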

Anchoring to System Logs or Filesystem Timestamps

Even if a user rolls back the system clock, they can’t change all timestamps. Some clever ways to anchor time:

  • Check system logs → Most OS logs store timestamps in a way that’s hard to tamper with.

  • Compare against file modification times → A hidden “marker” file can help detect inconsistencies.

  • Look at Docker container creation time → This doesn’t change unless the container is rebuilt.
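
A minimal version of the marker-file idea looks like this; the path is a placeholder for any file the app controls.

import os
import time

MARKER = "/data/.time_anchor"  # hypothetical marker file

def check_filesystem_anchor() -> bool:
    if not os.path.exists(MARKER):
        with open(MARKER, "w") as f:
            f.write("anchor")   # first run: create the anchor
        return True
    # If the marker was last touched "in the future", the clock was rolled back.
    ok = os.stat(MARKER).st_mtime <= time.time()
    if ok:
        os.utime(MARKER)        # refresh the anchor only after a clean check
    return ok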

External Anchors (If Any Are Available)

If the software has any external connectivity (even in a limited form), it can cross-check time from:

  • A SOC 2-compliant server (if it can fetch timestamps).

  • TLS certificate validity periods → Even if a request fails, we can still read expiration dates.

  • Cached timestamps from previous connections → If the system ever connects to a trusted server, store the time for future reference.

Even in a fully offline system, these extra checks make it way harder to roll back the clock without detection.
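
For example, a plain TLS handshake with an approved host exposes the server certificate’s validity window, even if we never trust the response body. A sketch, where HOST is a placeholder for whichever SOC 2-approved endpoint happens to be reachable:

import socket
import ssl
from datetime import datetime

HOST = "approved.example.com"  # placeholder for an approved, reachable endpoint

def cert_time_bounds(host: str = HOST, port: int = 443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    fmt = "%b %d %H:%M:%S %Y %Z"  # e.g. "Mar 26 00:00:00 2025 GMT"
    return (datetime.strptime(cert["notBefore"], fmt),
            datetime.strptime(cert["notAfter"], fmt))

# If datetime.now() falls outside these bounds, the local clock is clearly wrong.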

The Other Cheat: Deleting or Resetting Trial Data

The second common attack: users wiping trial data to restart the countdown.

Fingerprinting the Machine

To prevent a fresh start just by deleting trial files, we can tie the trial to unique machine identifiers, such as:

  • CPU ID / Motherboard serial number

  • MAC address (not perfect, since it can be spoofed)

  • Disk UUIDs

💡
For Python-based apps, uuid.getnode() can retrieve the device's MAC address, while os.popen("dmidecode -s system-uuid").read().strip() can fetch the system UUID (dmidecode typically needs root and may not be installed in a slim container).

This means even if the user deletes trial files, the software can recognize it’s running on the same machine and reapply trial restrictions.
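
Putting that together, a simple fingerprint might hash both identifiers. Since dmidecode usually needs root and may be missing inside a minimal image, this sketch degrades gracefully rather than crashing.

import hashlib
import subprocess
import uuid

def machine_fingerprint() -> str:
    mac = format(uuid.getnode(), "012x")
    try:
        board = subprocess.run(
            ["dmidecode", "-s", "system-uuid"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        board = "unknown"  # fall back instead of breaking the app
    return hashlib.sha256(f"{mac}:{board}".encode()).hexdigest()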

Spreading Data Across Multiple Locations

If the trial data is only stored in one place (like a single file), it’s easy to delete. But what if we store redundant pieces in multiple places?

  • A hidden trial timestamp in a Docker volume

  • An encrypted entry in SQLite or a system log

  • A subtle marker inside application-specific data (e.g., a config file checksum)

Now, even if a user wipes one of these, the system can cross-check and restore the missing data.
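
A sketch of that self-healing redundancy, with placeholder paths standing in for the volume, log, and config locations above:

from pathlib import Path

LOCATIONS = [  # placeholder paths; spread across volume, system files, app data
    Path("/data/.trial_state"),
    Path("/var/lib/myapp/.state"),
    Path("/etc/myapp/.cache"),
]

def write_everywhere(blob: bytes) -> None:
    for path in LOCATIONS:
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(blob)

def read_and_heal():
    copies = [p.read_bytes() for p in LOCATIONS if p.exists()]
    if not copies:
        return None          # every copy wiped: treat as tampering, not a fresh trial
    blob = copies[0]         # could also majority-vote across copies here
    write_everywhere(blob)   # quietly restore any deleted copies
    return blob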

That should keep the enterprise IT nerds busy for a while.

Cryptographic Protection for Trial Validity

So far, we've focused on where to store trial data and how to detect tampering using timestamps and system logs. But let’s be real—if someone is determined enough, they’ll try to manually edit the stored data to reset the trial.

💡 Solution? Cryptography.
Instead of just saving timestamps as plain text, we encrypt, sign, and obfuscate them to make tampering a nightmare.

HMAC-Based Timestamp Validation

One way to ensure that the trial timestamp hasn’t been modified is by using an HMAC (Hash-based Message Authentication Code).

How It Works:

  • Instead of storing a raw timestamp like start_time = "2025-03-26", we hash it with a secret key.

  • Each time the software starts, it recalculates the hash and checks if it matches the stored value.

  • If someone edits the timestamp manually, the hash won’t match, and access is denied.

Example (Python Implementation):

import hmac
import hashlib
import time

SECRET_KEY = b'super_secret_key'  # don't hardcode this in a real build (see "Safeguarding Secrets" later)

def generate_hmac(timestamp: int) -> str:
    return hmac.new(SECRET_KEY, str(timestamp).encode(), hashlib.sha256).hexdigest()

# Storing a timestamp securely
start_time = int(time.time())  # Store the trial start time
hmac_value = generate_hmac(start_time)

# Later, we verify (hmac.compare_digest avoids timing side channels):
if hmac.compare_digest(generate_hmac(start_time), hmac_value):
    print("Timestamp is valid!")
else:
    print("Tampering detected!")

🔐 Why HMAC?
Unlike simple hashing (sha256(start_time)), HMAC requires a secret key, making it much harder to fake.

Asymmetric Encryption for Local Trial Keys

Alright, let’s say someone finds and deletes the timestamp file. What if we used a local trial key that’s cryptographically protected?

Solution: RSA or ECDSA Key Signing

Instead of just storing the trial start time, we can:

  1. Generate a trial license key.

  2. Sign it with a private key stored inside the app.

  3. Verify it using a public key.

How This Helps:

  • The trial key is unique for each machine.

  • Even if an attacker copies the key from another machine, it won’t work without the matching private key.

  • Since we never store the private key inside user-accessible files, they can’t generate a new valid trial key.

Example: RSA Signing & Verification

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate keys (should be done once, not on every run)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def sign_trial_data(data: bytes) -> bytes:
    return private_key.sign(
        data,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256()
    )

def verify_trial_data(signature: bytes, data: bytes) -> bool:
    try:
        public_key.verify(
            signature,
            data,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256()
        )
        return True
    except InvalidSignature:
        return False

# Signing trial start time
trial_start = b"2025-03-26"
signature = sign_trial_data(trial_start)

# Verification (later in the app)
if verify_trial_data(signature, trial_start):
    print("Trial data is valid!")
else:
    print("Tampering detected!")

🔐 RSA and ECDSA are commonly used in software activation systems
They allow verification without exposing the private key, making it harder to generate fake trial keys.

Obfuscating Trial Data to Prevent Reverse Engineering

If someone is really determined, they’ll try to reverse-engineer the application and figure out how it checks the trial.

Countermeasures:

  1. Encoding & Splitting Data:

    • Instead of storing trial_start = "2025-03-26", split and encode it:
    encoded = base64.b64encode(b"2025|03|26").decode()

    • Store each part in different system files or registry keys.

  2. Obfuscating Code Execution:

    • Use control flow obfuscation to make it harder to follow the logic.

    • Introduce dummy checks to throw off reverse engineers.

  3. Packing & Encrypting the Executable:

    • Use tools like PyArmor or UPX to make static analysis more difficult.

Wrapping Up

With HMAC validation, asymmetric encryption for trial keys, and obfuscation, we’ve massively increased the difficulty of bypassing the trial system. This means users can enjoy the trial period, but if they want to keep using the software, they’ll need to pay up.

Safeguarding Secrets: Protecting Keys & Salts from Tampering

So, we’ve locked down the trial data using cryptography, but there’s one glaring problem—where do we store the keys and salts? If an attacker can extract, modify, or replace them, all our fancy encryption becomes useless.

💡 Core Problem:
If your app can read a secret, so can a determined attacker. The goal isn’t perfect secrecy (because that’s nearly impossible in an on-prem setup), but making tampering so painful that they give up.

Environment Variables: The Illusion of Security

A common suggestion is to store secrets in environment variables:

export SECRET_KEY="super_secret_value"

And then retrieve it in Python like this:

import os
secret_key = os.getenv("SECRET_KEY")

Sounds safe, right? Well… not really.

⚠️ Why It's Not Enough:

  • Anyone with local access can dump env variables (env command on Linux, Get-ChildItem Env: in PowerShell).

  • Some applications log environment variables (especially if debug mode is on).

  • If the app runs with elevated privileges, a user might still find a way to inject their own env variables.

👉 Use env vars sparingly. They’re better than hardcoding secrets in the code, but they shouldn’t be your main line of defense.

Storing Keys in Hardware-Backed Secure Storage

If your target environment has TPM (Trusted Platform Module) or HSM (Hardware Security Module) support, use it.

TPM-Based Key Storage (Linux & Windows)

  • Linux: Use tpm2-tools to securely store and retrieve keys.

  • Windows: Leverage DPAPI or the Windows Credential Store.

Example: Protecting Keys with DPAPI (Windows)

import win32crypt  # provided by the pywin32 package (Windows-only)

secret = "super_secret_key".encode()
protected_secret = win32crypt.CryptProtectData(secret, None, None, None, None, 0)

# Later, to retrieve:
retrieved_secret = win32crypt.CryptUnprotectData(protected_secret, None, None, None, 0)[1]

💡 Why Hardware-Backed Storage?
Even if an attacker dumps memory, they won’t find plain-text secrets—only encrypted blobs tied to the machine.

Encrypting Secrets at Rest

If TPM/HSM isn’t an option, the next best thing is storing encrypted secrets in a protected location.

Approach:

  1. Generate a master key dynamically at runtime.

  2. Use it to encrypt the actual secrets before storing them.

  3. Never store the master key in plaintext.

Example using AES-GCM encryption:

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Generate a random master key (store it securely!)
master_key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(master_key)

# Encrypt a secret
nonce = os.urandom(12)  # Unique for each encryption
secret = b"MySuperSecret"
ciphertext = aesgcm.encrypt(nonce, secret, None)

# Later, decrypt it
decrypted = aesgcm.decrypt(nonce, ciphertext, None)

🔐 Why AES-GCM?

  • Authenticated encryption ensures that if someone modifies the data, decryption fails.

  • It’s fast and widely supported.

💡 Bonus Tip:
If you must store secrets in files, store them in root-owned, non-world-readable locations (chmod 600 on Linux).

Hiding Secrets in Plain Sight: Code Obfuscation

If all else fails, make secrets harder to extract by obfuscating access.

Techniques to Confuse Reverse Engineers:

  • Split the secret across multiple files/locations.

  • Use runtime-generated keys instead of hardcoding them.

  • Introduce decoy encryption keys (so attackers waste time on useless ones).

Example of split-secret storage:

import os

def get_secret():
    part1 = "abc123"                      # first half baked into the code
    part2 = os.getenv("SECRET_PART", "")  # second half injected at runtime
    return part1 + part2

🤖 Reverse engineers love static analysis.
If your secret isn’t fully visible in one place, they have a harder time extracting it.

Protecting secrets isn’t about making them impossible to find, but making the cost of extraction higher than the cost of buying a license.

Coming up next:
Now that we’ve secured trial data and secrets, let’s talk user experience—how do we make all of this work without annoying legit users?


Keeping It Smooth: Balancing Security with User Experience

So, we’ve put up walls of encryption, tamper-proofing, and secret management—great! But if our trial system is a pain to use, nobody (not even paying customers) will want to deal with it.

💡 Key Challenge:
How do we enforce security without making users feel like they’re trying to break into Fort Knox just to try the software?

Let’s explore how to avoid frustrating users while keeping our trial intact.

Friction vs. Frustration: Setting the Right Level

Users expect some security, but if the trial process feels like resetting an enterprise VPN password every 30 days, they’ll rage quit.

🔸 Good Friction: A simple, well-communicated activation step.
🔸 Bad Friction: Requiring users to manually enter a 64-character key every time they open the app.

👉 Rule of Thumb: If an IT nerd (like us) finds the activation annoying, normal users will hate it even more.

Seamless Activation for Legitimate Users

🔹 Pre-Generated Trial Licenses

  • Instead of forcing users to generate a trial key manually, ship a pre-generated one with an expiration baked in.

  • On first run, the app automatically loads the trial key—zero input needed.

  • Works well for offline trials where calling home isn't an option.
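
One way to do that is to sign the expiry date at packaging time and ship only the public key with the app. The sketch below uses Ed25519 from the cryptography package instead of the RSA flow shown earlier, purely for brevity; the dates and field names are illustrative.

import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Vendor side (done once, at packaging time):
vendor_key = ed25519.Ed25519PrivateKey.generate()
license_blob = json.dumps({"plan": "trial", "expires": "2025-04-30"}).encode()
signature = vendor_key.sign(license_blob)

# App side (ships with the signed blob and only the public key):
public_key = vendor_key.public_key()
try:
    public_key.verify(signature, license_blob)
    print("Trial license is genuine:", json.loads(license_blob))
except InvalidSignature:
    print("Invalid or tampered license file")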

🔹 One-Time Setup, Persistent Storage

  • Once the user activates the trial, store the activation state in a secure, tamper-resistant location.

  • This prevents them from needing to re-enter anything every time they launch the app.

import json
import os

TRIAL_FILE = "/etc/myapp/trial.json"

def store_trial_data(expiration_date):
    trial_info = {"expires": expiration_date}
    with open(TRIAL_FILE, "w") as f:
        json.dump(trial_info, f)

def load_trial_data():
    if os.path.exists(TRIAL_FILE):
        with open(TRIAL_FILE, "r") as f:
            return json.load(f)
    return None

💡 Why JSON?
It’s human-readable for debugging, and the structured payload is easy to wrap with integrity checks (e.g., signing the serialized blob, as sketched below).
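
Building on the HMAC idea from earlier, a signed variant of the same file might look like this. SECRET_KEY handling is simplified here; see the previous section on safeguarding secrets.

import hashlib
import hmac
import json

SECRET_KEY = b"super_secret_key"      # placeholder; don't ship it like this
TRIAL_FILE = "/etc/myapp/trial.json"  # same path as above

def store_trial_data_signed(expiration_date: str) -> None:
    payload = json.dumps({"expires": expiration_date}, sort_keys=True)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    with open(TRIAL_FILE, "w") as f:
        json.dump({"payload": payload, "sig": sig}, f)

def load_trial_data_signed():
    with open(TRIAL_FILE) as f:
        blob = json.load(f)
    expected = hmac.new(SECRET_KEY, blob["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["sig"]):
        return None                   # file was hand-edited: reject it
    return json.loads(blob["payload"])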

Handling Expired Trials Gracefully

💀 Worst Case:
User opens the app, sees "Your trial has expired", and immediately uninstalls.

🎯 Better Approach:
Instead of just blocking access, consider:

  1. Soft Locking: Allow limited functionality (e.g., "view mode" instead of full lockout).

  2. Grace Periods: Give users a few extra days to upgrade instead of cutting them off instantly.

  3. Clear Upgrade Paths: Provide one-click options to extend the trial or purchase a full license.

Example of a grace period check:

from datetime import datetime, timedelta

def check_trial_status(expiration_date):
    today = datetime.now().date()
    if today > expiration_date:
        grace_end = expiration_date + timedelta(days=7)  # 7-day grace period
        if today > grace_end:
            return "Trial expired. Please upgrade."
        return "Trial expired, but you have a grace period until " + str(grace_end)
    return "Trial active!"

🔔 Pro Tip:
Instead of scaring users with "TRIAL OVER. PAY NOW.", show "Your trial has ended, but you have a few days left to decide!"—this improves conversion rates.

Avoiding Accidental Lockouts

Even if we do everything right, there are still edge cases that can annoy users:

  • System Clock Tampering: If a user changes the system date, does the trial break?

  • Docker Container Resets: If the app is inside a container, does the trial reset if they restart it?

  • Corrupted Storage: What happens if the trial file is deleted or unreadable?

Solutions to Prevent Lockouts:

  • Time Checks: Record the system’s first detected time (ideally signed so it can’t be silently edited) and compare it at startup.

  • Container Awareness: Use Docker’s hostname + mount volume checks to detect resets.

  • Backup Mechanism: Keep redundant storage locations (e.g., system registry + local file) in case of corruption.

import os
import time

FIRST_RUN_TIMESTAMP = "/etc/myapp/first_run"

def detect_time_tampering():
    if not os.path.exists(FIRST_RUN_TIMESTAMP):
        with open(FIRST_RUN_TIMESTAMP, "w") as f:
            f.write(str(int(time.time())))  # Store the first run timestamp

    with open(FIRST_RUN_TIMESTAMP, "r") as f:
        stored_time = int(f.read())

    if stored_time > int(time.time()):
        return "Warning: System clock tampering detected!"

    return "Time check passed."

⚠️ Why it matters:
If the trial is time-based, users might try rolling back the system clock to extend it forever. This check helps detect that.

Final Thoughts

Building a tamper-resistant, time-based trial for offline software is all about balance—strong security, smooth user experience, and just enough friction to keep things fair. While no system is unbreakable, the goal is to make cracking it more trouble than it's worth.

💡 Key Takeaway: If users spend more time trying to bypass the trial than just buying the software, you've won.

Now, what happens when someone actively tries to crack it? That’s a problem for another day.
