Choosing the Right Database: ACID Transactions in Postgres vs DynamoDB

DataWisoka

Transactional integrity is crucial for data consistency, reliability, and correctness in applications. When it comes to ACID (Atomicity, Consistency, Isolation, Durability) guarantees, the choice between Postgres and DynamoDB can be particularly challenging. Both Postgres, a relational database, and DynamoDB, a NoSQL database, offer robust solutions, but with vastly different approaches. Understanding these differences is essential for selecting the right database for your application.

This blog post delves into the intricacies of ACID transactional integrity in Postgres and DynamoDB, helping you understand the strengths and limitations of each system. By the end, you’ll have a clearer picture of which database aligns better with your needs, particularly when it comes to maintaining data integrity during transactions. Let’s explore the key differences, their consequences, and practical examples to guide you toward an informed choice.


Atomicity: Ensuring all-or-nothing transactions

Atomicity is the guarantee that each transaction is treated as a single unit, which either fully completes or fully fails. This principle ensures that a database remains in a consistent state, even in the event of errors, crashes, or power failures.

Postgres:

Postgres excels in providing robust transaction management, ensuring that all changes within a transaction are applied completely or not at all.

Example in Postgres:

Let's consider an example where we transfer money between two accounts. If any step in the transaction fails, the entire operation is rolled back.

import psycopg2

def transfer_funds(from_account, to_account, amount):
    conn = psycopg2.connect("dbname=test user=postgres password=secret")
    cur = conn.cursor()
    try:
        # psycopg2 begins a transaction implicitly on the first execute;
        # an explicit BEGIN would warn that a transaction is already in progress.
        # Deduct from one account
        cur.execute("UPDATE accounts SET balance = balance - %s WHERE account_id = %s;", (amount, from_account))
        # Add to another account
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE account_id = %s;", (amount, to_account))
        conn.commit()
    except Exception as e:
        conn.rollback()
        print("Transaction failed:", e)
    finally:
        cur.close()
        conn.close()

transfer_funds(1, 2, 100)

DynamoDB:

DynamoDB supports atomicity through its transact_write_items API, but a single transaction is limited to 100 items (raised from the original limit of 25) and 4 MB of aggregate data. Handling very large transactions would require extensive coordination across multiple nodes, leading to potential performance bottlenecks and increased complexity. By capping transaction size, DynamoDB can maintain high performance and availability.

Example in DynamoDB:

Transferring funds between two accounts in DynamoDB:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-west-2')

def transfer_funds(from_account, to_account, amount):
    try:
        response = dynamodb.transact_write_items(
            TransactItems=[
                {
                    'Update': {
                        'TableName': 'accounts',
                        'Key': {'account_id': {'N': str(from_account)}},
                        'UpdateExpression': 'SET balance = balance - :amount',
                        'ExpressionAttributeValues': {':amount': {'N': str(amount)}}
                    }
                },
                {
                    'Update': {
                        'TableName': 'accounts',
                        'Key': {'account_id': {'N': str(to_account)}},
                        'UpdateExpression': 'SET balance = balance + :amount',
                        'ExpressionAttributeValues': {':amount': {'N': str(amount)}}
                    }
                }
            ]
        )
        print("Transaction committed")
    except Exception as e:
        print("Transaction failed:", e)

transfer_funds(1, 2, 100)

Consequences of differences in Atomicity:

  • Complex Transactions: The limit of 100 items or 4 MB per transaction means that operations involving more items must be broken into multiple smaller transactions. This can lead to partial updates where some transactions succeed while others fail, requiring additional logic to handle retries and compensating rollbacks, thus increasing the complexity of the application code and risking data integrity.
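A common pattern for this is to split an oversized write set into transaction-sized chunks and retry each chunk independently. The sketch below is illustrative: chunk_items and run_with_retries are hypothetical helpers, and the txn callable stands in for a real dynamodb.transact_write_items call. Note that chunks committed before a failure stay committed, so compensation logic remains the application's responsibility.

```python
from typing import Callable, List

def chunk_items(items: List[dict], max_per_txn: int = 100) -> List[List[dict]]:
    """Split a large list of write items into transaction-sized chunks."""
    return [items[i:i + max_per_txn] for i in range(0, len(items), max_per_txn)]

def run_with_retries(txn: Callable[[List[dict]], None],
                     chunks: List[List[dict]],
                     max_attempts: int = 3) -> List[List[dict]]:
    """Run each chunk as its own transaction; return the chunks that still failed."""
    failed = []
    for chunk in chunks:
        for attempt in range(max_attempts):
            try:
                # In real code: dynamodb.transact_write_items(TransactItems=chunk)
                txn(chunk)
                break
            except Exception:
                if attempt == max_attempts - 1:
                    # Caller must compensate: earlier chunks are already committed.
                    failed.append(chunk)
    return failed

writes = [{'Put': {'TableName': 'accounts',
                   'Item': {'account_id': {'N': str(i)}}}} for i in range(250)]
print([len(c) for c in chunk_items(writes)])  # [100, 100, 50]
```

Each chunk is atomic on its own, but the batch as a whole is not; that is precisely the integrity gap the bullet above describes.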

Consistency: Maintaining a valid state

Consistency in the context of ACID transactions ensures that a database remains in a valid state before and after a transaction. This means that any transaction will bring the database from one valid state to another, adhering to all defined rules, constraints, and schemas.

Postgres:

Postgres ensures consistency by enforcing constraints, triggers, and rules within the database. When a transaction is committed, it transitions the database from one valid state to another, ensuring that all constraints (such as foreign keys, unique constraints, and checks) are satisfied.

Example in Postgres:

Assume we have a constraint that the balance must not be negative.

import psycopg2

def create_schema():
    conn = psycopg2.connect("dbname=test user=postgres password=secret")
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE accounts (
            account_id SERIAL PRIMARY KEY,
            balance NUMERIC CHECK (balance >= 0)
        );
    """)
    conn.commit()
    cur.close()
    conn.close()

create_schema()

DynamoDB:

DynamoDB is schema-less and does not enforce relational constraints itself; instead, consistency rules live in the application and are expressed through conditional operations. A condition expression specifies a check that must hold for a write to proceed; if it fails, the write is rejected (and, within a transaction, the whole transaction is cancelled), keeping the database in a valid state.

Example in DynamoDB:

Here the non-negative balance rule is enforced with a ConditionExpression, which DynamoDB evaluates atomically as part of the write.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client('dynamodb')

try:
    response = dynamodb.transact_write_items(
        TransactItems=[
            {
                'Update': {
                    'TableName': 'Accounts',
                    'Key': {'account_id': {'N': '1'}},
                    'UpdateExpression': 'SET balance = balance - :val',
                    'ConditionExpression': 'balance >= :val',
                    'ExpressionAttributeValues': {':val': {'N': '100'}}
                }
            },
            {
                'Update': {
                    'TableName': 'Accounts',
                    'Key': {'account_id': {'N': '2'}},
                    'UpdateExpression': 'SET balance = balance + :val',
                    'ExpressionAttributeValues': {':val': {'N': '100'}}
                }
            }
        ]
    )
except ClientError as e:
    print(f"Transaction failed: {e.response['Error']['Message']}")

Consequences of differences in Consistency:

The flexibility and schema-less nature of DynamoDB can lead to potential inconsistencies if not carefully managed. Applications with complex relationships and constraints might require additional logic to maintain consistency, which can increase complexity and potential for errors.
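As an illustration of such application-level logic, the hypothetical helper below builds a put_item request that emulates two rules Postgres would enforce declaratively: a non-negative balance check and a uniqueness guarantee via attribute_not_exists. The table and attribute names are assumptions, and the actual boto3 call is left as a comment.

```python
def build_account_put(table: str, account_id: int, balance: int) -> dict:
    """Build a put_item request that refuses to overwrite an existing account,
    emulating a uniqueness constraint at the application level."""
    if balance < 0:
        # Application-level equivalent of CHECK (balance >= 0)
        raise ValueError("balance must be non-negative")
    return {
        'TableName': table,
        'Item': {
            'account_id': {'N': str(account_id)},
            'balance': {'N': str(balance)},
        },
        # DynamoDB evaluates this atomically with the write, so the
        # "does this account already exist?" check cannot race.
        'ConditionExpression': 'attribute_not_exists(account_id)',
    }

# In real code: dynamodb.put_item(**build_account_put('accounts', 3, 500))
```

Every such rule is code the application must write, test, and keep in sync, which is the maintenance cost the paragraph above refers to.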

Isolation: Managing concurrent transactions

Isolation ensures that transactions are executed in such a way that they do not interfere with each other, maintaining database integrity and consistency. Different isolation levels provide varying degrees of concurrency control and can impact performance.

Postgres:

Postgres supports multiple isolation levels (Read Committed, Repeatable Read, Serializable), giving fine-grained control over how transactions interact. This flexibility allows for minimizing conflicts in high-concurrency environments.

Example in Postgres:

Using the Serializable isolation level:

import psycopg2

def transfer_funds_isolated(from_account, to_account, amount):
    conn = psycopg2.connect("dbname=test user=postgres password=secret")
    # Set the isolation level before the transaction's first statement;
    # psycopg2 then begins the serializable transaction implicitly.
    conn.set_session(isolation_level='SERIALIZABLE')
    cur = conn.cursor()
    try:

        cur.execute("UPDATE accounts SET balance = balance - %s WHERE account_id = %s;", (amount, from_account))
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE account_id = %s;", (amount, to_account))

        conn.commit()
    except Exception as e:
        conn.rollback()
        print("Transaction failed:", e)
    finally:
        cur.close()
        conn.close()

transfer_funds_isolated(1, 2, 100)

DynamoDB:

DynamoDB, designed for scalability and availability, provides serializable isolation between transactional operations through its transactional API, but it does not expose configurable isolation levels the way Postgres does.

Example in DynamoDB:

Using transactions to ensure isolation:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-west-2')

def transfer_funds_isolated(from_account, to_account, amount):
    try:
        response = dynamodb.transact_write_items(
            TransactItems=[
                {
                    'Update': {
                        'TableName': 'accounts',
                        'Key': {'account_id': {'N': str(from_account)}},
                        'UpdateExpression': 'SET balance = balance - :amount',
                        'ExpressionAttributeValues': {':amount': {'N': str(amount)}}
                    }
                },
                {
                    'Update': {
                        'TableName': 'accounts',
                        'Key': {'account_id': {'N': str(to_account)}},
                        'UpdateExpression': 'SET balance = balance + :amount',
                        'ExpressionAttributeValues': {':amount': {'N': str(amount)}}
                    }
                }
            ]
        )
        print("Transaction committed")
    except Exception as e:
        print("Transaction failed:", e)

transfer_funds_isolated(1, 2, 100)

Consequences of differences in Isolation:

Postgres’s configurable isolation levels allow tuning based on application requirements; Serializable gives the strongest guarantees but can abort conflicting transactions, which the application must retry. DynamoDB’s transactions always behave the same way: conflicting transactions are cancelled rather than queued, which may not be the most efficient choice for every workload.
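Under Serializable isolation, Postgres can abort a transaction with a serialization failure (SQLSTATE 40001, surfaced by psycopg2 as psycopg2.errors.SerializationFailure), and the application is expected to retry. Below is a minimal, generic retry sketch; retry_serializable is a hypothetical helper, and the retryable exception class is injected so the pattern can be shown without a live database.

```python
import time

def retry_serializable(txn, retryable=Exception, max_attempts=5, backoff=0.05):
    """Re-run a transaction function when it aborts with a retryable error.
    With psycopg2, `retryable` would be psycopg2.errors.SerializationFailure."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # give up and surface the failure to the caller
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Usage sketch, assuming the transfer function from the example above:
# retry_serializable(lambda: transfer_funds_isolated(1, 2, 100),
#                    retryable=psycopg2.errors.SerializationFailure)
```

The same wrapper shape applies to DynamoDB, whose TransactionCanceledException (reason TransactionConflict) plays the role of the serialization failure.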

Durability: Safeguarding committed data

Durability ensures that once a transaction has been committed, it remains in the system even in the event of a failure. Both Postgres and DynamoDB have their own mechanisms to achieve durability, but their approaches are quite different.

Postgres:

Postgres ensures durability primarily through Write-Ahead Logging (WAL) and checkpoints. While Postgres ensures durability on a single node, it can also be configured for replication to protect against storage failures. Replicas can be used for failover, ensuring that the data is not lost even if the primary storage system fails.
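As a rough sketch of that control, Postgres exposes settings that trade commit latency against durability guarantees. The values below are illustrative, and the standby names are placeholders.

```sql
-- Per-session: how long COMMIT waits before acknowledging.
SHOW synchronous_commit;                 -- 'on' by default: wait for local WAL flush
SET synchronous_commit = 'remote_apply'; -- also wait for synchronous standbys to apply

-- In postgresql.conf, replication-based durability is configured with e.g.:
--   synchronous_standby_names = 'ANY 1 (replica_a, replica_b)'
```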

DynamoDB:

Durability in DynamoDB is managed by automatically replicating data across multiple AWS Availability Zones, ensuring that once a transaction is committed, the data is durable.

While DynamoDB’s replication ensures high durability, it relies heavily on AWS infrastructure. This means that your data’s durability is tied to AWS’s operational stability and architecture. In contrast, Postgres provides more control over durability mechanisms, allowing customization and tuning to meet specific requirements.


Conclusion

Both Postgres and DynamoDB offer robust transactional integrity. DynamoDB trades transaction size and configurability for high scalability, availability, and performance. Postgres offers fine-grained control, making it suitable for applications that require complex transactions.
