Using write forwarding with an Aurora PostgreSQL global database

We are using an AWS Aurora global database (PostgreSQL) and run an API service in each of our three regions (EU, US, AU). The EU API service is connected to the primary cluster. The US and AU API services are connected to a local read replica for reads, and to the primary cluster only for writes. These cross-regional write connections cause a significant performance penalty.

AWS Aurora global databases offer a feature called “Write Forwarding,” which can mitigate the performance issue while preserving read-after-write consistency. However, write forwarding does not support sub-transactions (more details can be found here). Initially, we thought we could remove all of our atomic transaction blocks and replace them with manual rollbacks, but it turns out Django uses them internally as well. Calling update_or_create already triggers a sub-transaction: update_or_create wraps its work in an atomic transaction block and calls get_or_create, which opens another atomic transaction block. We must ensure a single “durable” transaction is used across the board.
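To make the problem concrete, here is a toy sketch (illustration only, not actual Django internals) of how nested atomic blocks map to SQL: the outermost block issues BEGIN/COMMIT, and every nested block issues a SAVEPOINT, i.e. exactly the sub-transaction that write forwarding rejects.

```python
# Illustration only: a toy context manager mimicking how nested
# transaction.atomic blocks translate to SQL statements.
class ToyAtomic:
    depth = 0
    statements = []

    def __enter__(self):
        if ToyAtomic.depth == 0:
            ToyAtomic.statements.append("BEGIN")
        else:
            # A nested block opens a sub-transaction via SAVEPOINT.
            ToyAtomic.statements.append(f"SAVEPOINT s{ToyAtomic.depth}")
        ToyAtomic.depth += 1
        return self

    def __exit__(self, *exc):
        ToyAtomic.depth -= 1
        if ToyAtomic.depth == 0:
            ToyAtomic.statements.append("COMMIT")
        else:
            ToyAtomic.statements.append(f"RELEASE SAVEPOINT s{ToyAtomic.depth}")

with ToyAtomic():      # update_or_create opens an atomic block ...
    with ToyAtomic():  # ... and calls get_or_create, which opens another
        pass

print(ToyAtomic.statements)
# -> ['BEGIN', 'SAVEPOINT s1', 'RELEASE SAVEPOINT s1', 'COMMIT']
```

The SAVEPOINT in the middle is what Aurora’s write forwarding refuses to execute.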

To address this, we could monkey patch our way out by adding the following code to our

from django.db import transaction

django_atomic_transaction = transaction.atomic

def our_atomic_transaction(using=None, savepoint=False, durable=False):
    # Same signature as Django's transaction.atomic, but savepoint now
    # defaults to False, so nested atomic blocks no longer create
    # savepoints (sub-transactions).
    return django_atomic_transaction(using=using, savepoint=savepoint, durable=durable)

transaction.atomic = our_atomic_transaction
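One caveat worth noting with this approach (a generic Python sketch, not Django-specific code): rebinding transaction.atomic only affects code that looks the attribute up through the module at call time. Any module that already ran `from django.db.transaction import atomic` keeps a reference to the original function, so the patch has to run before such imports happen.

```python
# Generic illustration of the import-binding caveat when monkey patching.
import types

mod = types.ModuleType("fake_transaction")
mod.atomic = lambda: "original"

early_import = mod.atomic           # like `from ... import atomic` done early
mod.atomic = lambda: "patched"      # monkey patch applied afterwards

print(mod.atomic())     # -> 'patched'  (module-level lookup sees the patch)
print(early_import())   # -> 'original' (the early import does not)
```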

Any ideas or suggestions?

Hello there!
I’m not sure how to help in this case, but have you considered deactivating transaction management?

Thank you, you might be onto something. I ran a quick test: besides turning off autocommit, it requires disabling atomic requests and a custom database engine that turns off savepoints.

Here are the changes I made:

# myproject/
DATABASES = {
    "default": {
        "ENGINE": "myproject.aurora.engine",
        "AUTOCOMMIT": False,
        "ATOMIC_REQUESTS": False,
    }
}

# myproject/aurora/engine/
from django.db.backends.postgresql import base, features

class DatabaseFeatures(features.DatabaseFeatures):
    # Tell Django this backend has no savepoint support, so it never
    # emits SAVEPOINT statements (which write forwarding rejects).
    uses_savepoints = False

class DatabaseWrapper(base.DatabaseWrapper):
    features_class = DatabaseFeatures
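For anyone wiring this up: Django resolves a custom ENGINE path by importing "<ENGINE>.base" and expecting a DatabaseWrapper class there (a simplified sketch of django.db.utils.load_backend; the demo below uses fake in-memory modules as stand-ins for the real engine package).

```python
# Simplified sketch of how Django loads a custom database ENGINE:
# it imports "<ENGINE>.base" and uses its DatabaseWrapper class, so
# "myproject.aurora.engine" implies the classes live in
# myproject/aurora/engine/base.py.
import sys
import types
from importlib import import_module

def load_backend(engine):
    # Simplified: real Django also maps a few built-in shorthand names.
    return import_module(engine + ".base")

# Demo with a fake in-memory package standing in for the real engine.
pkg = types.ModuleType("fakeengine")
base = types.ModuleType("fakeengine.base")

class DatabaseWrapper:
    pass

base.DatabaseWrapper = DatabaseWrapper
sys.modules["fakeengine"] = pkg
sys.modules["fakeengine.base"] = base

backend = load_backend("fakeengine")
print(backend.DatabaseWrapper is DatabaseWrapper)  # -> True
```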

I need to run further tests since this is quite an impactful change. I will let you know if this turns out to be the solution.


I hope you find the answers that you’re looking for.

Hi, we’ve encountered an integrity issue with write forwarding. I’ve tried updating the database config and the apg_write_forward.consistency_mode parameter, but the issue persists. Did you manage to resolve your issues and get write forwarding to work?

Here is one of the API errors:

IntegrityError at /api/v1/user/
null value in column "id" of relation "user" violates not-null constraint
DETAIL: Failing row contains (null, test_user, 2024-06-27 16:40:02.349801+00, 2024-06-27 16:40:02.349895+00).
CONTEXT: WriteForward command: INSERT INTO "user" ("username", "created_at", "updated_at") VALUES ('test_user', '2024-06-27T16:40:02.349801'::timestamp, '2024-06-27T16:40:02.349895'::timestamp) RETURNING "user"."id"