
Proactive Ops

October 16, 2025

Aurora User Management with Infrastructure as Code

A guide on using Terraform and the RDS Data API for PostgreSQL user management.


In my recent AWS RDS Data API deep dive post, I looked at different ways the API could be called. One item I omitted was how to use the Data API in Terraform. Today we’re going to look at how we can use the RDS Data API to provision database users.

PostgreSQL User Management

Let’s start with some basic queries to create a new user in our PostgreSQL database.

We don’t want our application to use the database administrator account, so let’s create a user for the application to use. We will let the application manage the table definitions, views, and other aspects of its single database. This user won’t be allowed to create or drop databases, so we create the database in our script.

CREATE DATABASE data_api_example;

-- Add our user
CREATE USER myapp WITH ENCRYPTED PASSWORD 'UseARandomStringHere';
-- Grant them access to the db
GRANT ALL PRIVILEGES ON DATABASE data_api_example TO myapp;
-- The Data API only supports the public schema, so we need to ensure access to it
GRANT ALL ON SCHEMA public TO myapp;

-- Workaround to allow us to perform the next steps. Inspired by https://stackoverflow.com/a/77339792
GRANT myapp TO dbadmin;
-- Grant access to any future tables or sequences
ALTER DEFAULT PRIVILEGES FOR ROLE myapp IN SCHEMA public
  GRANT ALL PRIVILEGES ON TABLES TO myapp;
ALTER DEFAULT PRIVILEGES FOR ROLE myapp IN SCHEMA public
  GRANT ALL PRIVILEGES ON SEQUENCES TO myapp;
-- Remove the access hack now we're done.
REVOKE myapp FROM dbadmin;

Many tutorials show more complex setups that create a new schema and then revoke the user’s access to the public schema. The RDS Data API doesn’t support schema selection; it always uses the public schema. That simplifies our script, but it is also why you should run one RDS instance per application or service.

Once you’ve implemented infrastructure as code, logging into database servers to run queries to configure a user is inefficient. This work can be done at setup time instead. In this article we will look at how to do it using Terraform.

Provisioning an Aurora Cluster

Before we can create our users we need to create an Aurora cluster. We will provision a minimal Aurora Serverless cluster. It lacks important configuration including CMKs for encryption, appropriate scaling, and backups. This is a starting point for your exploration, not something you should run in production. It is enough for you to see the relationship with the resources we create later.

locals {
  db_cluster_name = "data-api-example"
  database_name   = "data_api_example"
}

resource "aws_rds_cluster" "this" {
  cluster_identifier = local.db_cluster_name

  database_name = local.database_name

  deletion_protection = true

  engine         = "aurora-postgresql"
  engine_version = "17.4"
  engine_mode    = "provisioned"

  master_username             = "dbadmin"
  manage_master_user_password = true
  # master_user_secret_kms_key_id = var.kms_key_arn # You should use a CMK to encrypt the secret

  db_subnet_group_name = aws_db_subnet_group.this.name
  vpc_security_group_ids = [
    aws_security_group.db.id
  ]

  storage_encrypted = true
  # kms_key_id = var.kms_key_arn # You should use a CMK to encrypt the storage

  serverlessv2_scaling_configuration {
    min_capacity = 0
    max_capacity = 1
  }

  enable_http_endpoint = true # Enable Data API
}

resource "aws_rds_cluster_instance" "this" {
  identifier         = aws_rds_cluster.this.cluster_identifier
  cluster_identifier = aws_rds_cluster.this.id

  instance_class = "db.serverless"
  engine         = aws_rds_cluster.this.engine
  engine_version = aws_rds_cluster.this.engine_version
}

resource "aws_db_subnet_group" "this" {
  name       = local.db_cluster_name
  subnet_ids = [for s in data.aws_subnet.db : s.id]
}

terraform {
  required_version = ">= 1.11.0, < 2.0.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0, < 7.0"
    }
  }
}

Connecting to a Database from Terraform

All the common database engines have Terraform providers. These providers manage users, databases, schemas and more. The downside is they use a traditional database client. This means our Terraform runner needs direct access to the database. We’re back to poor network security and bastion services with free rein over the environment.
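For comparison, this is roughly what the traditional approach looks like with the community cyrilgdn/postgresql provider. It is a sketch, not a drop-in config, and the password variables are hypothetical: the point is that the provider block needs a host the Terraform runner can reach over the network, which is exactly the access we are trying to avoid granting.

```hcl
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
}

# The provider opens a regular client connection, so the Terraform
# runner needs direct network access to the cluster endpoint.
provider "postgresql" {
  host     = aws_rds_cluster.this.endpoint
  username = "dbadmin"
  password = var.db_admin_password # Hypothetical variable for illustration.
}

resource "postgresql_role" "myapp" {
  name     = "myapp"
  login    = true
  password = var.db_app_password # Hypothetical variable for illustration.
}
```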

I got excited about HashiCorp’s announcement of actions in Terraform. Right now, actions don’t support the RDS Data API. The response on my feature request suggests we will be waiting a while for that to change. I hope Paul Santus’ pull request for the data source gets merged soon. Getting basic Data API support into the provider is the first step. Giving both tickets a thumbs up will help convince HashiCorp they’re needed.

If we can’t use actions or the AWS provider, how can we manage our database in Terraform? Let me introduce you to the terraform_data resource. This is a built-in replacement for the null_resource.

Adding Postgres Users

Now that we have our database and admin user, let’s add a user for our application. First we create the credentials and store them in Secrets Manager. Next we run our SQL commands to provision the user. We can take the SQL from earlier and wrap it in a terraform_data resource. As you can see, we can reference the resources we created earlier, and we could also use variables and locals. This gives us a lot of flexibility.

locals {
  db_app_username = "myapp"

  exclude_characters = ":!?'\"[]@*\\&`$"
}

data "aws_caller_identity" "current" {}

resource "aws_secretsmanager_secret" "db_user" {
  name_prefix = "db-${local.db_cluster_name}-user"

  description = "Database user secret for ${local.db_cluster_name}"

  policy = data.aws_iam_policy_document.secret_db_user.json

  # kms_key_id = var.kms_key_arn # Another place to use a CMK.
}

ephemeral "aws_secretsmanager_random_password" "db_user" {
  password_length = 32

  # Exclude characters that mess up bash commands and URL encoding
  exclude_characters = local.exclude_characters
  include_space      = false
}

resource "aws_secretsmanager_secret_version" "db_user" {
  secret_id = aws_secretsmanager_secret.db_user.id

  secret_string_wo_version = 0 # Initial setup
  secret_string_wo = jsonencode({
    engine      = "postgres",
    host        = aws_rds_cluster.this.endpoint,
    username    = local.db_app_username,
    password    = ephemeral.aws_secretsmanager_random_password.db_user.random_password,
    dbname      = aws_rds_cluster.this.database_name,
    cluster_arn = aws_rds_cluster.this.arn, # This non-standard property is added for convenience

    dbClusterIdentifier = aws_rds_cluster.this.id,
  })

  lifecycle {
    ignore_changes = [
      secret_string,
    ]
  }
}

data "aws_iam_policy_document" "secret_db_user" {

  statement {
    actions = [
      "secretsmanager:*"
    ]

    resources = ["*"]

    principals {
      type = "AWS"

      identifiers = [
        # Scope this to a limited set of users, rather than delegating to IAM.
        "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
      ]
    }
  }

  /*
  # Permissions for app user role.
  statement {
    actions = [
      "secretsmanager:DescribeSecret",
      "secretsmanager:GetResourcePolicy",
      "secretsmanager:GetSecretValue",
      "secretsmanager:ListSecretVersionIds",
    ]

    resources = ["*"]

    principals {
      type = "AWS"

      identifiers = [
        aws_iam_role.my_app.arn
      ]
    }
  }
  */
}

locals {
  sql_statements = [
    # Add our user
    "CREATE USER ${local.db_app_username} WITH PASSWORD '${ephemeral.aws_secretsmanager_random_password.db_user.random_password}';",
    # Grant them access to the db
    "GRANT ALL PRIVILEGES ON DATABASE ${local.database_name} TO ${local.db_app_username};",
    # The Data API only supports the public schema, so we need to ensure access to it
    "GRANT ALL ON SCHEMA public TO ${local.db_app_username};",

    # Access hack
    "GRANT ${local.db_app_username} TO ${aws_rds_cluster.this.master_username};",
    # Grant access to any future tables or sequences
    "ALTER DEFAULT PRIVILEGES FOR ROLE ${local.db_app_username} IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO ${local.db_app_username};",
    "ALTER DEFAULT PRIVILEGES FOR ROLE ${local.db_app_username} IN SCHEMA public GRANT ALL PRIVILEGES ON SEQUENCES TO ${local.db_app_username};",
    # Remove the access hack now we're done.
    "REVOKE ${local.db_app_username} FROM ${aws_rds_cluster.this.master_username};"
  ]
}

resource "terraform_data" "db_user" {
  triggers_replace = {
    never = "never" # Ensures this is only run once.
  }

  provisioner "local-exec" {
    command = join("\n", [
      for s in local.sql_statements : <<-EOT
        aws rds-data execute-statement --resource-arn "$DB_ARN" --database "$DB_NAME" --secret-arn "$SECRET_ARN" --sql "${s}"
      EOT
    ])

    # We use environment variables to simplify the command.
    environment = {
      DB_ARN  = aws_rds_cluster.this.arn
      DB_NAME = aws_rds_cluster.this.database_name

      # Needs to be performed as the admin, not the app user we just created.
      SECRET_ARN = aws_rds_cluster.this.master_user_secret[0].secret_arn
    }

    interpreter = ["bash", "-c"]
  }

  depends_on = [
    aws_rds_cluster_instance.this,
    aws_secretsmanager_secret_version.db_user,
  ]
}

Thanks to Tamás Sallai for his post How to run SQL scripts against the RDS Data API with Terraform. It helped guide me the first time I needed to implement this.
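Terraform aside, the same statements can be issued from a one-off script using boto3’s rds-data client. This is a hedged sketch: the ARNs are placeholders and the execute_statement call is commented out, so nothing runs against a real cluster.

```python
# Sketch of issuing the provisioning SQL through boto3's rds-data client.
# The ARNs below are placeholders for your cluster and admin secret.

def execute_args(resource_arn: str, secret_arn: str, database: str, sql: str) -> dict:
    """Assemble the keyword arguments that rds-data execute_statement expects."""
    return {
        "resourceArn": resource_arn,
        "secretArn": secret_arn,
        "database": database,
        "sql": sql,
    }

statements = [
    "GRANT ALL PRIVILEGES ON DATABASE data_api_example TO myapp;",
    "GRANT ALL ON SCHEMA public TO myapp;",
]

for sql in statements:
    kwargs = execute_args(
        resource_arn="arn:aws:rds:us-east-1:123456789012:cluster:data-api-example",
        secret_arn="arn:aws:secretsmanager:us-east-1:123456789012:secret:dbadmin",
        database="data_api_example",
        sql=sql,
    )
    # import boto3
    # boto3.client("rds-data").execute_statement(**kwargs)
```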

Read Only Role for Ops

You don’t want Ops making uncontrolled changes to the database, but they do need read access for troubleshooting issues. Let’s create another user and grant them read-only access to the database. This allows the team to use the Query Editor in the console during incidents. The code looks similar to the earlier example, but the permissions we grant are different.

locals {
  db_ops_username = "myapp_ops"
}

resource "aws_secretsmanager_secret" "db_user_ops" {
  name_prefix = "db-${local.db_cluster_name}-ops"

  description = "Ops database user secret for ${local.db_cluster_name}"

  policy = data.aws_iam_policy_document.secret_db_user_ops.json

  # kms_key_id = var.kms_key_arn # Another place to use a CMK.
}

ephemeral "aws_secretsmanager_random_password" "db_user_ops" {
  password_length = 32

  # Exclude characters that mess up bash commands and URL encoding
  exclude_characters = local.exclude_characters
  include_space      = false
}

resource "aws_secretsmanager_secret_version" "db_user_ops" {
  secret_id = aws_secretsmanager_secret.db_user_ops.id

  secret_string_wo_version = 0 # Initial setup
  secret_string_wo = jsonencode({
    engine      = "postgres",
    host        = aws_rds_cluster.this.endpoint,
    username    = local.db_ops_username,
    password    = ephemeral.aws_secretsmanager_random_password.db_user_ops.random_password,
    dbname      = aws_rds_cluster.this.database_name,
    cluster_arn = aws_rds_cluster.this.arn, # This non-standard property is added for convenience

    dbClusterIdentifier = aws_rds_cluster.this.id,
  })

  lifecycle {
    ignore_changes = [
      secret_string,
    ]
  }
}

data "aws_iam_policy_document" "secret_db_user_ops" {

  statement {
    actions = [
      "secretsmanager:*"
    ]

    resources = ["*"]

    principals {
      type = "AWS"

      identifiers = [
        # Update this to a limited set of users/roles.
        "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
      ]
    }
  }

  /*
  # Permissions for the Ops user role.
  statement {
    actions = [
      "secretsmanager:DescribeSecret",
      "secretsmanager:GetResourcePolicy",
      "secretsmanager:GetSecretValue",
      "secretsmanager:ListSecretVersionIds",
    ]

    resources = ["*"]

    principals {
      type = "AWS"

      identifiers = [
        aws_iam_role.my_app.arn
      ]
    }
  }
  */
}

locals {
  sql_statements_ops = [
    # Add the Ops user
    "CREATE USER ${local.db_ops_username} WITH PASSWORD '${ephemeral.aws_secretsmanager_random_password.db_user_ops.random_password}';",
    # Grant them access to the db
    "GRANT CONNECT ON DATABASE ${local.database_name} TO ${local.db_ops_username};",
    # Ensure access to tables in the public schema
    "GRANT USAGE ON SCHEMA public TO ${local.db_ops_username};",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO ${local.db_ops_username};",
    "GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO ${local.db_ops_username};",

    # Grant access to future tables created by the app user
    "GRANT ${local.db_app_username} TO ${aws_rds_cluster.this.master_username};",
    "ALTER DEFAULT PRIVILEGES FOR ROLE ${local.db_app_username} IN SCHEMA public GRANT SELECT ON TABLES TO ${local.db_ops_username};",
    "ALTER DEFAULT PRIVILEGES FOR ROLE ${local.db_app_username} IN SCHEMA public GRANT SELECT ON SEQUENCES TO ${local.db_ops_username};",
    "REVOKE ${local.db_app_username} FROM ${aws_rds_cluster.this.master_username};"
  ]
}

resource "terraform_data" "db_user_ops" {
  triggers_replace = {
    never = "never"
  }

  provisioner "local-exec" {
    command = join("\n", [
      for s in local.sql_statements_ops : <<-EOT
        aws rds-data execute-statement --resource-arn "$DB_ARN" --database "$DB_NAME" --secret-arn "$SECRET_ARN" --sql "${s}"
      EOT
    ])

    # We use environment variables to simplify the command.
    environment = {
      DB_ARN  = aws_rds_cluster.this.arn
      DB_NAME = aws_rds_cluster.this.database_name

      # Needs to be performed as the admin, not the app user we just created.
      SECRET_ARN = aws_rds_cluster.this.master_user_secret[0].secret_arn
    }

    interpreter = ["bash", "-c"]
  }

  depends_on = [
    aws_rds_cluster_instance.this,
    aws_secretsmanager_secret_version.db_user,
    aws_secretsmanager_secret_version.db_user_ops,
    # The app user must exist before we alter its default privileges.
    terraform_data.db_user,
  ]
}

DDL Operations

We don’t need to limit our use of the RDS Data API to only user management. We can also use it to enable extensions, create tables and more. This is ideal when your database schema doesn’t change very much.

Let’s enable pgvector, create a table and a few indices. In this example we will store our SQL commands in a separate file called setup.sql. It looks like this:

CREATE EXTENSION vector;

CREATE TABLE ticket (
  id integer PRIMARY KEY,
  category_id integer NOT NULL,
  raw_text text NOT NULL,
  processed_data text NOT NULL,
  embedding vector(1024) NOT NULL
);

CREATE INDEX ON ticket (category_id);
CREATE INDEX ON ticket USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);
CREATE INDEX ON ticket USING ivfflat (embedding vector_ip_ops) WITH (lists = 100);
CREATE INDEX ON ticket USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);

Now we can run our commands using another terraform_data resource. It looks like this.

locals {
  db_setup_script = "${path.module}/setup.sql"
}

resource "terraform_data" "db_schema" {
  triggers_replace = {
    file = filesha256(local.db_setup_script)
  }

  provisioner "local-exec" {
    command = <<-EOT
      while read line; do
        aws rds-data execute-statement --resource-arn "${aws_rds_cluster.this.arn}" --database "${aws_rds_cluster.this.database_name}" --secret-arn "${aws_rds_cluster.this.master_user_secret[0].secret_arn}" --sql "$line"
      done  < <(awk 'BEGIN{RS=";\n"}{gsub(/\n/,""); if(NF>0) {print $0";"}}' ${local.db_setup_script})
    EOT

    interpreter = ["bash", "-c"]
  }

  depends_on = [
    aws_rds_cluster_instance.this
  ]
}

Terraform checks the hash of the SQL file each time we apply, and replaces the resource when it changes, rerunning the whole script. This allows us to add more commands as our needs evolve. For reruns to succeed you need to use IF NOT EXISTS when creating tables, and take similar care with ALTER statements. That’s a topic for another post.
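If the awk one-liner in the provisioner reads as line noise, here is the same splitting logic as a Python sketch. It breaks the script on `;` followed by a newline, flattens each statement onto a single line (awk deletes the newlines outright; this version swaps them for spaces), and drops empty chunks:

```python
def split_statements(script: str) -> list[str]:
    """Mirror the awk splitter: records separated by ';\\n', newlines
    collapsed, blank records dropped, trailing ';' restored."""
    statements = []
    for chunk in script.split(";\n"):
        flat = chunk.replace("\n", " ").strip()
        if flat:
            statements.append(flat + ";")
    return statements

script = """CREATE EXTENSION vector;

CREATE TABLE ticket (
  id integer PRIMARY KEY,
  raw_text text NOT NULL
);
"""
for statement in split_statements(script):
    print(statement)
```

Each printed line is one complete statement, ready to be passed to `aws rds-data execute-statement --sql`.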

Automate Secret Rotation

One of the key features of Secrets Manager is automated credential rotation. We can enable it with a couple of new resources. Add these for each secret you create, then run apply.

data "aws_region" "current" {}

data "aws_serverlessapplicationrepository_application" "rotate_secret" {
  # Application only available in us-east-1. More info https://serverlessrepo.aws.amazon.com/applications/us-east-1/297356227824/SecretsManagerRDSPostgreSQLRotationSingleUser
  application_id = "arn:aws:serverlessrepo:us-east-1:297356227824:applications/SecretsManagerRDSPostgreSQLRotationSingleUser"
}

resource "aws_serverlessapplicationrepository_cloudformation_stack" "rotate_secret_db_user" {
  name             = "rotate-${aws_secretsmanager_secret.db_user.name}"
  application_id   = data.aws_serverlessapplicationrepository_application.rotate_secret.application_id
  semantic_version = data.aws_serverlessapplicationrepository_application.rotate_secret.semantic_version
  capabilities     = data.aws_serverlessapplicationrepository_application.rotate_secret.required_capabilities

  parameters = {
    endpoint          = "https://secretsmanager.${data.aws_region.current.name}.amazonaws.com"
    excludeCharacters = local.exclude_characters
    functionName      = "rotate-${aws_secretsmanager_secret.db_user.name}"
    # kmsKeyArn = var.kms_key_arn # You should use a CMK to encrypt the secret
  }
}

resource "aws_secretsmanager_secret_rotation" "db_user" {
  secret_id           = aws_secretsmanager_secret.db_user.id
  rotation_lambda_arn = aws_serverlessapplicationrepository_cloudformation_stack.rotate_secret_db_user.outputs["RotationLambdaARN"]

  rotation_rules {
    automatically_after_days = 30
  }
}

Staff churn is high in some organisations these days. When someone leaves, you can run the rotation function for any secrets they had access to. This is a quick way to keep your data secure.
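Rotation can also be triggered on demand. A minimal sketch; the secret name is a placeholder, and the boto3 call is commented out so the snippet doesn’t require AWS credentials:

```python
# Sketch: kick off an on-demand rotation for a secret, e.g. after a departure.
# SECRET_ID is a placeholder for your secret's name or ARN.
SECRET_ID = "db-data-api-example-user"

def rotate_command(secret_id: str) -> list[str]:
    """Build the equivalent AWS CLI invocation for reference."""
    return ["aws", "secretsmanager", "rotate-secret", "--secret-id", secret_id]

print(" ".join(rotate_command(SECRET_ID)))
# The real call, via boto3 (uncomment to run against AWS):
# import boto3
# boto3.client("secretsmanager").rotate_secret(SecretId=SECRET_ID)
```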

Done!

That’s it! You now have a database, a table, some indices, and three users - an administrator, one for the application, and one for Ops. CloudTrail tracks the use of these credentials. You can optionally provision automatic credential rotation to improve security even further.

Using the RDS Data API can simplify user provisioning in your databases. It is easier than adding a bastion, firewall rules and using a database specific provider. It also reduces the resources you expose to the public internet. 🌊

Need Help?

Do you need some help implementing the ideas in this post? Get in touch! I am happy to help.

Like and Subscribe

Did you like this post? Please subscribe so you don't miss the next one. Do you know someone who would benefit from this article? Please share it with them.

Proactive Ops is produced on the unceded territory of the Ngunnawal people. We acknowledge the Traditional Owners and pay respect to Elders past and present.
