Table of Contents
- Stop Leaking Secrets: Real Security for Terraform
- 📦 Code Examples
- Why Your Current Approach Is a Ticking Time Bomb
- The Five Rules That Actually Matter
- Rule 1: Secrets Never Touch Code
- Rule 2: Credentials Should Expire
- Rule 3: Least Privilege Isn't Optional
- Rule 4: Encrypt Everything at Rest
- Rule 5: Audit Everything
- HashiCorp Vault: Credentials That Actually Expire
- The Old Way vs. The Vault Way
- Getting Vault Running Locally
- Actually Using Vault to Store Stuff
- Pulling Vault Secrets into Terraform
- The Really Cool Part: Vault-Generated AWS Credentials
- SOPS: Commit Secrets to Git (Safely)
- The Problem SOPS Solves
- Getting SOPS Installed
- Encrypting Your First File with SOPS
- Using SOPS-Encrypted Files in Terraform
- SOPS Best Practices
- OIDC: Stop Storing CI/CD Credentials
- Secret Rotation Strategies
- Audit Logging & Compliance
- Security Best Practices Checklist
- Quick Check: Did This Stick?
- What's Coming Next
- Go Deeper
Stop Leaking Secrets: Real Security for Terraform
We need to talk about the elephant in the room.
You've got AWS keys hardcoded in your Terraform files. Database passwords committed to Git. API tokens floating around in environment variables. And you're sweating every time someone mentions "security audit."
I've been there. We've all been there.
Here's the truth: Most infrastructure breaches don't happen because hackers are brilliant. They happen because someone left credentials sitting in plain text somewhere. CircleCI breach in 2023? 1.8 million secrets exposed. Cost? Over $10M in cleanup, plus customer trust nuked into orbit.
This isn't a scare tactic. This is what happens when security is an afterthought.
In this part, I'll show you how production teams actually handle secrets - with HashiCorp Vault, SOPS encryption, OIDC authentication, and proper audit logging. No theory. Just the stuff that works in the real world.
📦 Code Examples
Repository: terraform-hcl-tutorial-series
This Part: Part 11 - Security Practices
Get the working example:
git clone https://github.com/khuongdo/terraform-hcl-tutorial-series.git
cd terraform-hcl-tutorial-series
git checkout part-11
cd examples/part-11-security/
# Explore Vault integration and SOPS encryption
terraform init
terraform plan
Why Your Current Approach Is a Ticking Time Bomb
Let me guess what your code looks like:
# ❌ This is how disasters start
provider "aws" {
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

resource "aws_db_instance" "main" {
  password = "MyS3cr3tP@ssw0rd" # Yep, straight into Git
}

variable "api_key" {
  default = "sk_live_4eC39HqLyjWDarjtT1zdp7dc"
}
Here's what just happened:
- You committed credentials to Git (GitHub's secret scanners are screaming right now)
- Those passwords are now in your Terraform state file forever
- You've been using the same password for 3 years because changing it means updating code
- Zero audit trail - you have no idea who accessed these credentials or when
- If one key leaks, your entire infrastructure is compromised
Real Talk: The Cost of Leaking Secrets
Remember CircleCI's 2023 breach? Attackers got into environment variables. 1.8 million customer secrets potentially exposed. The damage? Over $10M in remediation costs, forced credential rotation for thousands of companies, and customer trust obliterated.
That's not theoretical. That happened.
Here's how secrets actually leak in the wild:
The usual suspects:
- Developer commits .env file to Git (happens daily)
- CI/CD logs print secrets in build output (so easy to miss)
- S3 bucket with state file set to public (one checkbox away from disaster)
- Terraform Cloud variables without encryption
- ~/.aws/credentials sitting on a compromised laptop
You need defense in depth. One layer of protection isn't enough.
The Five Rules That Actually Matter
Before we get into the tools, let's lock down the principles. These aren't suggestions - this is how you survive security audits.
Rule 1: Secrets Never Touch Code
Secrets don't belong in:
- Terraform .tf files
- Git repos (even private ones)
- Unencrypted state files
- CI/CD config files
Where they DO belong: Dedicated secret managers like Vault, AWS Secrets Manager, or GCP Secret Manager.
Rule 2: Credentials Should Expire
Static credentials are like milk - they go bad, you just don't know when. Dynamic credentials expire automatically.
Think about it: Instead of AWS access keys that live forever, use OIDC tokens that die after an hour. Attacker steals credentials? Cool, they're useless in 59 minutes.
Rule 3: Least Privilege Isn't Optional
Your Terraform only creates S3 buckets? Then it doesn't need EC2 permissions. Period.
Give the minimum required access. Nothing more. When (not if) something gets compromised, you want the blast radius small.
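To make that concrete, here's a minimal sketch of a least-privilege policy for a workspace that only manages S3 buckets. The policy name and bucket ARN pattern are placeholders, not part of the series repo:

# Least-privilege sketch: Terraform that only touches S3 gets S3-only permissions
data "aws_iam_policy_document" "terraform_s3_only" {
  statement {
    effect = "Allow"
    actions = [
      "s3:CreateBucket",
      "s3:DeleteBucket",
      "s3:GetBucket*",
      "s3:PutBucket*",
      "s3:ListBucket",
    ]
    resources = ["arn:aws:s3:::my-app-*"] # placeholder bucket prefix
  }
}

resource "aws_iam_policy" "terraform_s3_only" {
  name   = "terraform-s3-only"
  policy = data.aws_iam_policy_document.terraform_s3_only.json
}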
Rule 4: Encrypt Everything at Rest
If it's sensitive and sitting on disk, it better be encrypted:
- Terraform state files (use an encrypted S3 backend; see the sketch after this list)
- Secret storage (Vault handles this)
- Config files (that's what SOPS is for)
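For the state file, that typically means a backend block along these lines - a minimal sketch, assuming an existing bucket, KMS alias, and lock table:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # assumed bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # server-side encryption for state
    kms_key_id     = "alias/terraform-state" # assumed KMS alias
    dynamodb_table = "terraform-locks"       # assumed lock table
  }
}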
Rule 5: Audit Everything
You need to answer three questions when something goes wrong:
- Who grabbed that secret?
- When did they access it?
- What did they do with it?
If you can't answer these, you're flying blind during an incident.
Now let's actually implement this stuff.
HashiCorp Vault: Credentials That Actually Expire
Vault is where the magic happens. Instead of storing static credentials, Vault generates fresh ones on demand - and kills them automatically after a set time.
The Old Way vs. The Vault Way
Old way:
Developer → Grabs AWS key from .env file → Uses it forever → Credential eventually leaks
Vault way:
Developer → Asks Vault for AWS creds → Gets 1-hour token → Token dies automatically → Leak? Meh, already expired
Why this matters:
- Every request gets unique credentials (no sharing the same key across your whole team)
- Automatic expiration (credentials die whether you remember to rotate them or not)
- Everything logged (know exactly who grabbed what and when)
- Fine-grained policies (control who can request which secrets)
Getting Vault Running Locally
First, install it:
# macOS (easiest)
brew tap hashicorp/tap
brew install hashicorp/tap/vault
# Linux
wget https://releases.hashicorp.com/vault/1.15.0/vault_1.15.0_linux_amd64.zip
unzip vault_1.15.0_linux_amd64.zip
sudo mv vault /usr/local/bin/
# Check it worked
vault version
# Should see: Vault v1.15.0
Fire up Vault in dev mode (perfect for testing, terrible for production):
vault server -dev
# You'll see something like:
# Root Token: hvs.xxxxxxxxxxx
# Unseal Key: (not needed in dev mode)
# Vault Address: http://127.0.0.1:8200
# In a new terminal, configure environment
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='hvs.xxxxxxxxxxx' # Paste the root token from above
Dev mode warning: All data lives in memory and disappears when you stop the server. Great for learning, catastrophic for production. Don't even think about using this for real workloads.
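For reference, a production setup runs against persistent storage with TLS. A minimal server config sketch (paths and hostnames are assumptions, and this part doesn't cover production Vault operations in depth):

# vault-config.hcl (illustrative only)
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault/tls/vault.crt"
  tls_key_file  = "/etc/vault/tls/vault.key"
}

api_addr     = "https://vault.example.com:8200"
cluster_addr = "https://vault-1.example.com:8201"
ui           = true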
Actually Using Vault to Store Stuff
Let's throw some secrets in there:
# Turn on the key-value storage engine
vault secrets enable -path=secret kv-v2
# Store database credentials
vault kv put secret/database/prod \
username=dbadmin \
password=SuperSecure123
# Store API keys
vault kv put secret/api/stripe \
publishable_key=pk_test_xxx \
secret_key=sk_test_yyy
# Make sure it worked
vault kv get secret/database/prod
# You'll see:
# ====== Data ======
# Key Value
# username dbadmin
# password SuperSecure123
Pulling Vault Secrets into Terraform
Time to connect Terraform to Vault:
# versions.tf
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.23"
    }
  }
}

# provider.tf
provider "vault" {
  address = "https://vault.example.com:8200"
  # Set VAULT_TOKEN as environment variable, or use auth_login block for dynamic auth
}
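If you'd rather not export a token at all, the provider's auth_login block can authenticate dynamically. A sketch using AppRole, assuming you've already enabled that auth method and created a role (the variables are assumptions, passed in via TF_VAR_* environment variables):

provider "vault" {
  address = "https://vault.example.com:8200"

  # Authenticate with AppRole instead of a static VAULT_TOKEN
  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = var.vault_role_id   # assumed variable
      secret_id = var.vault_secret_id # assumed variable
    }
  }
}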
Now read secrets from Vault instead of hardcoding them:
# main.tf
# Grab database credentials from Vault
data "vault_kv_secret_v2" "db" {
  mount = "secret"
  name  = "database/prod"
}

# Use them in your resources
resource "aws_db_instance" "main" {
  identifier = "production-db"
  engine     = "postgres"

  # No more hardcoded passwords - pull from Vault
  username = data.vault_kv_secret_v2.db.data["username"]
  password = data.vault_kv_secret_v2.db.data["password"]

  instance_class    = "db.t3.medium"
  allocated_storage = 100
}

# Same deal for API keys
data "vault_kv_secret_v2" "stripe" {
  mount = "secret"
  name  = "api/stripe"
}

resource "kubernetes_secret" "stripe_api" {
  metadata {
    name = "stripe-credentials"
  }

  data = {
    secret_key = data.vault_kv_secret_v2.stripe.data["secret_key"]
  }
}
What you just gained:
- No secrets in your .tf files (finally!)
- All secrets in one place (Vault)
- Full audit trail (Vault logs every access)
- Rotate secrets without touching code (change in Vault, done)
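One caveat: values read from Vault still end up in Terraform's state and can surface in plan output, so mark anything you expose as sensitive. A quick sketch (the output name is hypothetical):

# Hypothetical output - Vault-sourced values still live in state,
# so at minimum keep them out of CLI and log output
output "db_username" {
  value     = data.vault_kv_secret_v2.db.data["username"]
  sensitive = true
}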
The Really Cool Part: Vault-Generated AWS Credentials
Here's where it gets interesting. Instead of permanent AWS keys, Vault can generate temporary ones on the fly:
# Turn on AWS credential generation
vault secrets enable aws
# Give Vault root AWS credentials (one-time setup)
vault write aws/config/root \
  access_key=AKIAIOSFODNN7EXAMPLE \
  secret_key=wJalrXUtnFEMI/K7MDENG \
  region=us-east-1

# Create a role that defines what permissions to grant
vault write aws/roles/terraform \
  credential_type=iam_user \
  policy_document=-<<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*", "s3:*"],
      "Resource": "*"
    }
  ]
}
EOF
Now watch Terraform ask for credentials on demand:
# Request fresh AWS credentials from Vault (expires in 1 hour)
data "vault_aws_access_credentials" "terraform" {
backend = "aws"
role = "terraform"
type = "creds"
}
provider "aws" {
region = "us-east-1"
access_key = data.vault_aws_access_credentials.terraform.access_key
secret_key = data.vault_aws_access_credentials.terraform.secret_key
}
Here's what just happened:
- Terraform says "I need AWS creds"
- Vault creates brand new IAM credentials on the spot
- Credentials work for 1 hour (you can change this)
- Hour's up? Vault auto-nukes them
- Every request logged (full audit trail)
Pro tip: For production, use OIDC instead of Vault tokens (we'll cover that in a minute).
SOPS: Commit Secrets to Git (Safely)
SOPS (Secrets OPerationS) is brilliant. It encrypts config files so you can commit them to Git without your security team having a meltdown.
The Problem SOPS Solves
You've got a config file:
database:
  password: SuperSecret123
  api_key: sk_live_xxxxx
Can't commit this to Git. Secrets are right there in plain text.
SOPS fixes it:
database:
  password: ENC[AES256_GCM,data:Qr8fK3w=,iv:xxx,tag:yyy,type:str]
  api_key: ENC[AES256_GCM,data:9xKp2,iv:zzz,tag:aaa,type:str]
sops:
  kms:
    - arn: arn:aws:kms:us-east-1:123456789012:key/abc-def
      created_at: "2025-06-04T10:00:00Z"
Now you can commit to Git. The secrets are encrypted gibberish. Only people with the KMS key can decrypt them.
Getting SOPS Installed
# macOS (one command)
brew install sops
# Linux (download binary)
wget https://github.com/getsops/sops/releases/download/v3.8.1/sops-v3.8.1.linux.amd64
chmod +x sops-v3.8.1.linux.amd64
sudo mv sops-v3.8.1.linux.amd64 /usr/local/bin/sops
# Make sure it worked
sops --version
# Should see: sops 3.8.1
Encrypting Your First File with SOPS
First, create a KMS key in AWS:
# Make an encryption key
aws kms create-key --description "SOPS encryption key"
# You'll get back a KeyId like: 12345678-abcd-1234-abcd-123456789012
# Create an alias so you don't have to memorize that UUID
aws kms create-alias \
--alias-name alias/sops \
--target-key-id 12345678-abcd-1234-abcd-123456789012
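If you'd rather manage that key as code (which fits this series), the Terraform equivalent looks roughly like this - a sketch of the same key and alias:

# Same key, managed by Terraform instead of the CLI
resource "aws_kms_key" "sops" {
  description         = "SOPS encryption key"
  enable_key_rotation = true # annual automatic key rotation
}

resource "aws_kms_alias" "sops" {
  name          = "alias/sops"
  target_key_id = aws_kms_key.sops.key_id
}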
Now create a file with secrets:
# secrets.yaml
database:
  host: db.example.com
  username: admin
  password: SuperSecret123
stripe:
  api_key: sk_live_xxxxxxxxxxxxx
  webhook_secret: whsec_yyyyyyyy
Encrypt it:
# Encrypt using your KMS key
sops --encrypt \
--kms 'arn:aws:kms:us-east-1:123456789012:key/12345678-abcd-1234-abcd-123456789012' \
secrets.yaml > secrets.enc.yaml
# Or use the alias (easier)
sops --encrypt \
--kms 'arn:aws:kms:us-east-1:123456789012:alias/sops' \
secrets.yaml > secrets.enc.yaml
The encrypted file looks like this:
database:
  host: db.example.com  # Not sensitive, stays plain text
  username: admin       # Not sensitive, stays plain text
  password: ENC[AES256_GCM,data:Qr8fK3wX9k=,iv:xxx,tag:yyy,type:str]
stripe:
  api_key: ENC[AES256_GCM,data:9xKp2L,iv:zzz,tag:aaa,type:str]
  webhook_secret: ENC[AES256_GCM,data:P4rT,iv:bbb,tag:ccc,type:str]
sops:
  kms:
    - arn: arn:aws:kms:us-east-1:123456789012:key/12345678-abcd-1234
      created_at: "2025-06-04T10:30:00Z"
Notice something cool? SOPS only encrypts the actual secrets. Host and username stay readable because they're not sensitive.
Using SOPS-Encrypted Files in Terraform
Decrypt file before applying:
# Decrypt and pipe to Terraform
sops -d secrets.enc.yaml > secrets.yaml
terraform apply -var-file=secrets.yaml
# Or use SOPS Terraform provider
SOPS Terraform provider:
# versions.tf
terraform {
  required_providers {
    sops = {
      source  = "carlpett/sops"
      version = "~> 1.0"
    }
  }
}

# main.tf
# Read SOPS-encrypted file
data "sops_file" "secrets" {
  source_file = "secrets.enc.yaml"
}

# Use decrypted values
resource "aws_db_instance" "main" {
  password = data.sops_file.secrets.data["database.password"]
}

resource "kubernetes_secret" "stripe" {
  metadata {
    name = "stripe-credentials"
  }

  data = {
    api_key = data.sops_file.secrets.data["stripe.api_key"]
  }
}
Workflow:
- Edit secrets: sops secrets.enc.yaml (opens in your editor, auto-encrypts on save)
- Commit encrypted file to Git
- CI/CD decrypts using KMS permissions (see the policy sketch after this list)
- Terraform reads decrypted values
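For step 3, the CI role only needs permission to decrypt with that one key. A sketch under assumptions: the role name is hypothetical, and the key reference assumes the aws_kms_key.sops resource shown earlier.

# Grant the CI/CD role decrypt-only access to the SOPS key
data "aws_iam_policy_document" "sops_decrypt" {
  statement {
    effect    = "Allow"
    actions   = ["kms:Decrypt", "kms:DescribeKey"]
    resources = [aws_kms_key.sops.arn] # assumes the KMS key managed above
  }
}

resource "aws_iam_role_policy" "ci_sops_decrypt" {
  name   = "sops-decrypt"
  role   = "ci-deploy-role" # hypothetical CI role name
  policy = data.aws_iam_policy_document.sops_decrypt.json
}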
SOPS Best Practices
1. Use .sops.yaml for configuration:
# .sops.yaml (in repo root)
creation_rules:
  - path_regex: \.prod\.yaml$
    kms: arn:aws:kms:us-east-1:123456789012:alias/sops-prod
  - path_regex: \.dev\.yaml$
    kms: arn:aws:kms:us-west-2:987654321098:alias/sops-dev
Now encrypt without specifying KMS key:
sops -e secrets.prod.yaml > secrets.prod.enc.yaml
# Uses prod KMS key automatically
2. Use encrypted_regex to limit what gets encrypted:
Only keys matching the pattern are encrypted; everything else stays readable:
# .sops.yaml
creation_rules:
  - path_regex: \.yaml$
    encrypted_regex: '^(password|api_key|secret|token)$'
3. Multiple decryption keys (team access):
creation_rules:
  - kms: >-
      arn:aws:kms:us-east-1:111111111111:alias/sops,
      arn:aws:kms:us-east-1:222222222222:alias/sops
    pgp: >-
      FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4,
      1234567890ABCDEF1234567890ABCDEF12345678
Multiple keys = multiple people can decrypt.
OIDC: Stop Storing CI/CD Credentials
Here's the problem: Your CI/CD pipeline needs AWS credentials to deploy stuff. The old way? Store permanent AWS keys in GitHub Secrets or GitLab variables.
What could go wrong? If GitHub gets hacked (or your CI/CD provider), attackers walk away with permanent credentials to your AWS account. Not great.
OIDC (OpenID Connect) fixes this by using temporary tokens instead of permanent keys.
The Old Way vs. OIDC
Old way (storing long-lived keys):
# ❌ This is a permanent AWS key sitting in GitHub
- name: Configure AWS
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
OIDC way (no stored credentials):
# ✅ No secrets stored - GitHub gets temp creds on demand
- name: Configure AWS
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsRole
    aws-region: us-east-1
How OIDC actually works:
- GitHub creates an OIDC token that proves "this is really a GitHub Actions workflow"
- GitHub sends that token to AWS
- AWS checks the signature and says "yep, this is legit"
- AWS hands back temporary credentials (good for 1 hour)
- Workflow uses the creds, then they expire
No secrets stored in GitHub. If GitHub gets breached, attackers get... nothing.
Setting Up OIDC for GitHub Actions + AWS
Step 1: Create OIDC Identity Provider in AWS
# oidc.tf
resource "aws_iam_openid_connect_provider" "github" {
url = "https://token.actions.githubusercontent.com"
client_id_list = ["sts.amazonaws.com"]
# GitHub's OIDC thumbprints (verify these periodically)
thumbprint_list = [
"6938fd4d98bab03faadb97b34396831e3780aea1",
"1c58a3a8518e8759bf075b76b750d4f2df264fcd"
]
}
Step 2: Create IAM role for GitHub Actions
# Trust policy: Only specific GitHub repo can assume role
data "aws_iam_policy_document" "github_assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      # Only allow from specific repo and branch
      values   = ["repo:khuongdo/terraform-infra:ref:refs/heads/main"]
    }
  }
}

resource "aws_iam_role" "github_actions" {
  name               = "GitHubActionsRole"
  assume_role_policy = data.aws_iam_policy_document.github_assume_role.json
}

# Attach permissions (Terraform needs these)
resource "aws_iam_role_policy_attachment" "terraform_policy" {
  role       = aws_iam_role.github_actions.name
  policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}
Step 3: Use in GitHub Actions workflow
# .github/workflows/terraform.yml
name: Terraform Deploy

on:
  push:
    branches: [main]

# Required for OIDC
permissions:
  id-token: write
  contents: read

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate using OIDC (no secrets!)
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsRole
          aws-region: us-east-1

      # Now AWS CLI and Terraform work automatically
      - name: Terraform Init
        run: terraform init

      - name: Terraform Apply
        run: terraform apply -auto-approve
Security win: if GitHub Actions is compromised, attackers only get one-hour credentials scoped to a specific repo and branch.
OIDC for GitLab CI + GCP
Step 1: Create GCP Workload Identity Pool
# Enable required API
gcloud services enable iamcredentials.googleapis.com
# Create Workload Identity Pool
gcloud iam workload-identity-pools create "gitlab-pool" \
--location="global" \
--description="GitLab CI OIDC"
# Add GitLab as OIDC provider
gcloud iam workload-identity-pools providers create-oidc "gitlab-provider" \
--location="global" \
--workload-identity-pool="gitlab-pool" \
--issuer-uri="https://gitlab.com" \
--allowed-audiences="https://gitlab.com" \
--attribute-mapping="google.subject=assertion.sub,attribute.project_path=assertion.project_path"
Step 2: Grant service account permissions
# Create service account for Terraform
gcloud iam service-accounts create terraform-gitlab \
--display-name="Terraform via GitLab CI"
# Grant Terraform permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:terraform-gitlab@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/editor"
# Allow workload identity
gcloud iam service-accounts add-iam-policy-binding \
terraform-gitlab@PROJECT_ID.iam.gserviceaccount.com \
--role="roles/iam.workloadIdentityUser" \
--member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/gitlab-pool/attribute.project_path/khuongdo/terraform-infra"
Step 3: GitLab CI configuration
# .gitlab-ci.yml
terraform:
  image: google/cloud-sdk:alpine
  script:
    # Authenticate using OIDC token from GitLab
    - echo $CI_JOB_JWT_V2 > .ci_job_jwt_file
    - gcloud iam workload-identity-pools create-cred-config
      projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/gitlab-pool/providers/gitlab-provider
      --service-account=terraform-gitlab@PROJECT_ID.iam.gserviceaccount.com
      --credential-source-file=.ci_job_jwt_file
      --output-file=credentials.json
    - export GOOGLE_APPLICATION_CREDENTIALS=credentials.json
    # Now run Terraform
    - terraform init
    - terraform apply -auto-approve
  only:
    - main
Result: No GCP service account keys stored in GitLab. Temporary credentials only.
Secret Rotation Strategies
Static secrets eventually leak. Rotation limits blast radius.
Automated Rotation with Vault
Vault can automatically rotate secrets on schedule:
# Enable database secrets engine
vault secrets enable database
# Configure PostgreSQL connection
vault write database/config/production \
plugin_name=postgresql-database-plugin \
allowed_roles="terraform" \
connection_url="postgresql://{{username}}:{{password}}@postgres.example.com:5432/mydb" \
username="vaultadmin" \
password="vault-root-password"
# Create role with automatic rotation (30 days)
vault write database/roles/terraform \
db_name=production \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
default_ttl="24h" \
max_ttl="720h"
# Request credentials (auto-rotated every 24h)
vault read database/creds/terraform
# Key Value
# lease_id database/creds/terraform/abc123
# lease_duration 24h
# username v-terraform-abc123
# password A1a-random-password
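Since this is a Terraform series, the same engine setup can be codified with the Vault provider instead of ad-hoc CLI commands. A sketch, not a drop-in: the connection details mirror the commands above, and the admin password variable is an assumption.

# Enable the database secrets engine
resource "vault_mount" "database" {
  path = "database"
  type = "database"
}

# Connection Vault uses to manage PostgreSQL users
resource "vault_database_secret_backend_connection" "production" {
  backend       = vault_mount.database.path
  name          = "production"
  allowed_roles = ["terraform"]

  postgresql {
    connection_url = "postgresql://{{username}}:{{password}}@postgres.example.com:5432/mydb"
    username       = "vaultadmin"
    password       = var.vault_db_admin_password # assumed variable, never hardcoded
  }
}

# Role that issues short-lived credentials
resource "vault_database_secret_backend_role" "terraform" {
  backend     = vault_mount.database.path
  name        = "terraform"
  db_name     = vault_database_secret_backend_connection.production.name
  default_ttl = 86400   # 24h, in seconds
  max_ttl     = 2592000 # 720h, in seconds

  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
  ]
}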
In Terraform:
data "vault_database_secret_backend_connection" "db" {
backend = "database"
name = "terraform"
}
resource "kubernetes_secret" "db_creds" {
metadata {
name = "db-credentials"
}
data = {
username = data.vault_database_secret_backend_connection.db.username
password = data.vault_database_secret_backend_connection.db.password
}
}
Every Terraform run gets new database credentials, automatically revoked after 24 hours.
Manual Rotation for Cloud Provider Keys
Rotate AWS access keys every 90 days:
#!/bin/bash
# rotate-aws-keys.sh

# Capture the current (soon-to-be-old) key ID before creating a new one
OLD_ACCESS_KEY=$(aws iam list-access-keys --user-name terraform-user \
  --query 'AccessKeyMetadata[0].AccessKeyId' --output text)

# Create new access key
NEW_KEY=$(aws iam create-access-key --user-name terraform-user --output json)
NEW_ACCESS_KEY=$(echo $NEW_KEY | jq -r '.AccessKey.AccessKeyId')
NEW_SECRET_KEY=$(echo $NEW_KEY | jq -r '.AccessKey.SecretAccessKey')

# Update Vault with new credentials
vault kv put secret/aws/terraform \
  access_key=$NEW_ACCESS_KEY \
  secret_key=$NEW_SECRET_KEY

# Wait 1 hour (ensure Vault updated everywhere)
sleep 3600

# Delete old access key
aws iam delete-access-key --user-name terraform-user --access-key-id $OLD_ACCESS_KEY
Schedule with cron:
# Rotate AWS keys every 90 days
0 0 1 */3 * /usr/local/bin/rotate-aws-keys.sh
Audit Logging & Compliance
Compliance requires answering: Who accessed what, when?
Vault Audit Logging
Enable audit device:
# Log to file
vault audit enable file file_path=/var/log/vault-audit.log
# Log to syslog
vault audit enable syslog tag="vault" facility="AUTH"
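If Vault itself is managed with Terraform, the same thing can be expressed as code via the vault_audit resource - a small sketch of the file device shown above:

# Equivalent to `vault audit enable file ...`, managed as code
resource "vault_audit" "file" {
  type = "file"

  options = {
    file_path = "/var/log/vault-audit.log"
  }
}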
Sample audit log entry:
{
  "time": "2026-06-04T14:30:00Z",
  "type": "response",
  "auth": {
    "client_token": "hmac-sha256:abc123",
    "display_name": "terraform-user",
    "policies": ["default", "terraform"]
  },
  "request": {
    "operation": "read",
    "path": "secret/data/database/prod"
  },
  "response": {
    "secret": {
      "lease_id": "secret/data/database/prod/abc123"
    }
  }
}
Query logs for compliance reports:
# Who accessed production database secret?
cat /var/log/vault-audit.log | jq 'select(.request.path == "secret/data/database/prod")'
# All secret accesses by user
cat /var/log/vault-audit.log | jq 'select(.auth.display_name == "terraform-user")'
Terraform State Audit (S3 Backend)
Enable CloudTrail for S3 state bucket:
# s3-backend.tf
resource "aws_s3_bucket" "terraform_state" {
bucket = "my-terraform-state"
versioning {
enabled = true # Track all state changes
}
}
# Enable CloudTrail logging
resource "aws_cloudtrail" "state_audit" {
name = "terraform-state-audit"
s3_bucket_name = aws_s3_bucket.audit_logs.id
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${aws_s3_bucket.terraform_state.arn}/*"]
}
}
}
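While you're here: Rule 4 says state at rest should be encrypted too. On the bucket side that's one more resource - a sketch, where the KMS key reference is an assumption (swap in your own key or drop it to use the default S3-managed keys):

# Default server-side encryption for the state bucket
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.terraform_state.arn # hypothetical key for state encryption
    }
  }
}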
Query CloudTrail for state file access:
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=ResourceName,AttributeValue=my-terraform-state/prod/terraform.tfstate \
--max-results 50
Policy-as-Code Enforcement (Sentinel)
Terraform Cloud supports Sentinel policies to enforce security:
# sentinel.hcl
policy "require-vault-secrets" {
  enforcement_level = "hard-mandatory"
}

policy "no-public-s3-buckets" {
  enforcement_level = "hard-mandatory"
}
Example policy:
# require-vault-secrets.sentinel
import "tfplan/v2" as tfplan
import "strings"

# Ensure no hardcoded passwords
deny_hardcoded_passwords = rule {
  all tfplan.resource_changes as _, rc {
    all rc.change.after as attr, val {
      attr not matches "password|secret|api_key" or
      strings.has_prefix(val, "vault:")
    }
  }
}

main = rule {
  deny_hardcoded_passwords
}
Effect: Terraform apply blocked if hardcoded secrets detected.
Security Best Practices Checklist
Before deploying to production, verify:
Secrets Management:
- No credentials in .tf files
- No credentials in Git history (git log -p | grep -i password)
- All secrets stored in Vault or cloud secret managers
- SOPS used for encrypted config files
- Terraform state encrypted at rest (S3 SSE, GCS encryption)
Access Control:
- OIDC configured for CI/CD (no long-lived keys)
- IAM policies follow least privilege
- Vault policies restrict access by role
- MFA enabled for privileged accounts
- Terraform Cloud workspace access restricted
Credential Lifecycle:
- Secret rotation strategy defined (90-day max)
- Vault dynamic secrets used where possible
- Emergency key rotation procedure documented
- Expired credentials automatically revoked
Audit & Compliance:
- Vault audit logging enabled
- CloudTrail logs Terraform state access
- Logs retained per compliance requirements (1+ year)
- Automated compliance scanning (tfsec, Checkov)
- Policy-as-code enforcement (Sentinel, OPA)
Incident Response:
- Runbook for compromised credentials
- Automated alerts for suspicious access patterns
- Vault emergency sealing procedure
- Backup encryption keys stored securely (offline)
Quick Check: Did This Stick?
Let's make sure you actually got this (not just skimmed it):
1. Why can't you hardcode secrets in Terraform?
Because secrets in code end up in Git history forever, sit in plain text in state files, have zero audit trail, require code changes to rotate, and turn one leak into total infrastructure compromise.
2. What makes Vault's dynamic secrets better than static keys?
Dynamic secrets are generated fresh for each request, expire automatically after an hour (or whatever TTL you set), give each person unique credentials, and get auto-revoked. Static credentials? They live forever and everyone shares them.
3. What's SOPS actually for?
Encrypting config files so you can commit them to Git safely. Only people with the KMS or PGP key can decrypt them. Everyone else sees encrypted garbage.
4. How does OIDC make CI/CD more secure?
It kills the need for permanent credentials stored in GitHub/GitLab. Instead, your CI/CD gets temporary tokens that expire in an hour. Breach happens? Creds are already dead.
5. What should Vault audit logs tell you?
Who accessed what secret, when they grabbed it, and whether it worked or failed. That's your paper trail for security audits and incident response.
If you nailed these, you're ready for production-level secret management.
What's Coming Next
Part 12 is the series finale, and we're going big. We'll put everything together into a production-ready platform:
- Full CI/CD pipeline with security scanning built in
- Blue-green deployments so you can deploy without fear
- Disaster recovery that actually works
- Compliance automation for SOC 2, HIPAA, PCI-DSS
- Multi-account AWS setups
- GitOps workflows with Terraform
You'll have an infrastructure platform that passes security audits, scales to enterprise needs, and doesn't keep you up at night.
Ready to finish this? Part 12 drops soon.
Go Deeper
Want to dig into this stuff more? Here's where to start:
- HashiCorp Vault Docs - Official guides, covers everything
- SOPS on GitHub - Full documentation and examples
- AWS OIDC Setup Guide - Step-by-step from GitHub
- Terraform Vault Provider - All the resources and data sources
- OWASP Secrets Management - Security best practices
Got questions? Drop them in the comments. Actually helpful feedback? Even better.
Series navigation:
- Part 1: Why Infrastructure as Code?
- Part 2: Setting Up Terraform
- Part 3: Your First Cloud Resource
- Part 4: HCL Fundamentals
- Part 5: Variables, Outputs & State
- Part 6: Core Terraform Workflow
- Part 7: Modules for Organization
- Part 8: Multi-Cloud Patterns
- Part 9: State Management & Team Workflows
- Part 10: Testing & Validation
- Part 11: Security & Secrets Management (You are here)
- Part 12: Production Patterns & DevSecOps
Part of the "Terraform from Fundamentals to Production" series. Real-world Terraform for people who ship to production.