
Modules: Organization & Reusability

Your main.tf hit 800 lines last week. You've copy-pasted the same VPC configuration across three projects. Someone on Slack just asked "which version of the security group rules should we actually use?"

Sound familiar?

This is what I call the module moment - that point where your Terraform practice needs to evolve from scripts to actual architecture.

Here's what we're building in Part 7: reusable infrastructure modules that turn repeated patterns into single-line imports. By the end, you'll publish your first module to the Terraform Registry. More importantly, you'll understand why modules are what separate infrastructure teams that scale from those that burn out.

📦 Code Examples

Repository: terraform-hcl-tutorial-series
This Part: Part 7 - Module Examples

Get the working example:

git clone https://github.com/khuongdo/terraform-hcl-tutorial-series.git
cd terraform-hcl-tutorial-series
git checkout part-07
cd examples/part-07-modules/

# Explore module patterns
terraform init
terraform plan

Why Modules Actually Matter

Let's skip the theory and talk about a real problem you've probably hit.

You've got three environments: dev, staging, production. Each needs a VPC with the full setup - public and private subnets across 3 availability zones, NAT gateways, route tables, security groups, VPC endpoints for S3 and DynamoDB.

Without modules, here's what happens:

# dev/main.tf (500 lines)
resource "aws_vpc" "dev" { ... }
resource "aws_subnet" "dev_public_1" { ... }
resource "aws_subnet" "dev_public_2" { ... }
# ... 30 more resources ...

# staging/main.tf (you copy-paste dev/main.tf and find-replace "dev" → "staging")
resource "aws_vpc" "staging" { ... }
resource "aws_subnet" "staging_public_1" { ... }
# ... 30 more resources ...

# production/main.tf (copy-paste again, fingers crossed)
resource "aws_vpc" "prod" { ... }
# ...

Six months later, your infrastructure looks like this:

  • Staging somehow has 4 subnets while production only has 3
  • A critical security group rule exists only in dev (discovered during an incident)
  • Nobody knows which configuration is the "source of truth"
  • Updating all three environments means three separate PRs that inevitably drift

This doesn't scale. More importantly, this is how production incidents happen.

The module approach changes everything:

# modules/vpc/main.tf - write it ONCE
resource "aws_vpc" "main" {
  cidr_block = var.cidr_block
}

resource "aws_subnet" "public" {
  count      = length(var.availability_zones)
  vpc_id     = aws_vpc.main.id
  cidr_block = cidrsubnet(var.cidr_block, 8, count.index)
  # ... complete configuration
}
# ... all VPC resources defined once

Then use it three times:

# dev/main.tf
module "vpc" {
  source = "../modules/vpc"

  environment        = "dev"
  cidr_block         = "10.0.0.0/16"
  availability_zones = ["us-west-2a", "us-west-2b"]
}

# staging/main.tf
module "vpc" {
  source = "../modules/vpc"

  environment        = "staging"
  cidr_block         = "10.1.0.0/16"
  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

# production/main.tf
module "vpc" {
  source = "../modules/vpc"

  environment        = "production"
  cidr_block         = "10.2.0.0/16"
  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

One update to modules/vpc/ changes all three environments at their next apply. Consistency guaranteed, and no more copy-paste drift.
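The `cidrsubnet(var.cidr_block, 8, count.index)` call in the module extends the /16 by 8 bits, carving one /24 per availability zone. You can verify the arithmetic in `terraform console`:

```hcl
# terraform console
> cidrsubnet("10.0.0.0/16", 8, 0)
"10.0.0.0/24"
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
> cidrsubnet("10.1.0.0/16", 8, 2)
"10.1.2.0/24"
```

Because dev, staging, and production pass different base CIDRs, the same expression yields non-overlapping subnets in every environment.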

Module Structure: Less Magic Than You Think

Modules are just directories of Terraform files. That's it. No compilation, no special tooling, no magic.

The community has settled on a structure that works:

modules/vpc/
├── main.tf       # Core resource definitions
├── variables.tf  # Input variable declarations
├── outputs.tf    # Output value declarations
└── README.md     # Documentation (optional but you'll regret skipping it)

For production modules, add these:

modules/vpc/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
├── versions.tf      # Provider version constraints
├── examples/        # Usage examples (your future self will thank you)
│   └── complete/
│       ├── main.tf
│       └── README.md
└── CHANGELOG.md     # Version history

Let's break down what goes in each file.

main.tf - Your actual infrastructure:

# modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = var.cidr_block
  enable_dns_hostnames = var.enable_dns_hostnames
  enable_dns_support   = var.enable_dns_support

  tags = merge(
    var.tags,
    {
      Name = "${var.name}-vpc"
    }
  )
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = merge(
    var.tags,
    {
      Name = "${var.name}-igw"
    }
  )
}

# ... more resources

variables.tf - The module's API. This is what users interact with:

# modules/vpc/variables.tf
variable "name" {
  description = "Name prefix for all VPC resources"
  type        = string
}

variable "cidr_block" {
  description = "CIDR block for VPC"
  type        = string

  validation {
    condition     = can(cidrhost(var.cidr_block, 0))
    error_message = "Must be valid IPv4 CIDR block."
  }
}

variable "availability_zones" {
  description = "List of availability zones for subnets"
  type        = list(string)
}

variable "enable_dns_hostnames" {
  description = "Enable DNS hostnames in VPC"
  type        = bool
  default     = true
}

variable "tags" {
  description = "Additional tags for resources"
  type        = map(string)
  default     = {}
}

outputs.tf - What you expose to module consumers:

# modules/vpc/outputs.tf
output "vpc_id" {
  description = "ID of created VPC"
  value       = aws_vpc.main.id
}

output "vpc_cidr_block" {
  description = "CIDR block of VPC"
  value       = aws_vpc.main.cidr_block
}

output "public_subnet_ids" {
  description = "List of public subnet IDs"
  value       = aws_subnet.public[*].id
}

output "private_subnet_ids" {
  description = "List of private subnet IDs"
  value       = aws_subnet.private[*].id
}

versions.tf - Lock down provider versions before they break you:

# modules/vpc/versions.tf
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

Building Your First Module: Web Server

Theory's done. Let's build something real - a reusable web server module.

First, plan your module's interface before writing any code:

What users configure (inputs):

  • Instance type (t3.micro, t3.medium, etc.)
  • AMI ID
  • Subnet ID
  • Security group rules
  • Tags

What users need back (outputs):

  • Instance ID
  • Public IP
  • Private IP

Create the structure:

mkdir -p modules/web-server
cd modules/web-server
touch main.tf variables.tf outputs.tf README.md

Define your variables:

# modules/web-server/variables.tf
variable "name" {
  description = "Name of the web server instance"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"

  validation {
    condition     = can(regex("^t3\\.", var.instance_type))
    error_message = "Only t3 instance types supported."
  }
}

variable "ami_id" {
  description = "AMI ID for instance"
  type        = string
}

variable "subnet_id" {
  description = "Subnet ID for instance placement"
  type        = string
}

variable "allowed_cidr_blocks" {
  description = "CIDR blocks allowed to access web server on port 80"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "tags" {
  description = "Additional tags"
  type        = map(string)
  default     = {}
}

Implement the resources:

# modules/web-server/main.tf

# Look up the subnet so the security group is created in the same VPC
# as the instance (without vpc_id it would land in the default VPC)
data "aws_subnet" "selected" {
  id = var.subnet_id
}

resource "aws_security_group" "web" {
  name_prefix = "${var.name}-web-"
  description = "Allow HTTP inbound traffic"
  vpc_id      = data.aws_subnet.selected.vpc_id

  ingress {
    description = "HTTP from specified CIDR blocks"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = var.allowed_cidr_blocks
  }

  egress {
    description = "Allow all outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(
    var.tags,
    {
      Name = "${var.name}-web-sg"
    }
  )

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_instance" "web" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = <<-EOF
              #!/bin/bash
              yum update -y
              yum install -y httpd
              systemctl start httpd
              systemctl enable httpd
              echo "<h1>Hello from ${var.name}</h1>" > /var/www/html/index.html
              EOF

  tags = merge(
    var.tags,
    {
      Name = var.name
    }
  )
}

Define outputs:

# modules/web-server/outputs.tf
output "instance_id" {
  description = "ID of EC2 instance"
  value       = aws_instance.web.id
}

output "public_ip" {
  description = "Public IP address of instance"
  value       = aws_instance.web.public_ip
}

output "private_ip" {
  description = "Private IP address of instance"
  value       = aws_instance.web.private_ip
}

output "security_group_id" {
  description = "ID of security group"
  value       = aws_security_group.web.id
}

Document it (seriously, do this):

# Web Server Module

Deploys a simple EC2-based web server with Apache httpd.

## Usage

module "web_server" {
  source = "./modules/web-server"

  name          = "my-web-server"
  instance_type = "t3.micro"
  ami_id        = "ami-0c55b159cbfafe1f0"  # Amazon Linux 2
  subnet_id     = aws_subnet.public.id

  allowed_cidr_blocks = ["203.0.113.0/24"]

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

Requirements

  • Terraform >= 1.0
  • AWS Provider >= 5.0

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| name | Name of web server instance | string | - | yes |
| instance_type | EC2 instance type | string | t3.micro | no |
| ami_id | AMI ID | string | - | yes |
| subnet_id | Subnet ID | string | - | yes |
| allowed_cidr_blocks | CIDR blocks for HTTP access | list(string) | ["0.0.0.0/0"] | no |

Outputs

| Name | Description |
|------|-------------|
| instance_id | EC2 instance ID |
| public_ip | Public IP address |
| private_ip | Private IP address |
| security_group_id | Security group ID |

Test it before you ship it:

# test/main.tf
module "web_server" {
  source = "../modules/web-server"

  name          = "test-server"
  instance_type = "t3.micro"
  ami_id        = "ami-0c55b159cbfafe1f0"
  subnet_id     = "subnet-12345678"  # Replace with real subnet

  tags = {
    Environment = "test"
  }
}

output "server_ip" {
  value = module.web_server.public_ip
}

cd test
terraform init
terraform plan
terraform apply

Where Modules Live: Local, Registry, Git

Modules can come from three places.

Local paths - for development and internal modules:

# Relative path
module "vpc" {
  source = "./modules/vpc"
}

# Absolute path
module "vpc" {
  source = "/home/user/terraform-modules/vpc"
}

# Parent directory
module "vpc" {
  source = "../shared-modules/vpc"
}

Terraform Registry - for public community modules:

# Official AWS VPC module
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = false

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

Source format: NAMESPACE/NAME/PROVIDER

  • terraform-aws-modules = who published it
  • vpc = what it does
  • aws = which provider

Always pin versions:

version = "5.1.2"           # Exact version (safest)
version = ">= 5.1.0"        # Minimum version
version = "~> 5.1.0"        # Allow 5.1.x patches only
version = ">= 5.1.0, < 6.0" # Version range

Git repositories - for private company modules:

# GitHub HTTPS
module "vpc" {
  source = "github.com/your-org/terraform-modules//vpc?ref=v1.2.3"
}

# GitHub SSH
module "vpc" {
  source = "git@github.com:your-org/terraform-modules.git//vpc?ref=v1.2.3"
}

# GitLab
module "vpc" {
  source = "git::https://gitlab.com/your-org/terraform-modules.git//vpc?ref=main"
}

# Bitbucket
module "vpc" {
  source = "git::https://bitbucket.org/your-org/terraform-modules.git//vpc?ref=v1.0.0"
}

Git source syntax:

  • // separates repo from subdirectory path
  • ?ref= specifies the Git reference (branch, tag, commit SHA)

Use Git sources for private modules, pre-Registry testing, or forked public modules.

Module Versioning: Don't Break Production

Once multiple teams use your module, versioning becomes critical. Breaking prod because you refactored variable names is a resume-generating event.

Use semantic versioning (SemVer): MAJOR.MINOR.PATCH

  • MAJOR (1.0.0 → 2.0.0): Breaking changes - renamed variables, removed outputs, incompatible behavior
  • MINOR (1.0.0 → 1.1.0): New features, backward-compatible additions
  • PATCH (1.0.0 → 1.0.1): Bug fixes, documentation updates

Example version history:

v1.0.0 - Initial release
v1.0.1 - Fix security group description typo
v1.1.0 - Add optional "enable_ipv6" variable
v1.1.1 - Fix subnet CIDR calculation bug
v2.0.0 - BREAKING: Rename "subnet_count" to "availability_zones" (list)

Tag your releases in Git:

# After committing module changes
git tag -a v1.2.0 -m "Add support for custom DNS resolvers"
git push origin v1.2.0

# Consumers reference this tag
module "vpc" {
  source = "git::https://github.com/yourorg/modules.git//vpc?ref=v1.2.0"
}

Version constraint strategies:

# Conservative (production)
version = "1.2.3"  # Exact pin - nothing changes without you knowing

# Balanced (most common)
version = "~> 1.2"  # Any 1.x release from 1.2 up, blocks 2.0.0

# Flexible (development)
version = ">= 1.0, < 2.0"  # Any 1.x version

Module Design Patterns That Work

Great modules have great interfaces. Here are patterns that actually work in production.

Pattern 1: Required vs Optional Variables

# Required (no default) - user must provide
variable "vpc_id" {
  description = "VPC ID where resources will be created"
  type        = string
}

# Optional with sensible default
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

# Optional with null default (enables conditional logic)
variable "kms_key_id" {
  description = "KMS key ID for encryption. If null, uses AWS-managed keys."
  type        = string
  default     = null
}
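A null default shines inside the module body: pass the variable straight through, and the provider falls back to its own default. A minimal sketch, assuming the module manages an encrypted EBS volume (the resource and placement here are illustrative):

```hcl
# Sketch: null simply falls through to the AWS-managed key;
# set kms_key_id to use a customer-managed key instead.
resource "aws_ebs_volume" "data" {
  availability_zone = "us-west-2a"   # illustrative placement
  size              = 20
  encrypted         = true
  kms_key_id        = var.kms_key_id # null => aws/ebs default key
}
```

No conditional logic needed in the common case — the provider handles the null for you.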

Pattern 2: Feature Flags

variable "enable_monitoring" {
  description = "Enable detailed CloudWatch monitoring"
  type        = bool
  default     = false
}

variable "enable_public_ip" {
  description = "Assign public IP to instances"
  type        = bool
  default     = true
}

# Usage in module
resource "aws_instance" "this" {
  # ...
  monitoring                  = var.enable_monitoring
  associate_public_ip_address = var.enable_public_ip
}

Pattern 3: Object Variables for Complex Config

variable "database_config" {
  description = "Database configuration"
  type = object({
    engine            = string
    engine_version    = string
    instance_class    = string
    allocated_storage = number
    multi_az          = bool
  })

  default = {
    engine            = "postgres"
    engine_version    = "14.7"
    instance_class    = "db.t3.micro"
    allocated_storage = 20
    multi_az          = false
  }
}

# Usage
module "database" {
  source = "./modules/rds"

  database_config = {
    engine            = "mysql"
    engine_version    = "8.0"
    instance_class    = "db.t3.small"
    allocated_storage = 50
    multi_az          = true
  }
}

Pattern 4: Computed Outputs

# Simple output
output "instance_id" {
  value = aws_instance.web.id
}

# Computed output (connection string)
output "database_connection_string" {
  value       = "postgresql://${aws_db_instance.this.username}@${aws_db_instance.this.endpoint}/${aws_db_instance.this.db_name}"
  description = "Full PostgreSQL connection string"
  sensitive   = true
}

# List output (from count or for_each)
output "instance_ids" {
  value = aws_instance.web[*].id
}

# Map output
output "subnet_ids_by_az" {
  value = {
    for subnet in aws_subnet.private :
    subnet.availability_zone => subnet.id
  }
}
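On the consumer side, a computed output drops straight into other resources. A hypothetical example, assuming an RDS module instantiated as module "database" — here the connection string is stored in SSM Parameter Store:

```hcl
# Hypothetical consumer: persist the module's computed connection
# string as a SecureString for the application to read at startup.
resource "aws_ssm_parameter" "db_url" {
  name  = "/app/database-url"
  type  = "SecureString"
  value = module.database.database_connection_string
}
```

Because the output is marked sensitive, Terraform redacts it from plan and apply logs automatically.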

Publishing to Terraform Registry

Ready to share your module? Here's the step-by-step.

Prerequisites:

  1. Public GitHub repository
  2. Repository name must be terraform-PROVIDER-NAME (e.g., terraform-aws-vpc)
  3. Semantic version tags (v1.0.0, v1.1.0)
  4. Standard module structure (main.tf, variables.tf, outputs.tf)

Publishing steps:

Prepare your GitHub repository:

# Create repo with correct naming
gh repo create terraform-aws-web-server --public
cd terraform-aws-web-server

# Add your module files
# (main.tf, variables.tf, outputs.tf, README.md)

# Commit and tag first version
git add .
git commit -m "Initial module release"
git tag v1.0.0
git push origin main
git push origin v1.0.0

Go to registry.terraform.io:

  • Click "Publish" → "Module"
  • Sign in with GitHub
  • Select your repository

The registry automatically:

  • Detects semantic version tags
  • Generates docs from README.md
  • Parses variables and outputs
  • Creates usage examples

Your module is now live:

module "web_server" {
  source  = "YOUR_GITHUB_USERNAME/web-server/aws"
  version = "1.0.0"

  # ...
}

Registry best practices:

Your README.md should include:

# Module Name

Brief description (1-2 sentences).

## Usage

\`\`\`hcl
module "example" {
  source = "..."
  # Minimal viable example
}
\`\`\`

## Examples

- [Complete example](./examples/complete)
- [Minimal example](./examples/minimal)

## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.0 |
| aws | >= 5.0 |

## Inputs

(Auto-generated by registry)

## Outputs

(Auto-generated by registry)

Add examples directory:

terraform-aws-web-server/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
└── examples/
    ├── complete/
    │   ├── main.tf
    │   └── README.md
    └── minimal/
        ├── main.tf
        └── README.md

Module Composition: Building Bigger Things

The real power emerges when you compose modules into higher-level abstractions.

Here's a three-tier application:

# environments/production/main.tf

# Layer 1: Network foundation
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name = "production-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
}

# Layer 2: Database
module "database" {
  source = "./modules/rds-postgres"

  name       = "production-db"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids

  instance_class    = "db.r5.xlarge"
  allocated_storage = 100
  multi_az          = true

  allowed_cidr_blocks = module.vpc.private_subnets_cidr_blocks
}

# Layer 3: Application servers
module "app_servers" {
  source = "./modules/ecs-service"

  name           = "production-app"
  vpc_id         = module.vpc.vpc_id
  subnet_ids     = module.vpc.private_subnet_ids
  desired_count  = 5

  environment_variables = {
    DB_HOST     = module.database.endpoint
    DB_NAME     = module.database.database_name
    DB_USER     = module.database.username
    ENVIRONMENT = "production"
  }
}

# Layer 4: Load balancer
module "load_balancer" {
  source = "./modules/alb"

  name       = "production-alb"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.public_subnet_ids

  target_group_arn = module.app_servers.target_group_arn
}

# Outputs
output "app_url" {
  value = "https://${module.load_balancer.dns_name}"
}

output "database_endpoint" {
  value     = module.database.endpoint
  sensitive = true
}

This pattern:

  • Separates concerns (network, data, compute)
  • Passes outputs from one module as inputs to another
  • Creates a clear dependency graph
  • Enables testing each layer independently

Advanced Patterns Worth Knowing

Pattern 1: Conditional Resource Creation

# modules/web-server/main.tf

# Create elastic IP only if enabled
resource "aws_eip" "this" {
  count = var.enable_elastic_ip ? 1 : 0

  instance = aws_instance.web.id
  domain   = "vpc"
}

# Output is conditional
output "elastic_ip" {
  value = var.enable_elastic_ip ? aws_eip.this[0].public_ip : null
}

Pattern 2: For_Each for Dynamic Resources

# modules/vpc/main.tf

# Create one subnet per AZ
resource "aws_subnet" "private" {
  for_each = toset(var.availability_zones)

  vpc_id            = aws_vpc.main.id
  availability_zone = each.value
  cidr_block        = cidrsubnet(var.cidr_block, 8, index(var.availability_zones, each.value))

  tags = {
    Name = "${var.name}-private-${each.value}"
    Type = "private"
  }
}

# Output as map
output "private_subnet_ids" {
  value = {
    for az, subnet in aws_subnet.private :
    az => subnet.id
  }
}
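Returning a map keyed by AZ makes consumer-side lookups explicit. A sketch, assuming the module is instantiated as module "vpc" (the AMI variable is illustrative):

```hcl
# Pin an instance to a specific AZ's subnet via the map output
resource "aws_instance" "worker" {
  ami           = var.ami_id # illustrative variable
  instance_type = "t3.micro"
  subnet_id     = module.vpc.private_subnet_ids["us-west-2a"]
}

# Or recover a plain list when order doesn't matter:
# subnet_ids = values(module.vpc.private_subnet_ids)
```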

Pattern 3: Multi-Region with Provider Aliases

# Deploy same infrastructure to multiple regions

provider "aws" {
  alias  = "us_west"
  region = "us-west-2"
}

provider "aws" {
  alias  = "us_east"
  region = "us-east-1"
}

# Use module twice with different providers
module "vpc_west" {
  source = "./modules/vpc"

  providers = {
    aws = aws.us_west
  }

  name = "vpc-west"
  cidr = "10.0.0.0/16"
}

module "vpc_east" {
  source = "./modules/vpc"

  providers = {
    aws = aws.us_east
  }

  name = "vpc-east"
  cidr = "10.1.0.0/16"
}

Testing Your Modules

Production modules need automated testing.

Basic validation:

# In module directory
terraform fmt -check -recursive
terraform validate

# Add to CI/CD pipeline

Integration testing with Terratest (Go):

// test/vpc_test.go
package test

import (
    "testing"

    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

func TestVPCModule(t *testing.T) {
    terraformOptions := &terraform.Options{
        TerraformDir: "../examples/complete",
    }

    defer terraform.Destroy(t, terraformOptions)
    terraform.InitAndApply(t, terraformOptions)

    vpcID := terraform.Output(t, terraformOptions, "vpc_id")
    assert.NotEmpty(t, vpcID)
}

Run with:

cd test
go test -v -timeout 30m

Mistakes to Avoid

Don't hardcode provider configuration in modules:

# Bad - forces all users to us-west-2
provider "aws" {
  region = "us-west-2"
}

# Good - version constraint only
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

Don't require too many variables:

# Bad - 15 required variables
variable "vpc_id" { type = string }
variable "subnet_id_1" { type = string }
variable "subnet_id_2" { type = string }
# ... 12 more

# Good - group related variables
variable "network_config" {
  type = object({
    vpc_id     = string
    subnet_ids = list(string)
  })
}

Don't make overly generic modules:

# Bad - tries to do everything
module "generic_resource" {
  # 50 variables, handles EC2, RDS, S3, Lambda...
  # Impossible to understand or maintain
}

# Good - single responsibility
module "web_server" {
  # Focused on one thing: EC2 web servers
}

Don't skip version constraints:

# Bad - latest version (breaks when v6.0.0 releases)
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
}

# Good - pinned to safe range
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.1"  # Allow 5.1.x patches
}

Check Your Understanding

Before Part 8, make sure you can answer:

  1. What problem do modules solve?

    • Eliminate copy-paste duplication, ensure consistency across environments, create reusable infrastructure patterns
  2. What are the three essential files in a module?

    • main.tf (resources), variables.tf (inputs), outputs.tf (exposed values)
  3. How do you version a module in Git?

    • Use semantic versioning tags (v1.0.0, v1.1.0, v2.0.0), push tags to Git, consumers reference with ?ref=v1.0.0
  4. What's the difference between local and registry module sources?

    • Local uses file path (./modules/vpc), registry uses NAMESPACE/NAME/PROVIDER with version constraint
  5. When should you increment MAJOR vs MINOR vs PATCH?

    • MAJOR for breaking changes, MINOR for new features (backward-compatible), PATCH for bug fixes

If you're solid on these, you're ready for multi-cloud patterns.

What's Next?

In Part 8: Multi-Cloud Patterns, we'll tackle:

  • Deploying the same infrastructure to AWS, GCP, and Azure
  • Provider-agnostic module design
  • Cloud-specific vs generic abstractions
  • When to use multi-cloud (and when not to)
  • Real-world hybrid cloud architectures

You'll build a cloud-agnostic VPC module that works across all three major providers with minimal code changes.

Ready to go multi-cloud? Continue to Part 8 → (coming soon)



Questions or feedback? Drop a comment below.



This post is part of the "Terraform from Fundamentals to Production" series. Follow along to master Infrastructure as Code with Terraform.