
Setting Up Terraform

Welcome back. In Part 1, we talked about why Infrastructure as Code matters and why Terraform became the standard. Now we're going to actually install it and get it working.

By the end of this post, you'll have Terraform running on your machine, cloud credentials configured properly, and your first project initialized. No more slides—let's build something.

📦 Code Examples

All code examples for this tutorial series are available in the companion repository:

Repository: terraform-hcl-tutorial-series
This Part: Part 2 - Provider Setup Examples

To get hands-on with the examples:

git clone https://github.com/khuongdo/terraform-hcl-tutorial-series.git
cd terraform-hcl-tutorial-series
git checkout part-02

# Try AWS example
cd examples/part-02-setup/aws/
terraform init
terraform plan

# Or try GCP example
cd ../gcp/
terraform init
terraform plan

# Or try Azure example
cd ../azure/
terraform init
terraform plan

Each example validates your provider configuration without creating resources (zero cost).

Why Setup Actually Matters

I know, I know. You want to jump straight to deploying infrastructure. But spend 20 minutes getting this right now, and you'll save yourself hours of debugging later.

Here's what happens when you rush the setup:

  • Version conflicts, because Project A needs Terraform 1.4 but Project B needs 1.7.
  • Credentials hardcoded in config files, which fails your next security audit.
  • An afternoon wasted debugging broken authentication that turns out to be a simple PATH issue.
  • A teammate who can't run your code because their environment is configured differently.

Get the foundation right and you get reproducible environments, secure credential management, version isolation per project, and clear audit trails. This isn't just about making Terraform work—it's about making it work reliably for the next three years.

Installing Terraform

Good news: Terraform is just a single binary. No runtime, no dependencies, no complicated setup. Download it, put it in your PATH, done.

macOS

If you're on macOS, use Homebrew. Don't overthink it.

# Install Terraform
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Verify it worked
terraform version
# Should show: Terraform v1.9.0 (or whatever's current)

Why Homebrew? Because updates are literally brew upgrade terraform. That's it.

If you're allergic to Homebrew for some reason:

# Download the latest release from https://releases.hashicorp.com/terraform/
# (on Apple Silicon, grab the darwin_arm64 build instead of darwin_amd64)
wget https://releases.hashicorp.com/terraform/1.9.0/terraform_1.9.0_darwin_amd64.zip

# Unzip and move to PATH
unzip terraform_1.9.0_darwin_amd64.zip
sudo mv terraform /usr/local/bin/

# Check it
terraform version

Linux

On Ubuntu or Debian, add HashiCorp's repository:

# Add GPG key and repo
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# Install
sudo apt update && sudo apt install terraform

# Verify
terraform version

Or go the manual route:

wget https://releases.hashicorp.com/terraform/1.9.0/terraform_1.9.0_linux_amd64.zip
unzip terraform_1.9.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform version

Windows

If you use Chocolatey:

choco install terraform
terraform version

Otherwise, download the ZIP from HashiCorp's releases page, extract it to C:\Program Files\Terraform\, and add that directory to your system PATH.

To add to PATH: search "Environment Variables" in the Start menu, go to System Properties, click Environment Variables, edit the Path variable, add your Terraform directory, save, and restart your terminal.

Then verify:

terraform version

Quick Check

Once installed, run this:

terraform version

You should see version 1.7.0 or later. If you want tab completion (and you do), run:

terraform -install-autocomplete

Then reload your shell or open a new terminal window. Now you can tab-complete Terraform commands.

If terraform version fails, check your PATH. On macOS/Linux, run which terraform to see if it's actually there. On Windows, restart your terminal after modifying PATH. If you get "permission denied" on Linux, run sudo chmod +x /usr/local/bin/terraform.

How Cloud Authentication Works

Terraform needs permission to create resources in your cloud account. Each cloud provider handles this differently, but the principle is the same: you authenticate locally, and Terraform inherits those credentials.

Here's what you should NEVER do:

# ❌ DO NOT DO THIS
provider "aws" {
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

Why is this so bad? Because once you commit this to Git, those credentials are in your history forever. Even if you delete them in the next commit, they're still there. They'll also show up in Terraform state files and plan outputs. One accidental push to a public repo and you're funding someone else's crypto mining operation.

Instead, you authenticate using the cloud provider's CLI tool. Terraform automatically picks up those credentials from your environment. Much safer, much cleaner.
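
For contrast, here's roughly what the safe version looks like once the CLI is configured. The region is the only thing you set; everything sensitive stays out of the file:

# ✅ Safe: no credentials in the config
provider "aws" {
  region = "us-west-2"
  # Credentials are picked up from your environment: the AWS CLI config,
  # environment variables, or an IAM role. Nothing secret lives in this file.
}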

AWS Setup

AWS has three ways to authenticate. Use them in this order.

Method 1: AWS CLI (Start Here)

Install the AWS CLI if you haven't already:

# macOS
brew install awscli

# Linux
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Windows: download the MSI from https://aws.amazon.com/cli/

Configure your credentials:

aws configure

It'll ask you for four things:

  1. AWS Access Key ID (get this from the AWS Console → Security Credentials → Access Keys)
  2. AWS Secret Access Key (shown once when you create the key—copy it now or lose it forever)
  3. Default region (something like us-west-2)
  4. Output format (just say json)

Test it:

aws sts get-caller-identity

You should see your AWS account ID and user ARN. If you do, Terraform will automatically use these credentials.

Method 2: Environment Variables (For CI/CD)

If you're running Terraform in a CI/CD pipeline, you can set credentials as environment variables:

export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-west-2"

This is how GitHub Actions and GitLab CI typically work.

Method 3: IAM Roles (Production Best Practice)

If you're running Terraform on an EC2 instance or in AWS CodeBuild, use IAM roles instead of access keys. No long-lived credentials, automatic rotation, and you can scope permissions exactly.

Terraform detects this automatically:

provider "aws" {
  region = "us-west-2"
  # No credentials block needed—uses instance metadata
}

This is the gold standard for production. No keys to leak.
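
And if the role Terraform should use isn't the instance's own role (a cross-account setup, say), the AWS provider can assume a role for you. A minimal sketch, with a made-up role ARN:

provider "aws" {
  region = "us-west-2"

  assume_role {
    # Hypothetical ARN. Replace with a role that actually exists in your account.
    role_arn     = "arn:aws:iam::123456789012:role/terraform-deploy"
    session_name = "terraform"
  }
}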

GCP Setup

Google Cloud uses Application Default Credentials via the gcloud CLI.

Install gcloud

# macOS
brew install --cask google-cloud-sdk

# Linux (Debian/Ubuntu)
sudo apt-get install apt-transport-https ca-certificates gnupg curl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update && sudo apt-get install google-cloud-cli

# Windows: download from https://cloud.google.com/sdk/docs/install

Authenticate

gcloud init
gcloud config set project YOUR_PROJECT_ID
gcloud auth application-default login

That last command opens your browser for OAuth login. Once you authenticate, credentials get saved to ~/.config/gcloud/application_default_credentials.json. Terraform reads this file automatically.

Verify it worked:

gcloud projects list

You should see your GCP projects.

Service Account Keys (Only for CI/CD)

For local development, use Application Default Credentials. But if you need a service account key for CI/CD:

gcloud iam service-accounts create terraform-sa --display-name "Terraform Service Account"

gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:terraform-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/editor"

gcloud iam service-accounts keys create ~/terraform-key.json \
  --iam-account=terraform-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com

export GOOGLE_APPLICATION_CREDENTIALS="$HOME/terraform-key.json"

Service account keys are long-lived credentials. If someone gets that JSON file, they have full access to your GCP project. Treat it like a password. Don't commit it to Git. Use it only when you absolutely have to.
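
For completeness, here's a sketch of how the provider can consume that key in a pipeline. The credentials argument is optional; leaving it out and relying on GOOGLE_APPLICATION_CREDENTIALS (as exported above) works just as well. The project ID is a placeholder:

provider "google" {
  project = "YOUR_PROJECT_ID"
  region  = "us-central1"

  # Optional: read the key file directly. Without this line, the provider
  # falls back to GOOGLE_APPLICATION_CREDENTIALS or gcloud's ADC file.
  credentials = file(pathexpand("~/terraform-key.json"))
}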

Azure Setup

Azure uses the Azure CLI for authentication.

Install Azure CLI

# macOS
brew install azure-cli

# Linux (Ubuntu/Debian)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Windows: download from https://aka.ms/installazurecliwindows

Login

az login

This opens your browser. Log in, and the CLI stores your credentials.

If you have multiple subscriptions:

az account list --output table
az account set --subscription "YOUR_SUBSCRIPTION_ID"

Check which subscription is active:

az account show

You should see your subscription ID and user email.

Service Principal (For CI/CD)

For pipelines:

az ad sp create-for-rbac --name "terraform-sp" --role Contributor

This outputs an appId, password, and tenant ID. Save them securely and set these environment variables:

export ARM_CLIENT_ID="appId from output"
export ARM_CLIENT_SECRET="password from output"
export ARM_SUBSCRIPTION_ID="your-subscription-id"
export ARM_TENANT_ID="tenant from output"
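
The nice part: none of those values ever appear in your Terraform code. With the ARM_* variables exported, the provider block stays exactly as bare as it is for local development, something like:

provider "azurerm" {
  features {}
  # No credentials here. The provider reads ARM_CLIENT_ID, ARM_CLIENT_SECRET,
  # ARM_SUBSCRIPTION_ID, and ARM_TENANT_ID from the environment.
}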

Understanding Providers

Now that authentication is set up, let's talk about providers.

A provider is a plugin that translates your Terraform config into API calls. When you write resource "aws_s3_bucket", the AWS provider turns that into AWS SDK calls. Same idea for GCP and Azure.

Think of providers like database drivers. You don't talk to PostgreSQL directly—you use a driver that speaks the protocol. Providers do the same thing for cloud APIs.

Here's what a provider configuration looks like:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

Let's break this down.

The terraform block declares which providers you need. The source is where to download it from—hashicorp/aws translates to registry.terraform.io/hashicorp/aws. The version constraint locks you to version 5.x but not 6.0.

Why does versioning matter? Because six months from now, the AWS provider might ship a 6.0 release with breaking changes. Without a version constraint, your code breaks unexpectedly. With ~> 5.0, you stay on 5.x until you explicitly upgrade.

Version constraint examples (there's a sketch showing them in context right after this list):

  • ~> 5.0 means "any 5.x version, but not 6.0" (recommended)
  • = 5.31.0 means "exactly this version" (too strict)
  • >= 5.0, < 6.0 means "explicit range" (also fine)
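
Here's roughly how those options look in place. Only the constraint string changes; everything else is the same block you saw above:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"

      # Pick one style:
      version = "~> 5.0"            # any 5.x, never 6.0 (recommended)
      # version = "= 5.31.0"        # exactly this version (too strict for most teams)
      # version = ">= 5.0, < 6.0"   # explicit range (same effect as ~> 5.0)
    }
  }
}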

The provider block configures the provider. For AWS, that's usually just the region. Credentials come from your environment (AWS CLI, environment variables, or IAM role), so you don't need to specify them here.

You can use multiple providers in one project:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

provider "google" {
  project = "my-gcp-project"
  region  = "us-central1"
}

Now you can create resources in both AWS and GCP from the same Terraform code.
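
To make that concrete, here's a hedged sketch of two resources from the same config, one landing in each cloud. The bucket names are placeholders and would need to be globally unique:

# Created through the aws provider
resource "aws_s3_bucket" "assets" {
  bucket = "my-team-assets-example-aws" # placeholder name
}

# Created through the google provider
resource "google_storage_bucket" "assets" {
  name     = "my-team-assets-example-gcp" # placeholder name
  location = "us-central1"
}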

Your First Terraform Project

Let's actually initialize a project.

Project Structure

Create a directory:

mkdir ~/terraform-getting-started
cd ~/terraform-getting-started

I recommend this file structure:

terraform-getting-started/
├── main.tf          # Resources
├── variables.tf     # Input variables
├── outputs.tf       # Outputs
└── terraform.tfvars # Variable values (don't commit secrets here)

You don't need all of these yet, but this is where you're headed.
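
If you're wondering what goes in terraform.tfvars: it's just plain assignments for the variables you declare in variables.tf. A tiny sketch matching the AWS variables used later in this post:

# terraform.tfvars (values only, nothing secret)
aws_region  = "us-west-2"
environment = "dev"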

Write Your First Config (AWS)

Create main.tf:

terraform {
  required_version = ">= 1.7.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    # random_id below comes from the hashicorp/random provider; declaring it
    # explicitly pins its version instead of letting Terraform infer it
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_s3_bucket" "learning_bucket" {
  bucket = "terraform-learning-${var.environment}-${random_id.bucket_suffix.hex}"

  tags = {
    Name        = "Terraform Learning Bucket"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

resource "random_id" "bucket_suffix" {
  byte_length = 4
}

Create variables.tf:

variable "aws_region" {
  description = "AWS region for resources"
  type        = string
  default     = "us-west-2"
}

variable "environment" {
  description = "Environment name (dev, staging, prod)"
  type        = string
  default     = "dev"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

Create outputs.tf:

output "bucket_name" {
  description = "Name of the created S3 bucket"
  value       = aws_s3_bucket.learning_bucket.id
}

output "bucket_arn" {
  description = "ARN of the S3 bucket"
  value       = aws_s3_bucket.learning_bucket.arn
}

Or Use GCP Instead

If you prefer Google Cloud, replace main.tf with:

terraform {
  required_version = ">= 1.7.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
    # Declared explicitly because random_id is used below
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
}

resource "google_storage_bucket" "learning_bucket" {
  name     = "terraform-learning-${var.environment}-${random_id.bucket_suffix.hex}"
  location = var.gcp_region

  labels = {
    environment = var.environment
    managed_by  = "terraform"
  }
}

resource "random_id" "bucket_suffix" {
  byte_length = 4
}

And update variables.tf:

variable "gcp_project_id" {
  description = "GCP project ID"
  type        = string
}

variable "gcp_region" {
  description = "GCP region for resources"
  type        = string
  default     = "us-central1"
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "dev"
}
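
The outputs.tf from the AWS version carries over almost unchanged. A sketch using the resource name from the GCP config above:

output "bucket_name" {
  description = "Name of the created GCS bucket"
  value       = google_storage_bucket.learning_bucket.name
}

output "bucket_url" {
  description = "gs:// URL of the bucket"
  value       = google_storage_bucket.learning_bucket.url
}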

Or Azure

For Azure:

terraform {
  required_version = ">= 1.7.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    # Declared explicitly because random_id is used below
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "learning" {
  name     = "rg-terraform-learning-${var.environment}"
  location = var.azure_region
}

resource "azurerm_storage_account" "learning" {
  name                     = "tflearn${var.environment}${random_id.suffix.hex}"
  resource_group_name      = azurerm_resource_group.learning.name
  location                 = azurerm_resource_group.learning.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = var.environment
    managed_by  = "terraform"
  }
}

resource "random_id" "suffix" {
  byte_length = 4
}

And variables.tf:

variable "azure_region" {
  description = "Azure region for resources"
  type        = string
  default     = "East US"
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "dev"
}
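
And if you want matching outputs for the Azure version, a sketch along the same lines:

output "storage_account_name" {
  description = "Name of the created storage account"
  value       = azurerm_storage_account.learning.name
}

output "resource_group_name" {
  description = "Name of the resource group"
  value       = azurerm_resource_group.learning.name
}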

Run terraform init

Now comes the moment of truth.

terraform init

This command does four things:

  1. Downloads the provider plugins you specified (AWS, GCP, or Azure)
  2. Initializes the backend (local by default, remote in production; there's a sketch of a remote backend right after this list)
  3. Creates a .terraform/ directory to store provider binaries
  4. Generates .terraform.lock.hcl to lock provider versions
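
You don't need anything beyond the local backend yet, but just so that second step isn't abstract, here's a rough sketch of what a remote S3 backend declaration looks like. The bucket name and key are placeholders:

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"          # placeholder: an existing bucket
    key    = "getting-started/terraform.tfstate"  # path to the state object
    region = "us-west-2"
  }
}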

You should see output like:

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.0"...
- Installing hashicorp/aws v5.31.0...
- Installed hashicorp/aws v5.31.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

Check what got created:

ls -la

You should see:

  • .terraform/ (provider binaries—don't commit this to Git)
  • .terraform.lock.hcl (version lock file—DO commit this)
  • main.tf, variables.tf, outputs.tf (your config)

The Lock File

Open .terraform.lock.hcl. It looks like this:

provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:...",
    "zh:...",
  ]
}

This file locks the exact provider version. Even though you said ~> 5.0, Terraform picked version 5.31.0 and wrote it down here. The next time you or your teammate runs terraform init, you'll get the same version.

The hashes verify provider integrity—basically, Terraform checks that the provider you download hasn't been tampered with.

Should you commit this file? Yes. Add it to Git so everyone on your team uses identical providers.

Troubleshooting Init Errors

If terraform init fails with "Failed to query available provider packages," check your internet connection. Terraform downloads providers from registry.terraform.io.

If you see "Required version constraint not satisfied," your Terraform version is too old. Run terraform version to check, then upgrade via Homebrew or re-download from HashiCorp.

If you get "Error configuring the backend," check your main.tf syntax. Run terraform validate to catch syntax errors.

What to Commit to Git

Create a .gitignore in your project:

# Local .terraform directories
**/.terraform/*

# State files (contain resource IDs and sometimes secrets)
*.tfstate
*.tfstate.*

# Crash logs
crash.log
crash.*.log

# Variable files with secrets
*.tfvars
!example.tfvars

# CLI config (may contain credentials)
.terraformrc
terraform.rc

# Override files
override.tf
override.tf.json
*_override.tf
*_override.tf.json

What you SHOULD commit:

  • *.tf files (all your Terraform code)
  • .terraform.lock.hcl (provider version lock)
  • README.md (documentation)
  • example.tfvars (template without real values)

What you should NEVER commit:

  • *.tfstate (contains resource IDs, may have secrets)
  • .terraform/ (provider binaries—these are re-downloadable)
  • terraform.tfvars if it contains credentials or API keys

Quick Self-Check

Before moving to Part 3, make sure you can answer these:

  1. What are the three AWS authentication methods?

    • AWS CLI credentials, environment variables, IAM roles.
  2. What does terraform init do?

    • Downloads providers, initializes backend, creates .terraform/ directory, generates lock file.
  3. Why use version constraints like ~> 5.0?

    • Prevents breaking changes from accidental provider upgrades while allowing minor updates.
  4. Should you commit .terraform.lock.hcl?

    • Yes. It ensures your team uses the same provider versions.
  5. Why never hardcode credentials in provider blocks?

    • They get exposed in Git history, Terraform state, and plan outputs. Use CLI tools or environment variables instead.

If you got those right, you're ready for Part 3.

What's Next

In Part 3, we'll actually deploy something. You'll run terraform plan to preview changes, terraform apply to create real cloud resources, and terraform destroy to clean up.

This is where Terraform stops being abstract and starts being real. You'll create actual infrastructure in AWS, GCP, or Azure, see how Terraform tracks state, and learn how to modify resources without breaking things.

Ready? Continue to Part 3 → (coming soon)



Questions? Drop a comment. I respond to everything and incorporate feedback into future posts.


Part of the "Terraform from Fundamentals to Production" series. Follow along to master Infrastructure as Code.