Terraform Interview Questions [Senior level - S2E5]
1. Handling State File Locking Failures
Question:
You encounter a terraform apply error because the state file is locked by another process. How would you resolve this issue?
Answer:
Using DynamoDB for State Locking:
If using an S3 backend, ensure a DynamoDB table is configured for state locking:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}
```

Resolving Lock Conflicts:
Identify the process holding the lock (the lock error message includes the lock ID, the operation, and who acquired it).
If that process is no longer running and it is safe to do so, manually release the lock:

```
terraform force-unlock <LOCK_ID>
```
2. Cross-Account Resource Provisioning
Question:
How do you use Terraform to create resources in multiple AWS accounts from a single configuration?
Answer:
Use multiple provider aliases:
```hcl
provider "aws" {
  alias   = "account_a"
  profile = "account_a_profile"
  region  = "us-east-1"
}

provider "aws" {
  alias   = "account_b"
  profile = "account_b_profile"
  region  = "us-west-1"
}

resource "aws_s3_bucket" "account_a_bucket" {
  provider = aws.account_a
  bucket   = "bucket-in-account-a"
}

resource "aws_s3_bucket" "account_b_bucket" {
  provider = aws.account_b
  bucket   = "bucket-in-account-b"
}
```
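An alternative to named profiles, and common in CI where local credential profiles are unavailable, is cross-account role assumption via the provider's `assume_role` block (the role ARN below is illustrative):

```hcl
# Assume a deployment role in the target account instead of using a local profile.
provider "aws" {
  alias  = "account_b"
  region = "us-west-1"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terraform-deployer" # illustrative ARN
  }
}
```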
3. Preventing Accidental Deletions
Question:
How do you protect critical Terraform-managed resources from being accidentally deleted?
Answer:
Use the `prevent_destroy` lifecycle argument:

```hcl
resource "aws_s3_bucket" "critical" {
  bucket = "critical-bucket"

  lifecycle {
    prevent_destroy = true
  }
}
```

If deletion is genuinely required, remove the `prevent_destroy` setting from the configuration and then run `terraform apply` (or `terraform destroy`).
4. Terraform with Kubernetes (EKS)
Question:
How would you use Terraform to deploy a Kubernetes cluster on AWS and manage Kubernetes resources using the same configuration?
Answer:
Deploy EKS with Terraform:

```hcl
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "my-cluster"
  cluster_version = "1.24"
  # plus networking inputs such as vpc_id and subnet_ids
}
```

Use the Kubernetes provider to manage Kubernetes resources:

```hcl
data "aws_eks_cluster_auth" "main" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  token                  = data.aws_eks_cluster_auth.main.token
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}
```
5. Using Local and Remote Modules
Question:
What are the differences between local and remote modules in Terraform, and how do you use them effectively?
Answer:
Local Modules: Stored in the same repository as the Terraform configuration.

```hcl
module "vpc" {
  source = "./modules/vpc"
}
```

Remote Modules: Sourced from external repositories such as the Terraform Registry or GitHub.

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.5.0"
}
```

Recommendation: Use remote modules for common, reusable components and local modules for project-specific configurations.
6. Drift Detection
Question:
How do you handle drift between Terraform state and actual infrastructure?
Answer:
Run `terraform plan` to detect drift.
Use `terraform apply -refresh-only` (the successor to the deprecated `terraform refresh`) to update the state file with the actual resource configuration.
Manually fix mismatches, or force a drifted resource to be recreated with `terraform apply -replace=<ADDRESS>` (the modern replacement for `terraform taint`).
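The steps above can be scripted for CI with `terraform plan -detailed-exitcode`, which exits with status 2 when the plan contains changes (a sketch; the script name is illustrative):

```
#!/bin/sh
# drift-check.sh - non-interactive drift detection.
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending.
terraform plan -detailed-exitcode > /dev/null
case $? in
  0) echo "No drift detected." ;;
  2) echo "Drift detected: state differs from the configuration." ;;
  *) echo "terraform plan failed." >&2 ;;
esac
```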
7. Conditional Resource Creation
Question:
How do you create resources conditionally based on variable inputs?
Answer:
Use the `count` argument to control resource creation:

```hcl
resource "aws_instance" "example" {
  count         = var.create_instance ? 1 : 0
  ami           = "ami-123456"
  instance_type = "t2.micro"
}
```

Note that with `count`, references to the resource become indexed, e.g. `aws_instance.example[0]`.
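The `var.create_instance` flag used above is assumed to be declared as a boolean input variable, for example:

```hcl
variable "create_instance" {
  description = "Whether to create the example EC2 instance"
  type        = bool
  default     = false
}
```

Passing `-var="create_instance=true"` on the command line (or setting it in a tfvars file) then flips the count from 0 to 1.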
8. Managing Sensitive Data in Terraform Outputs
Question:
How do you ensure that sensitive outputs like passwords or keys are not exposed?
Answer:
Use the `sensitive` attribute in outputs:

```hcl
output "db_password" {
  value     = aws_db_instance.example.password
  sensitive = true
}
```

This prevents the value from being displayed in the CLI or in logs. Note that sensitive values are still stored in plain text in the state file, so the state backend itself must be secured.
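When the value is genuinely needed (for example, to seed a secrets manager), it can still be read explicitly; `terraform output -raw` prints the plain value:

```
terraform output -raw db_password
```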
9. Handling Provider Rate Limits
Question:
You encounter rate limit errors when applying Terraform changes. How would you mitigate this issue?
Answer:
Configure retry behavior for the provider:

```hcl
provider "aws" {
  max_retries = 10         # retry throttled API calls before failing
  retry_mode  = "adaptive" # back off client-side when throttling is detected
}
```

Reduce concurrency so fewer API calls are in flight at once:

```
terraform apply -parallelism=5
```

When creating many similar resources (e.g. a resource with count = 10), a lower parallelism spreads the API calls out and avoids the bursts that trigger rate limits.
10. Managing State Across Teams
Question:
How do you securely share Terraform state between team members working on the same infrastructure?
Answer:
Use a remote backend (e.g., S3 + DynamoDB for AWS):

```hcl
terraform {
  backend "s3" {
    bucket         = "shared-state"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "state-locks"
  }
}
```

Implement role-based access control (RBAC) for state file access.
Use Terraform Cloud for centralized state management and collaboration.
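As a sketch of RBAC for an S3 backend (the bucket, table, and account ID below are illustrative), an IAM policy can restrict who may read and write the state object and the lock table:

```hcl
# Illustrative least-privilege policy for operators who run terraform apply.
data "aws_iam_policy_document" "state_access" {
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::shared-state/state/terraform.tfstate"]
  }

  statement {
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
    resources = ["arn:aws:dynamodb:us-east-1:123456789012:table/state-locks"]
  }
}
```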