

Using Terraform Workspaces for Multi-Region Deployments in AWS

This blog post talks about using Terraform workspaces as a mechanism to maintain consistent environments across multiple cloud regions. While the examples in the post are AWS-centric, the concepts highlighted here are really cloud-agnostic.

Short intro to Terraform State & Workspaces

For Terraform to be able to map resources in your config files to resources that have been provisioned in AWS or any other provider, it maintains a sort of lookup table in the form of the “Terraform state”. For example, if you were to have the following in your tf project:

resource "aws_s3_bucket" "my-s3-bucket" {
  bucket = "my-s3-bucket"
  acl    = "private"

  tags = {
    Name        = "my-s3-bucket"
    Environment = "dev"
  }
}

Terraform will maintain a reference to the ARN of the actual S3 bucket, along with its other attributes, in its state file, like so:

{
  "type": "aws_s3_bucket",
  "primary": {
    "id": "my-s3-bucket",
    "attributes": {
      "acceleration_status": "",
      "acl": "private",
      "arn": "arn:aws:s3:::my-s3-bucket",
      "bucket": "my-s3-bucket",
      "cors_rule.#": "0",
      "force_destroy": "false",
      "id": "my-s3-bucket",
      "logging.#": "0",
      "region": "us-east-1",
      "replication_configuration.#": "0",
      "server_side_encryption_configuration.#": "0",
      "tags.%": "2",
      "tags.Environment": "dev",
      "tags.Name": "my-s3-bucket",
      "versioning.#": "1",
      "versioning.0.enabled": "false",
      "versioning.0.mfa_delete": "false"
    },
    "meta": {},
    "tainted": false
  },
  "deposed": [],
  "provider": "provider.aws.east1"
}
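
As an aside, you don’t have to read the raw state file to see these mappings - the terraform state subcommands will list and show the resources being tracked. A quick illustration against the bucket above (output abbreviated and approximate):

    shanid:~/dev$ terraform state list
    aws_s3_bucket.my-s3-bucket

    shanid:~/dev$ terraform state show aws_s3_bucket.my-s3-bucket
    id     = my-s3-bucket
    arn    = arn:aws:s3:::my-s3-bucket
    acl    = private
    region = us-east-1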

Within a Terraform state, there can only be one resource for a given name. So at its simplest, if I wanted to create many instances of a resource like an S3 bucket, I would define multiple resources in my terraform config - one per bucket. This becomes a bit tedious (not to mention a big violation of the DRY principle) when all the resources are exactly the same in terms of configuration except for, perhaps, their names. This is especially true for services like AWS API Gateway, where the Terraform config requires at least 5-6 resources to be defined for even a simplistic “Hello World” type scenario.
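
To make the repetition concrete, here is what that copy-and-paste approach looks like - a contrived sketch with two hypothetical buckets that are identical except for their names:

resource "aws_s3_bucket" "logs" {
  bucket = "my-s3-bucket-logs"    # hypothetical bucket name
  acl    = "private"
}

resource "aws_s3_bucket" "images" {
  # Identical configuration; only the name differs
  bucket = "my-s3-bucket-images"  # hypothetical bucket name
  acl    = "private"
}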

Terraform workspaces (previously referred to as Terraform environments) are a way to address this concern with repetitive configurations. A workspace is essentially a mechanism to partition the Terraform state so that many instances of the same resource can exist within it. The most commonly stated use case for this is to define a resource like an EC2 instance or a load balancer once per SDLC environment - in other words, define the resource once, but terraform apply the same configuration separately for the “dev”, “qa”, and “prod” environments. This same capability can also be used to manage multi-region deployments.
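
As a sketch of that per-environment use case (the bucket name here is hypothetical), a single resource definition can lean on the terraform.workspace interpolation to stay unique per workspace:

resource "aws_s3_bucket" "app_bucket" {
  # Defined once; applying from the "dev", "qa", and "prod"
  # workspaces provisions three independent buckets, each
  # tracked in its own isolated state.
  bucket = "my-app-bucket-${terraform.workspace}"  # hypothetical name
  acl    = "private"

  tags = {
    Environment = "${terraform.workspace}"
  }
}

Running terraform workspace new dev followed by terraform apply, then repeating for qa and prod, yields three separate buckets from the one config.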

The case for multi-region deployments

Before we talk about how Terraform workspaces can solve for multi-region deployments, I do want to take a moment to talk about “why” we want multi-region deployments in the first place. It comes down to one or more of the following:

  • You want to insulate yourself against the failure of an entire cloud region. While unlikely, we have seen occurrences where cascading failures brought down an entire AWS region (comprising multiple availability zones). Depending upon the business criticality of the services you are running in the cloud, that may or may not be an acceptable risk.
  • You want to reduce service latency for your customers. This is especially true for global businesses where you’d like to make sure, for example, that your customers in Asia are not forced to go halfway across the globe to retrieve an image from an S3 bucket in N. Virginia.
  • You want complete isolation between regions for the purposes of blue-green style deployments across regions. For example, you may want to limit the availability of a modified API Gateway endpoint to a single region so as to monitor and isolate failures to that one region.

Configuration

Our intent from this point on is to create a single set of terraform configs that we can then apply to multiple regions. To this end, we will define Terraform workspaces that map to individual regions, and refactor our resources (where needed) so that we don’t run into namespace collisions in AWS.

We’ll start by referencing the workspace name in our provider definition:

provider "aws" {
 region = "${terraform.workspace}"
}

Note that once this config is added, terraform plan and terraform apply will no longer work in the default workspace, since (as you may have guessed) “default” is not a valid AWS region. However, if we create a workspace named after a valid AWS region, everything works:

    shanid:~/dev$ terraform workspace new us-east-1

    Created and switched to workspace "us-east-1"!

    You are now on a new, empty workspace. Workspaces isolate their state,
    so if you run "terraform plan" Terraform will not see any existing state
    for this configuration.

Once the workspace is created, we can run terraform init, terraform plan, and terraform apply as usual.
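
As an aside, it helps to know where each workspace’s state actually lives. Assuming the default local backend, Terraform keeps the state for every non-default workspace under a terraform.tfstate.d directory, one subdirectory per workspace:

    terraform.tfstate.d/
    ├── us-east-1/
    │   └── terraform.tfstate
    └── us-west-2/
        └── terraform.tfstate

Remote backends such as S3 namespace workspace states in a similar fashion, using a key prefix.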

Once we have provisioned our resources in this region, we create a workspace for the second region and re-run the same terraform in that workspace to create the exact same set of AWS resources there:

    shanid:~/dev$ terraform workspace new us-west-2

    Created and switched to workspace "us-west-2"!

    You are now on a new, empty workspace. Workspaces isolate their state,
    so if you run "terraform plan" Terraform will not see any existing state
    for this configuration.
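
If you ever lose track of which region’s state you’re pointed at, terraform workspace list and terraform workspace select will show and switch the active workspace (output approximate):

    shanid:~/dev$ terraform workspace list
      default
      us-east-1
    * us-west-2

    shanid:~/dev$ terraform workspace select us-east-1
    Switched to workspace "us-east-1".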

The only (minor) gotcha to look out for is with regard to AWS resources that are global or globally named. IAM is an example of a global service, and S3 is an example of a service whose resource names are globally scoped. Consider the following example:

resource "aws_iam_role" "lambda_role" {
 name     = "my_lambda_role"
 assume_role_policy = <<EOF
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Sid": "",
     "Effect": "Allow",
     "Principal": {
       "Service": "lambda.amazonaws.com"
     },
     "Action": "sts:AssumeRole"
   }
 ]
}
EOF
}

If we were to attempt to create this resource in multiple regions, we’d start running into issues. It would work just fine in the first region, but in subsequent regions you’d see errors when applying your terraform, since a role named my_lambda_role already exists. The easiest way to solve for this is to include the region/workspace in the name of the resource being created. For example, the following config will create distinctly named IAM roles:

resource "aws_iam_role" "lambda_role" {
 name     = "my_lambda_role_${terraform.workspace}"
 assume_role_policy = <<EOF
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Sid": "",
     "Effect": "Allow",
     "Principal": {
       "Service": "lambda.amazonaws.com"
     },
     "Action": "sts:AssumeRole"
   }
 ]
}
EOF
}

This would create a my_lambda_role_us-east-1 role in us-east-1 and a my_lambda_role_us-west-2 role in us-west-2, and we have met our objective of a single configuration that can be deployed seamlessly into multiple regions.
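
The same trick applies to S3, since bucket names are globally scoped across all of AWS; here’s a minimal sketch along the same lines (bucket name is hypothetical):

resource "aws_s3_bucket" "assets" {
  # S3 bucket names are globally unique, so embed the workspace
  # (i.e. region) name to keep regional deployments from colliding.
  bucket = "my-app-assets-${terraform.workspace}"  # hypothetical name
  acl    = "private"
}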

Conclusion

Hopefully this approach makes it easier for you to manage your cross-region deployments with Terraform. I should acknowledge that using workspaces is probably not the only way to solve this problem, but it is how we’ve addressed most of our deployment-related challenges with the least possible amount of repetition in our configs.

As always, please feel free to leave a comment if you’re having trouble with the sample config or running into issues that I have not covered in this post.
