
Introducing Terramate — An Orchestrator and Code Generator for Terraform

This post introduces Terramate, explaining its purpose, benefits, and why it's become a favorite among clients. Discover how Terramate addresses the challenges of managing Terraform at scale, including reducing code duplication and improving orchestration. Learn about its unique features like hierarchical data sharing, change detection with Git integration, and the generation of HCL code. Whether you're new to Terraform or looking to enhance your existing setup, this article offers insights into making infrastructure as code more efficient and user-friendly with Terramate.

Sören Martius
· 10 min read

Today, after months of hard work, we’re proud, ecstatic, and numerous other adjectives, to introduce a new tool, Terramate, to the open-source and Terraform communities.

In this blog post, we will explain what Terramate is, why we decided to build it, and why our clients love it.

If, instead, you’d like to see comprehensive examples that explain how to use Terramate in detail, please take a look at our example repository:
- https://github.com/terramate-io/terramate-example-code-generation

Why Terramate

Prior to Terramate, we built Mineiros, a consultancy that helped fast-growing companies implement scalable platforms on public clouds with Infrastructure as Code (IaC).

In IaC environments, we mainly used Terraform, which is currently the market’s most adopted and mature technology.

When dealing with Terraform at scale, one of the first problems many users try to solve is splitting the Terraform state into isolated units that allow you to run parts of your IaC independently.

Terraform provides the concept of modules to allow code reuse, but modules on their own do not provide isolation at plan and apply time. As long as you keep a centralized state for all your Terraform code, all modules used in that code will always be planned and applied.

A solution to this was introduced by the team that created Terragrunt (a thin wrapper for managing Terraform code), in which state is split over multiple directories.

If you are interested in learning how Terramate compares to Terragrunt, please see our comparison blog post.

For the sake of clear naming, we decided to call such directories, each keeping a single state, stacks.

There are various good reasons for splitting your code into several stacks, e.g.

  • Reducing the blast radius (lowering the risk of failed applies)
  • Faster execution time through isolated and smaller units
  • Clear ownership of individual stacks (e.g. through GitHub CODEOWNERS)
  • Better collaboration

All of those and more are very well described in Lesson 3 of Gruntwork’s blog post “5 Lessons Learned From Writing Over 300,000 Lines of Infrastructure Code”.

But splitting your code into smaller independent units, which we call stacks, has significant trade-offs and requires you to:

  • Share data and code across all stacks to keep the code DRY (Don’t Repeat Yourself).
  • Orchestrate your stacks so that not all stacks are executed all the time, but only the stacks that have changed within a specific pull request (PR) are executed (planned/applied).

With Terramate we aim to solve these problems as efficiently as possible in a non-intrusive way.

Solving code duplication

What code is duplicated?

As each stack keeps its own state, the obvious code duplicated in all stacks is the definition of the backend that Terraform uses to permanently store its state, plus the version of Terraform to use. While you can set the same Terraform version in all stacks by using simple symbolic links, the Terraform backend configuration requires you to set a unique state file location for each stack.
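To make the duplication concrete, this is the kind of backend block every stack would otherwise have to hand-write; the bucket name and prefix below are illustrative, and only the state prefix differs from stack to stack:

```hcl
# Hand-written per-stack backend configuration (illustrative values).
# Every stack repeats this block; only the prefix differs.
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-staging/my-vpc" # unique per stack
  }
}
```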

In addition, Terraform requires a number of providers to operate, which need to be configured to, e.g., define an account and a region in Amazon Web Services (AWS) or a project in Google Cloud Platform (GCP), or to pin a specific provider version.

How does Terramate help to reduce code duplication?

Terramate introduces the new concept of sharing data between stacks and offers a way to generate HCL (Hashicorp Configuration Language) code inside of each stack.

We decided to use a hierarchical approach when sharing data for stacks.

Stacks can be structured into directories with any level of subdirectories. This enables you to define a structure like the following:

config.tm
modules/
  my-vpc-module/
    main.tf
stacks/
  config.tm
  gcp-projects/
    my-staging/
      config_project.tm
      my-vpc/
        terramate.tm
        main.tf
    my-prod/
      config_project.tm
      my-vpc/
        terramate.tm
        main.tf

The Terramate configuration file suffix is .tm (or .tm.hcl to support HCL highlighting in IDEs). We are currently building a VSCode extension to support native highlighting of .tm files.

What makes a stack a stack?

A stack is defined by a Terramate configuration file (any file ending with .tm or .tm.hcl) that provides a stack block. The stack block can be empty but can also define metadata about the stack, such as its name or a description. In the orchestration section, we will also introduce configuration for defining a stack's relationship to other stacks (e.g. to define the order of execution of multiple stacks). If no file in a directory defines a stack block, Terramate will not identify that directory as a stack. Terramate does not support stacks inside of stacks, which means stacks need to be defined in leaf directories.

stack {
  name        = "My stack"
  description = "My stack description"   
}

Stacks can be configured using the stack block inside a Terramate configuration file.

How does hierarchical sharing of data work?

Terramate uses HCL as its configuration language and supports reading its configuration from anywhere within the hierarchy, from all files ending with .tm or .tm.hcl.

Terramate introduces a globals block that can be defined in configuration files at any level.

If a global is defined on multiple levels, the more specific value (closer to the stack) is used, and the previous value is overwritten.

Terramate globals are evaluated in a lazy fashion on the stack level:

  • They can be referenced in higher-level configuration files without being defined at that level yet.
  • The actual values can be set or overwritten within the Terramate stack configuration. This way, global defaults can be set, but they can also be overridden with stack-specific values.
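As a minimal sketch of this hierarchy (the file paths and values below are hypothetical), a default defined near the root can be overridden closer to the stack:

```hcl
# file: stacks/config.tm (hypothetical defaults for all stacks below this level)
globals {
  gcp_region = "europe-west1"
  env        = "staging"
}

# file: stacks/gcp-projects/my-prod/my-vpc/terramate.tm
# The more specific value wins: env becomes "prod" for this stack,
# while gcp_region is inherited unchanged from stacks/config.tm.
globals {
  env = "prod"
}
```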

How is shared data used?

To actually make use of the data shared via globals, Terramate supports generating HCL code using the generate_hcl block. Additionally, a variable namespace called terramate is available to provide access to stack and general metadata, e.g. the stack's path within the repository or its name.

# file: stacks/config.tm

globals {
  # define a bucket name that is used when generating backend.tf defined below
  gcs_bucket_name = "my-state-bucket"
  # the following will calculate the path name of each stack 
  # but remove the / prefix as gcs does not handle this well
  gcs_bucket_prefix = tm_substr(terramate.path, 1, -1)
}

# The block label specifies the name of the file to create in stacks
# This file will be generated in all stacks reachable from this configuration
generate_hcl "backend.tf" {
  content {
    terraform {
      backend "gcs" {
        bucket = global.gcs_bucket_name
        prefix = global.gcs_bucket_prefix
      }
    }
  }
}

The example already makes use of additional features in Terramate.

Terramate supports everything that is already known from Terraform: simple and complex types, and functions.

All functions available in Terraform v0.15.3 are available in Terramate, but prefixed with tm_ to make clear what will be executed while generating code and what will be part of the generated code. Terraform stopped exporting functions in version 1.0, so we will keep back-porting new functions and fixes added in future versions. We will also provide Terramate-specific functions to make life easier.

Within the content block, any HCL code can be used. It will be partially evaluated (all globals and Terramate variables will be replaced) and written to a file inside the stack directory; the file name is specified as the label of the block.

When running terramate generate, the resulting files for the structure explained above will be:

// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT
// TERRAMATE: originated from generate_hcl block on /stacks/config.tm
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-staging/my-vpc"
  }
}

generated file: stacks/gcp-projects/my-staging/my-vpc/backend.tf


// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT
// TERRAMATE: originated from generate_hcl block on /stacks/config.tm
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-prod/my-vpc"
  }
}

generated file: stacks/gcp-projects/my-prod/my-vpc/backend.tf

In the same way, any HCL code can be generated into multiple files. To apply the generated changes, Terramate provides the terramate run command to execute commands inside all stacks. Details and improvements to this flow are explained in the next section.
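For instance, the same configuration file could carry a second generate_hcl block. This is a hedged sketch (the version pin is an assumption, not part of the original setup) that would emit a versions.tf next to backend.tf in every reachable stack:

```hcl
# file: stacks/config.tm (additional, hypothetical block)
# Generates a versions.tf in all stacks reachable from this configuration.
generate_hcl "versions.tf" {
  content {
    terraform {
      required_version = "1.0.0" # assumed pin; adjust to your setup
    }
  }
}
```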

After running terramate generate, it is, of course, possible to manually change into the stack directory and run terraform init, terraform plan, terraform apply, etc.

We are already working on more ways to generate code, e.g. generate_terraform and generate_file functionality, which will be discussed in the last section of this article.

Solving Orchestration

The orchestration features covered in this section don’t depend on generated code but can work with any native Terraform setup. We also plan to support more tooling in the future like the already mentioned Terragrunt.

Terramate allows you to execute any command in all defined stacks using terramate run <cmd> . For example, terramate run terraform init will execute terraform init in all defined stacks.

Normally, you do not want to execute all stacks all the time, in order to reduce runtime and blast radius.

With Terramate you can also cd into any directory and execute the command in all stacks reachable from this directory (all stacks in sub-directories of the current one). You can also execute a specific stack by using terramate -C <dir|stack> <cmd>.

Change Detection (Git integration)

The real power of Terramate orchestration lies in its Git integration (we plan to support other VCSs in the future). The Git integration enables Terramate to detect changed stacks based on files changed within a PR, or when the default branch (e.g. main) has changed since the last merge or since a specific commit.

The detection also includes recursive scanning of local modules (modules in the same repository) and marking stacks as changed if a module or sub-module has changed.

Change detection is enabled by providing the --changed option and can be configured to use a specific branch as a reference. By default, origin/main is used as the reference.
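Assuming a terramate.config.git block is available for project-wide settings (the attribute names below reflect our understanding and should be checked against the documentation), the reference could be configured once at the repository root:

```hcl
# file: terramate.tm at the project root (hypothetical sketch)
terramate {
  config {
    git {
      default_remote = "origin" # remote used as the change-detection reference
      default_branch = "main"   # branch compared against when using --changed
    }
  }
}
```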

Any file inside a stack directory or within the sub-directories of a stack is monitored for changes.

Order of Execution and pulling in stacks

While developing Terramate, we had a lot of discussions about whether and how we needed to define dependencies between stacks. As Terraform stacks can get very complex, we agreed to start with the basics by allowing users to define an order of execution and to force a stack to always run whenever another stack runs.

In a stack's configuration, we allow defining the order of execution: for example, to always make sure the production VPC is executed after the staging VPC.

stack {
  name = "My production VPC"
  after = [
    "/stacks/gcp-projects/my-staging/my-vpc",
  ]
}

file: stacks/gcp-projects/my-prod/my-vpc/terramate.tm

Other directives available to configure relationships between stacks are before and wants.

The before directive is simply the counterpart to after, defining that a stack will be executed before all stacks in the given set.

The wants directive also takes a set of stack names and ensures that whenever the current stack is executed, all stacks defined in wants also run the same command.

Defining stacks in wants as well as in after or before allows you to define whether those stacks always run before or after the current stack.

This can be used in scenarios where one stack reads the state of resources via a Terraform data source and wants to ensure that the data is always updated before being read.
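A hedged sketch of that scenario (the stack paths are hypothetical): a stack that reads data produced by a shared stack can want it and order itself after it, so the data is always refreshed before being read.

```hcl
# file: stacks/my-app/terramate.tm (hypothetical)
stack {
  name  = "My app"
  wants = ["/stacks/shared-data"] # always run shared-data when this stack runs
  after = ["/stacks/shared-data"] # ...and make sure it runs first
}
```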

If you execute a selected set of stacks that includes a wants definition, the stacks configured as wanted are always selected in addition. If a selected stack defines an order of execution with a stack that is not selected, that ordering is ignored, as the other stack is not being executed.

Execution stops by default if any stack executes a failing command. This can be overridden using the command-line option terramate run --continue-on-error <cmd>.

More possibilities to define relationships between stacks and to pass data from one stack to the other may be added in the future once we find valid use cases.

Orchestration Safeguards when executing commands in stacks

By default, Terramate tries to protect against execution in undefined situations. The following safeguards are applied by default and can be disabled if needed:

  • Protection against running when files are not yet committed to ensure that the state that might be applied is known at least by your local Git.
  • Protection against running when untracked files are found to also ensure no temporary files are considered when planning or applying Terraform code.
  • Protection against the default branch being out of date with upstream, to ensure that all changes on the default branch (e.g. main) are considered when running commands such as terraform plan or terraform apply.
  • Protection against running out-of-date generated code to ensure that the generated code matches the Terramate configuration.

Features Overview

Code Generation

  • Generate any Terraform or HCL code to keep your configuration DRY
  • Global variables empower you to share data across multiple stacks
  • Lazy evaluation of global variables and terramate metadata allows for very powerful and complex configurations
  • The to-be-generated code can be shared between multiple stacks and defined only once on the upper level of the hierarchy.
  • Terraform functions are available in globals and in generate_hcl blocks (prefixed with tm_ )
  • Partial evaluation of Terraform code allows the user to decide what should be executed at run-time or at build-time (code-generation-time)

Orchestration for Terraform stacks

  • Change detection based on changes made in the current branch/pull request (Git)
  • Change detection based on the last merge when on the default branch (Git and GitHub merge and squash commit strategy)
  • Change detection based on changes since a specific commit (Git and GitHub rebase merge strategy)
  • Run any command in
    - all stacks
    - all stacks reachable from the current directory
    - all changed stacks
    - all changed stacks reachable from the current directory
  • Define the order of execution between stacks that are selected for execution (before and after features)
  • Define stacks that should always run along with other stacks (wants feature)

Outlook

We just released the MVP of Terramate as version 0.1, since our goal was to release early and get as much feedback from the Terraform community as possible, thus driving future development based on the community's feedback. All current features were built based on our own experience with Terraform.

Terramate enables our users and customers to concentrate on designing and building infrastructure, handling the Terraform-specific configuration differences between stacks for you, essentially so you don’t have to worry about them. By making use of Terramate’s change detection, CI build times for pull request previews and for applying the Terraform configuration can be significantly reduced, saving build minutes and allowing for shorter review and deployment cycles.

This is an early step in an exciting journey for us. We are just at the beginning and have a lot of ideas to implement. Terramate already comes with a VSCode extension, allowing us to improve the developer experience even further and to automate tasks such as code generation on save, among other soon-to-be-released features.

Terramate allows platform teams to take care of the code and provide an easy-to-use interface for engineering teams by separating code and configuration.

So what is on the plate for the future so far?

  • Make Terramate Terraform-aware with generate_terraform blocks that can define rules for overwriting and merging based on the various blocks available in Terraform
  • Allow generating any file type with generate_file, which would even enable scenarios where Kubernetes manifests can be created and orchestrated using terramate run kubectl
  • Allow for orchestration of Terragrunt projects. This will require a Terragrunt integration to enable change detection of Terragrunt's configuration files.
  • Terramate modules/imports that allow defining Terramate configuration outside of the hierarchy and including it when needed
  • Add support for VCSs other than Git
  • Provide a set of GitHub Actions for easy integration into your GitHub Actions pipelines

Developers should not need to care about Terraform specifics; instead, they should concentrate on defining configurations for their applications' backing services, such as storage and caching layers or any other cloud-managed service, in a self-service manner.

If you’d like to learn more about how Terramate can help your team manage infrastructure as code more efficiently, don’t hesitate to join our Discord Community or reach out at hello@terramate.io.

References

This article was initially written by Tiago Katcipis and Marius Tolzmann.


Sören is a co-founder and Chief Product Officer of Terramate. Before founding Terramate, he built cloud platforms for some of Europe's fastest-growing scaleups.