
Introducing Terramate CLI — An Orchestrator and Code Generator for Terraform, OpenTofu, Terragrunt, and more

Sören Martius, Chief Product Officer
Annu Singh, Technical Content Writer
Selina Nazareth, Developer Relations Manager

Discover Terramate CLI, an open-source tool for Infrastructure as Code (IaC) orchestration and code generation that helps you simplify complex codebases, split large state files into smaller units called stacks, and automate your IaC deployments in any CI/CD using GitOps workflows.


When dealing with Infrastructure as Code (IaC) such as Terraform, OpenTofu, Terragrunt, or Kubernetes (e.g. Kubernetes manifests, Helm, Kustomize, Tanka, etc.), teams often encounter the same challenges over and over again:

  • How should I manage multiple environments?
  • How can I reduce code duplication and make my configurations easier to maintain?
  • How can I automate the deployments of my infrastructure configurations?

Those questions arise because tools such as Terraform and OpenTofu lack standard patterns and best practices, leading teams to reinvent the wheel repeatedly.

Enter Terramate! In this article, we will introduce Terramate CLI, an open-source Infrastructure as Code (IaC) orchestration and code-generation tool that helps teams to:

  • Simplify complex infrastructure configurations and make them more maintainable.
  • Split up large monolithic state files (often called “Terralith”) into multiple units called stacks.
  • Automate and orchestrate tools such as Terraform, OpenTofu and Terragrunt in any CI/CD, such as GitHub Actions, GitLab CI/CD, BitBucket Pipelines, Azure DevOps, etc., using GitOps workflows.

Note: Terramate CLI is part of the Terramate Platform, which consists of Terramate CLI and Terramate Cloud. While Terramate Cloud can significantly improve the management and observability experience when managing cloud infrastructure with IaC at scale, this article focuses exclusively on Terramate CLI.

If you want to learn more about Terramate Cloud, we recommend you read our how it works guide.

Why Terramate

Before Terramate, we built Mineiros, a consultancy that helped fast-growing companies implement scaling platforms on public clouds such as AWS, Google Cloud and Azure with Infrastructure as Code (IaC).

When working in IaC environments, we mainly used Terraform and OpenTofu, the technologies that are currently the most adopted and mature in the market.

When dealing with Terraform and OpenTofu at scale, one of the first problems teams try to solve is splitting the Terraform state into isolated units, which allows running only parts of the IaC independently.

Terraform provides the concept of modules to allow reusing code, but modules do not provide isolation at plan and apply time. As long as you keep a centralized state for all your Terraform code, all modules used in that code will always be planned and applied together.

An early solution to this was introduced by the team that created Terragrunt (a thin wrapper that aims to scale Terraform and OpenTofu configurations), where states are split over multiple directories.

But when using Terragrunt, your project effectively becomes a Terragrunt project, meaning that you have to adopt another tool and syntax, you lose the capabilities of natively running Terraform and OpenTofu, and you effectively have to migrate your project into an opinionated approach that might not work for you in the long run.

With Terramate CLI, we are providing a flexible and lightweight alternative to Terragrunt that helps you solve the same challenges and more in a non-intrusive way. But instead of trying to replace Terragrunt, you can onboard Terramate to any existing Terragrunt project to supercharge it with advanced capabilities. If you are interested in learning how Terramate compares to Terragrunt and why it’s sometimes a good idea to use both together, please take a look at our Terramate and Terragrunt blog.

About Stacks

As mentioned in the introduction of this article, one of the biggest challenges in IaC is sizing your stacks and environments correctly. Monolithic state files (often referred to as “Terralith”) are problematic: they cause long-running pipelines, a large blast radius, and poor collaboration due to waiting times caused by sequential, blocking pipelines. This is why Terramate introduces a concept called stacks.

A stack is a unit of configuration that often represents one or multiple infrastructure resources that can be deployed and managed independently.

There are various good reasons for splitting your code into several stacks:

  • Reducing the blast radius (lowering the risk of failed applies)
  • Faster execution time through isolated and smaller units
  • Clear ownership of individual stacks (through, e.g., GitHub CODEOWNERS)
  • Better collaboration by allowing teams to work on different units in parallel

Those and more are very well described in Lesson 3 of Gruntwork’s blog post “5 Lessons learned from writing over 300,000 lines of infrastructure code”.

Stacks are Infrastructure as Code agnostic and can be used to orchestrate and manage any IaC tool (e.g., Terraform, OpenTofu, Pulumi, Kubernetes, Ansible, Crossplane, CloudFormation, etc.).

Info: Stacks seamlessly integrate with Terramate Cloud allowing you to add additional features such as observability, drift detection, change history, insights into resources, misconfiguration prevention, and more.

But splitting your code and state into smaller independent stacks has some significant tradeoffs, and it will require you to:

  • Share data and code across all stacks to keep the code DRY (Don’t Repeat Yourself).
  • Orchestrate your stacks so that not all stacks run on every change; only stacks that have changed within a specific pull request (PR) are executed (planned/applied).

With Terramate, we aim to solve these problems efficiently and in a non-intrusive way.

Stacks should not be mistaken for the different approaches available to managing multiple environments. For example, in Terraform and OpenTofu, the most commonly used approaches are workspaces, directories, Terragrunt, TFVars, and partial backend configurations. While most tools available focus on a single approach, Terramate is designed to be agnostic. This means that stacks in Terramate integrate with any existing approach to manage different environments, and different stacks can use different approaches, too.

What makes a Stack a Stack?

Technically, a stack in Terramate is just a directory that contains a Terramate configuration file (usually stack.tm.hcl). But when managing infrastructure configuration with any IaC in stacks, a stack usually contains:

  • A Terramate configuration file stack.tm.hcl to configure the metadata and, optionally, the orchestration behavior of a stack.
  • A separate backend or state configuration, as each stack is supposed to have its own state.
  • IaC configurations such as Terraform, OpenTofu, Terragrunt, CloudFormation, Pulumi, Ansible, etc.

For example:

stacks/
  vpc/
    main.tf      # Configures our resources managed with IaC
    backend.tf   # Configures the state backend
    stack.tm.hcl # Configures the Terramate stack

To configure a directory as a stack in Terramate, a configuration file (any file ending with .tm or .tm.hcl) that provides a stack block has to exist in the directory.

The stack  block can be empty but can also define metadata about the stack, such as its name or description.

For example:

stack {
  name        = "My stack"
  description = "My stack description"
}

We provide IDE extensions such as our VSCode extension for supporting native highlighting and autocompletion of .tm files inside your IDE.

Inside the stack block, you can also configure the orchestration behavior of each stack in relation to other stacks (e.g., to define the order of execution of multiple stacks).

# metadata
stack {
  name        = "My stack"
  description = "My stack description"
  
  # optional orchestration settings
  after = [
    "tag:prod:networking",
    "/prod/apps/auth",
  ]
}

If no file in a directory defines the stack block, Terramate will not identify this directory as a stack. Terramate does not support stacks inside of stacks, which means stacks need to be defined in a leaf directory.

Stacks can be structured into directories and any level of subdirectories. Using such a structure allows you to define the implicit order of execution when orchestrating commands, such as terraform apply in stacks, by simply moving them in your directory tree. This enables you to define a structure like the following.

config.tm
modules/
  my-vpc-module/
    main.tf
stacks/
  config.tm
  gcp-projects/
    my-staging/
      config_project.tm
      my-vpc/
        terramate.tm
        main.tf
    my-prod/
      config_project.tm
      my-vpc/
        terramate.tm
        main.tf

Stacks in Terramate are units that can be used to manage one or multiple infrastructure resources in isolation. Later in this article, we will learn how to orchestrate commands such as terraform apply or tofu apply in stacks. But first, let’s understand how the code generation in Terramate can be used to generate code in stacks and reduce manual maintenance effort to a bare minimum.

Solving code duplication

What code is duplicated?

As each stack manages its own state, the most apparent code duplication across stacks is the configuration of the backend that Terraform and OpenTofu use to permanently store their state, along with the version of Terraform or OpenTofu to use.

While you can pin the same Terraform or OpenTofu version in all stacks using simple symbolic links, the backend configuration in Terraform and OpenTofu requires a unique state file location for each stack.

In addition, Terraform and OpenTofu require several providers to operate, which need to be configured to, e.g., define an account and a region in Amazon Web Services (AWS) or a project in Google Cloud Platform (GCP), or to pin a specific provider version.
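
For illustration, the kind of configuration that tends to be duplicated in every stack looks roughly like the following (a minimal sketch; the provider, project, region, and version constraints are placeholders, not taken from an actual project):

terraform {
  required_version = "~> 1.9"    # placeholder version constraint

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 6.0"         # placeholder provider version
    }
  }
}

provider "google" {
  project = "my-project"         # placeholder GCP project
  region  = "europe-west1"       # placeholder region
}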

How does Terramate help to reduce code duplication?

Terramate CLI comes with a powerful compiler that can generate HCL code, such as Terraform and OpenTofu configurations, as well as arbitrary files such as JSON and YAML. This removes the need to manually maintain duplicated files, such as Terraform and OpenTofu backend or provider configurations, across multiple stacks. It can also be used for advanced use cases, such as generating entire infrastructure configurations in stacks that meet certain conditions based on path, tags, name, etc.

The nice thing about code generation in Terramate is that it allows you to share data between stacks using Globals. Globals let you define, extend and overwrite global and shared configuration values at any level in your stack and directory hierarchy.

How does data sharing work in Terramate?

Terramate offers two main approaches for sharing data among stacks:

  • Hierarchical data sharing (top-down) using globals and code generation
  • Sharing data between stacks using output sharing

How does hierarchical sharing (top-down) of data work?

As mentioned, Terramate uses HCL as the configuration language and supports reading its configuration from anywhere within the hierarchy, from all files ending with .tm or .tm.hcl.

In Terramate, you can define global values with the globals  block, which can be defined in configuration files at any level.

globals {
  <variable-name> = <expression>
}

Globals can be defined, extended and overwritten at any level of your stack hierarchy. If a global is defined on multiple levels, the more specific (closer to the stack) value is used, and the previous value is always overwritten.

Terramate Globals are evaluated lazily on the stack level:

  • They can be referenced in higher-level configuration files without already being defined at that level.
  • The actual values can be set or overridden within the Terramate stack configuration. This allows global defaults to be set while stack-specific values can still override them.
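
As a minimal sketch of this hierarchy (the environment global and the paths are illustrative, not taken from the example project above), a parent directory can set defaults that a stack then overrides:

# file: stacks/config.tm
globals {
  environment = "staging"   # default for all stacks below this directory
  region      = "europe-west1"
}

# file: stacks/gcp-projects/my-prod/my-vpc/stack.tm.hcl
globals {
  environment = "prod"      # overrides the default for this stack only
}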

How is shared data used?

Terramate supports generating actual HCL code using the generate_hcl  block to use the data shared via globals. Additionally, a variable namespace called terramate  is available to enable access to stack and general metadata, such as the stack's path within the repository or its name.

# file: stacks/config.tm

globals {
  # define a bucket name that is used when generating backend.tf defined below
  gcs_bucket_name = "my-state-bucket"
  # the following will calculate the path name of each stack 
  # but remove the / prefix as gcs does not handle this well
  gcs_bucket_prefix = tm_substr(terramate.path, 1, -1)
}

# The block label specifies the name of the file to create in stacks
# This file will be generated in all stacks reachable from this configuration
generate_hcl "backend.tf" {
  content {
    terraform {
      backend "gcs" {
        bucket = global.gcs_bucket_name
        prefix = tm_try(global.gcs_bucket_prefix, "some-prefix")
      }
    }
  }
}

Terramate supports everything already known from Terraform and OpenTofu: simple and complex types as well as functions.

Most functions from Terraform and OpenTofu are available in Terramate but prefixed with tm_ to make clear what will be evaluated while generating code and what will be part of the generated code. Additionally, Terramate adds Terramate-specific functions that can be helpful in advanced code-generation scenarios.
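
As a small illustrative example (the global name is hypothetical), familiar functions such as replace and substr become tm_replace and tm_substr and are evaluated at generation time rather than at plan or apply time:

globals {
  # strip the leading "/" (as in the backend example above) and
  # turn the remaining stack path into a dash-separated prefix
  name_prefix = tm_replace(tm_substr(terramate.path, 1, -1), "/", "-")
}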

Within the content  block, any HCL code can be used. It will be partially evaluated (all globals and Terramate variables will be replaced) and written to a file inside the stack directory. The name is specified as the block's label.

When running terramate generate, the resulting files for the structure explained above will be:

Generated file: stacks/gcp-projects/my-staging/my-vpc/backend.tf

// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-staging/my-vpc"
  }
}

Generated file: stacks/gcp-projects/my-prod/my-vpc/backend.tf

// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT
// TERRAMATE: originated from generate_hcl block on /stacks/config.tm
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-prod/my-vpc"
  }
}

In the same way, any HCL code can be generated into multiple files. To apply the generated changes, Terramate provides the terramate run command to execute commands inside all stacks. The following section explains this flow and its improvements in detail.

After running terramate generate, it is, of course, possible to manually change into the stack directory and run terraform init, terraform plan, and terraform apply.

But we didn’t stop there. With Terramate, you can also generate any other arbitrary configuration format, such as JSON or YAML, using the generate_file block.
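
As a hedged sketch (the file name and keys are illustrative), a generate_file block takes a content attribute instead of a content block:

generate_file "stack-metadata.json" {
  content = tm_jsonencode({
    stack_path = terramate.path
    bucket     = global.gcs_bucket_name
  })
}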

The obvious upside of using code generation compared to using one of the many existing wrapper approaches is that you always end up generating native infrastructure code that can be executed without any additional tooling and without causing any backward compatibility issues.

Sharing data between Stacks using Output Sharing

While globals and code generation are great for solving data-sharing challenges at build time, sometimes you encounter scenarios where you want to share data between stacks at run time. For that, Terramate CLI introduces Output Sharing.

Outputs Sharing is an advanced feature that uses code generation and orchestration to share the execution outputs of stacks as inputs to other stacks.

This is very similar to using dependencies in Terragrunt, but the big difference is that Terramate allows you to stay in native Terraform and OpenTofu by generating outputs and variables definitions and by injecting output data from one stack as input data into another stack using the well-known TF_VAR_name approach.

If you want to learn more, we advise you to take a look at the Outputs Sharing documentation.
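
To illustrate the idea with plain Terraform (the vpc_id name and resource are hypothetical), a producing stack exposes an output and a consuming stack declares a matching variable; Terramate then passes the value at run time via the corresponding TF_VAR_ environment variable (see the Outputs Sharing documentation for the exact Terramate configuration):

# producing stack, e.g. stacks/.../my-vpc/outputs.tf
output "vpc_id" {
  value = google_compute_network.this.id
}

# consuming stack, e.g. stacks/.../my-app/variables.tf
variable "vpc_id" {
  type = string
}

# Terramate injects the producer's output as TF_VAR_vpc_id when orchestrating the consumer.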

Solving Orchestration

The orchestration features covered in this section don’t depend on generated code and can work with any native IaC tool such as Terraform, OpenTofu, Terragrunt, Kubernetes YAML, Pulumi, etc.

Terramate allows you to execute any command in stacks using terramate run <cmd>. For example, terramate run terraform init will execute terraform init in all defined stacks.

In general, you do not want to execute all stacks all the time, in order to reduce runtime and blast radius.

With Terramate, you can also cd into any directory and execute the command in all stacks reachable from this directory (all stacks in sub-directories of the current one). You can also execute a specific stack by using terramate -C <dir|stack> <cmd>.
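
A few typical invocations (the paths are illustrative):

# Run in every stack of the repository
terramate run -- terraform init

# Run only in stacks below the current working directory
cd stacks/gcp-projects/my-prod
terramate run -- terraform plan

# Target a specific directory without changing into it
terramate -C stacks/gcp-projects/my-prod/my-vpc run -- terraform apply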

Graph-based Orchestration

Terramate’s orchestration works by building a directed acyclic graph (DAG) of all stacks available in a repository. You can then use different filters, such as --tags or --changed to filter this DAG for stacks that meet certain criteria only. Using a graph-based approach allows us to detect dependencies among stacks and to decide what stacks need to be orchestrated sequentially and what stacks can be orchestrated in parallel.

For example, to run a command in all stacks that contain changes and carry the tag k8s, you can run:

terramate run \
  --tag k8s \
  --changed \
  -- \
  kubectl diff

Change Detection (Git integration)

The real power of Terramate orchestration lies in its Git integration (we plan to support other VCSs in the future). The Git integration enables Terramate to detect changed stacks based on files changed within a PR, or based on changes on the default branch (e.g., main) since the last merge or a specific commit.

The detection also includes recursive scanning of local modules (modules in the same repository) and marking stacks as changed if a module or sub-module has changed.

Change detection is enabled by providing the --changed  option and can be configured to use a specific branch as a reference. By default origin/main  is used as a reference.
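
For example (a minimal sketch), you can list or run only the stacks that changed compared to the reference:

# List all stacks that changed compared to origin/main
terramate list --changed

# Preview changes only in changed stacks
terramate run --changed -- terraform plan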

Any file inside a stack directory or within the sub-directories of a stack is monitored for changes.

The change detection comes with different integrations. For example, when orchestrating Terragrunt with Terramate, the change detection will respect Terragrunt-specific configurations such as dependencies .

Order of Execution and pulling in Stacks

While developing Terramate, we discussed whether and how we need to define dependencies between stacks. As Terraform stacks can get very complex, we agreed to start with the basics: allowing users to define an order of execution and to force a stack to always run if another stack runs.

In a stack configuration, you can define the order of execution, for example, to make sure the production VPC is always executed after the staging VPC.

stack {
  name = "My production VPC"
  after = [
    "/stacks/gcp-projects/my-staging/my-vpc",
  ]
}

Other directives available to configure relationships between stacks are before, wants, and wanted_by.

The before attribute is simply the counterpart to after: it defines that a stack will be executed before all stacks in the given set.

The wants  directive also takes a set of stack names and ensures that whenever the current stack is executed, all stacks defined in wants  also run the same command.

Having stacks defined in wants and also in after or before allows us to define whether those stacks always run before or after the current stack.

This can be used in scenarios where one stack reads the state of resources via a Terraform data  source and wants to ensure that the data is always updated before being read.

If executing a selected set of stacks that includes a wants definition, the stacks configured as wanted are always selected in addition. If a selected stack defines an order of execution with a stack that is not selected, that order is ignored because the stack is not being executed.
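
A hedged sketch combining both directives (the paths are illustrative):

stack {
  name = "My production app"

  # always run after the VPC stack when both are selected
  after = [
    "/stacks/gcp-projects/my-prod/my-vpc",
  ]

  # always pull in the DNS stack whenever this stack runs
  wants = [
    "/stacks/gcp-projects/my-prod/dns",
  ]
}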

Execution stops by default if any stack executes a failing command. This can be overridden using the command-line option terramate run --continue-on-error <cmd>.

Orchestration Safeguards when executing Commands in Stacks

By default, Terramate tries to protect execution in undefined situations. The following safeguards are applied by default and can be disabled if needed:

  • Protection against running when files are not yet committed ensures that the state that might be applied is known, at least by your local Git.
  • Protection against running when untracked files are found also ensures that no temporary files are considered when planning or applying Terraform code.
  • Protection against the default branch being out of date with upstream ensures that all changes on the default branch (e.g., main) are considered when running commands such as terraform plan or terraform apply.
  • Protection against running out-of-date generated code to ensure that the generated code matches the Terramate configuration.

To learn more about the orchestration safeguards in Terramate and how to configure or optionally deactivate them, please see the safeguards documentation.
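
For example (the flag values are illustrative; check the safeguards documentation for the exact names), safeguards can be disabled selectively for a single run:

# Skip the Git uncommitted/untracked checks, e.g. for a local experiment
terramate run --disable-safeguards=git-uncommitted,git-untracked -- terraform plan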

The orchestration in Terramate is extremely powerful and dynamic. It comes with implicit order of execution that can optionally be overwritten with explicit configuration. If you want to learn more about how the orchestration works in Terramate, we recommend you take a look at the orchestration documentation.

CI/CD Integration

Orchestrating different Infrastructure as Code stacks is usually preferred in automation. For example, you typically want to apply GitOps principles, allowing teams to introduce changes in pull requests that can be reviewed and approved before merging back to main and triggering a deployment (e.g., terraform apply ). Tools such as Terraform and OpenTofu don’t include automation out of the box, leading teams to implement workflows for GitHub Actions, GitLab CI/CD, BitBucket Pipelines and others over and over again.

That’s why Terramate CLI comes with pre-configured workflows that we call CI/CD Blueprints. These blueprints allow you to automate common tasks such as pull requests with plan previews, deployments, and scheduled drift detection workflows that follow best practices and industry standards. The workflows provided have been battle-tested by hundreds of organizations in production and will help you automate your IaC projects in no time.
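
At their core, these workflows boil down to a small set of commands executed by the CI job (a simplified sketch; the actual blueprints add authentication, plan artifacts, and pull request comments):

# Pull request pipeline: preview changes in changed stacks only
terramate run --changed -- terraform init
terramate run --changed -- terraform plan

# After merging to main: deploy the changed stacks
terramate run --changed -- terraform init
terramate run --changed -- terraform apply -auto-approve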

To learn more about the blueprints, look at the automation blueprints available in our documentation.

Features Overview

Code Generation:

  • Generate Terraform, OpenTofu, or any other HCL, as well as arbitrary files such as JSON or YAML, to keep your configuration DRY
  • Global variables empower you to share data across multiple stacks
  • Lazy evaluation of global variables and terramate metadata allows for very powerful and complex configurations
  • The to-be-generated code can be shared between multiple stacks and defined only once at an upper level of the hierarchy
  • Terraform functions are available in globals as well as in generate_hcl and generate_file blocks (prefixed with tm_)
  • Partial evaluation of Terraform code allows the user to decide what should be evaluated at run time or at build time (code-generation time)

Orchestration:

  • Change detection based on changes made in the current branch/pull request (Git)
  • Change detection based on the last merge when on the default branch (Git and GitHub merge and squash commit strategy)
  • Change detection based on changes since a specific commit (Git and GitHub rebase merge strategy)
  • Run any command in all stacks, in all stacks reachable from the current directory, in all changed stacks, or in all changed stacks reachable from the current directory
  • Automatically pass outputs of one stack as input into another
  • Define the order of execution between stacks that are selected for execution (before and after features)
  • Define stacks that should always run along with other stacks (wants and wanted_by features)
  • Configure complex workflows using Terramate Scripts

CI/CD Integration:

  • Automate your IaC in any general-purpose CI/CD, such as GitHub Actions, GitLab CI/CD, BitBucket Pipelines, etc.
  • Use production-grade GitOps workflow blueprints to get up and running in no time
  • Scheduled Drift Detection

Terramate Cloud:

  • Better observability for pull requests, deployments and drift
  • Incident management for newly detected drift and failed deployments, integrating with, e.g., Slack and Teams
  • Better previews in pull requests
  • Asset management
  • Misconfiguration detection
  • Policy integration with pre-configured policies such as CIS Benchmarks
  • Scaffolding and service catalog
  • Stateful orchestration (useful when you want to rerun and trigger failed or drifted stacks)

Outlook

This article only touched on what’s possible with Terramate CLI. In addition, there’s Terramate Cloud, which seamlessly integrates with Terramate CLI to provide features such as drift detection, misconfiguration detection, alerts for failed deployments and detected drift, asset inventory management and more.

While Terramate currently focuses on Terraform, OpenTofu, Terragrunt and Kubernetes YAML, we plan to add support for more IaC tooling, such as Ansible, Pulumi and Crossplane.

If you are interested in Terramate and want to learn more, feel free to book a demo with one of our engineers, join our Discord Community or try it out by following our getting started guide.


Ready to supercharge your IaC?

Explore how Terramate can uplift your IaC projects with a free trial or personalized demo.