But when using Terragrunt, your project effectively becomes a Terragrunt project: you have to adopt another tool and syntax, you lose the ability to run Terraform and OpenTofu natively, and you effectively have to migrate your project to an opinionated approach that might not work for you in the long run.
With Terramate CLI, we provide a flexible and lightweight alternative to Terragrunt that helps you solve the same challenges, and more, in a non-intrusive way. But instead of trying to replace Terragrunt, you can onboard Terramate to any existing Terragrunt project to supercharge it with advanced capabilities. If you are interested in learning how Terramate compares to Terragrunt and why it’s sometimes a good idea to use both together, please take a look at our Terramate and Terragrunt blog post.
As mentioned in the introduction of this article, one of the biggest challenges in IaC is sizing your stacks and environments correctly. Monolithic state files (often referred to as a “Terralith”) are problematic: they cause long-running pipelines, a large blast radius, and poor collaboration due to waiting times caused by sequential and blocking pipelines. This is why Terramate introduces a concept called stacks.
A stack is a unit of configuration that often represents one or multiple infrastructure resources that can be deployed and managed independently.
There are various good reasons for splitting your code into several stacks:
- Clear ownership (e.g., via CODEOWNERS)

Those and more are very well described in Lesson 3 of Gruntwork's blog post “5 Lessons learned from writing over 300,000 lines of infrastructure code”.
Stacks are Infrastructure as Code agnostic and can be used to orchestrate and manage any IaC tool (e.g., Terraform, OpenTofu, Pulumi, Kubernetes, Ansible, Crossplane, CloudFormation, etc.).
Info: Stacks seamlessly integrate with Terramate Cloud, allowing you to add features such as observability, drift detection, change history, insights into resources, misconfiguration prevention, and more.
But splitting your code and state into smaller independent stacks has some significant tradeoffs: among other things, it requires you to orchestrate the execution of stacks in the right order and to share data between them.
With Terramate, we aim to solve these problems efficiently and in a non-intrusive way.
Stacks should not be mistaken for the different approaches available to managing multiple environments. For example, in Terraform and OpenTofu, the most commonly used approaches are workspaces, directories, Terragrunt, TFVars, and partial backend configurations. While most tools available focus on a single approach, Terramate is designed to be agnostic. This means that stacks in Terramate integrate with any existing approach to manage different environments, and different stacks can use different approaches, too.
Technically, a stack in Terramate is just a directory that contains a Terramate configuration file (usually a `stack.tm.hcl`). But when managing infrastructure configuration with any IaC in stacks, a stack usually contains:

- a `stack.tm.hcl` file to configure the metadata and, optionally, the orchestration behavior of the stack.

For example:
```
stacks/
  vpc/
    main.tf       # Configures our resources managed with IaC
    backend.tf    # Configures the state backend
    stack.tm.hcl  # Configures the Terramate stack
```
To configure a directory as a stack in Terramate, a configuration file (any file ending with `.tm` or `.tm.hcl`) that provides a `stack` block has to exist in the directory. The `stack` block can be empty but can also define metadata about the stack, such as its name or description. For example:
```hcl
stack {
  name        = "My stack"
  description = "My stack description"
}
```
We provide IDE extensions, such as our VSCode extension, to support native highlighting and autocompletion of `.tm` files inside your IDE.
In the orchestration section, you can also configure the orchestration behavior of each stack in relation to other stacks (e.g., to define the order of execution of multiple stacks).
```hcl
# metadata
stack {
  name        = "My stack"
  description = "My stack description"

  # optional orchestration settings
  after = [
    "tag:prod:networking",
    "/prod/apps/auth",
  ]
}
```
If no file in a directory defines a `stack` block, Terramate will not identify this directory as a stack. Terramate does not support stacks inside of stacks, which means stacks need to be defined in leaf directories.
Stacks can be structured into directories with any level of subdirectories. Such a structure allows you to define the implicit order of execution when orchestrating commands such as `terraform apply` in stacks simply by moving them around in your directory tree. This enables you to define a structure like the following:
```
config.tm
modules/
  my-vpc-module/
    main.tf
stacks/
  config.tm
  gcp-projects/
    my-staging/
      config_project.tm
      my-vpc/
        terramate.tm
        main.tf
    my-prod/
      config_project.tm
      my-vpc/
        terramate.tm
        main.tf
```
Stacks in Terramate are units that can be used to manage one or multiple infrastructure resources in isolation. Later in this article, we will learn how to orchestrate commands such as `terraform apply` or `tofu apply` in stacks, but for now, let’s understand how we can use code generation in Terramate to generate code in stacks and reduce manual maintenance effort to a bare minimum.
As each stack manages its own state, the most apparent code duplicated across all stacks is the backend definition that Terraform and OpenTofu use to permanently store their state, along with the version of Terraform or OpenTofu to use. While you can resort to simple symbolic links to set the same Terraform or OpenTofu version in all stacks, the backend configuration in Terraform and OpenTofu requires you to set a unique state file location for each stack.
In addition, Terraform and OpenTofu require several providers to operate, which need to be configured to, e.g., define an `account` and a `region` in Amazon Web Services (AWS) or a `project` in Google Cloud Platform (GCP), or to pin a specific provider version that should be used.
Terramate CLI comes with a powerful compiler that allows for generating HCL code, such as Terraform and OpenTofu configuration, as well as arbitrary files in other formats such as JSON and YAML. This removes the need to manually maintain duplicated files, such as Terraform and OpenTofu backend or provider configurations, across multiple stacks. It can also be used for advanced use cases, such as generating entire infrastructure configurations in stacks that meet certain conditions (e.g., by path, tags, or name).
The nice thing about code generation in Terramate is that it allows you to share data between stacks using Globals. Globals let you define, extend and overwrite global and shared configuration values at any level in your stack and directory hierarchy.
Terramate offers two main approaches for sharing data among stacks: Globals, for sharing data at build time via code generation, and Outputs Sharing, for sharing data at run time.
As mentioned, Terramate uses HCL as its configuration language and supports reading its configuration from anywhere within the hierarchy, from all files ending with `.tm` or `.tm.hcl`.
In Terramate, you can define global values with the `globals` block, which can be defined in configuration files at any level.
```hcl
globals {
  <variable-name> = <expression>
}
```
Globals can be defined, extended, and overwritten at any level of your stack hierarchy. If a global is defined on multiple levels, the more specific value (closer to the stack) is used, overwriting the previous one.
Terramate Globals are evaluated lazily at the stack level.
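As an illustrative sketch of how this hierarchy works (the `environment` global and its values are made up for this example):

```hcl
# file: stacks/config.tm
# default value for all stacks below /stacks
globals {
  environment = "staging"
}

# file: stacks/gcp-projects/my-prod/config_project.tm
# overwrites the value for all stacks below /stacks/gcp-projects/my-prod
globals {
  environment = "production"
}
```

Stacks under `my-prod` resolve `global.environment` to `"production"`, while all other stacks fall back to the value defined at the `/stacks` level.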
Terramate supports generating actual HCL code using the `generate_hcl` block to make use of the data shared via globals. Additionally, a variable namespace called `terramate` is available to access stack and general metadata, such as the stack's path within the repository or its name.
```hcl
# file: stacks/config.tm
globals {
  # define a bucket name that is used when generating backend.tf defined below
  gcs_bucket_name = "my-state-bucket"

  # the following will calculate the path name of each stack
  # but remove the / prefix as gcs does not handle this well
  gcs_bucket_prefix = tm_substr(terramate.path, 1, -1)
}

# The block label specifies the name of the file to create in stacks.
# This file will be generated in all stacks reachable from this configuration.
generate_hcl "backend.tf" {
  content {
    terraform {
      backend "gcs" {
        bucket = global.gcs_bucket_name
        prefix = tm_try(global.gcs_bucket_prefix, "some-prefix")
      }
    }
  }
}
```
Terramate supports everything already known from Terraform and OpenTofu: simple and complex types, and functions.
Most functions in Terraform and OpenTofu are available in Terramate but prefixed with `tm_` to clarify what will be evaluated while generating code and what will be part of the generated code. Additionally, Terramate adds Terramate-specific functions that can be helpful in advanced code-generation scenarios.
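As a small illustration (the global name is made up; `tm_replace` and `tm_lower` mirror Terraform's `replace` and `lower` functions):

```hcl
globals {
  # evaluated by Terramate at generation time, not by Terraform
  # -> "my-state-bucket"
  bucket_name = tm_lower(tm_replace("My State Bucket", " ", "-"))
}
```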
Within the `content` block, any HCL code can be used. It will be partially evaluated (all globals and Terramate variables will be replaced) and written to a file inside the stack directory. The file name is specified as the block's label.
When running `terramate generate`, the resulting files from the structure explained above will be:
Generated file: `stacks/gcp-projects/my-staging/my-vpc/backend.tf`

```hcl
// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-staging/my-vpc"
  }
}
```
Generated file: `stacks/gcp-projects/my-prod/my-vpc/backend.tf`

```hcl
// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT
// TERRAMATE: originated from generate_hcl block on /stacks/config.tm
terraform {
  backend "gcs" {
    bucket = "my-state-bucket"
    prefix = "stacks/gcp-projects/my-prod/my-vpc"
  }
}
```
In the same way, any HCL code can be generated into multiple files. To apply the generated changes, Terramate provides the `terramate run` command to execute commands inside all stacks. The following section explains this flow and its improvements in detail.
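As another sketch, a similar `generate_hcl` block could maintain a shared provider configuration in every stack (the provider, version constraint, and region below are assumptions for illustration):

```hcl
# file: stacks/config.tm (illustrative values)
globals {
  aws_region = "eu-central-1"
}

generate_hcl "provider.tf" {
  content {
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }

    provider "aws" {
      # global.aws_region is replaced at generation time
      region = global.aws_region
    }
  }
}
```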
After running `terramate generate`, it is, of course, possible to manually change into a stack directory and run `terraform init`, `terraform plan`, and `terraform apply`.
But we didn’t stop there. With Terramate, you can also generate any other arbitrary configuration format, such as JSON or YAML, using the `generate_file` block.
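A minimal sketch of the `generate_file` block (the file name and keys are made up; `tm_jsonencode` mirrors Terraform's `jsonencode`):

```hcl
# Generates a JSON file named metadata.json in all reachable stacks
generate_file "metadata.json" {
  content = tm_jsonencode({
    stack_name = terramate.stack.name
    stack_path = terramate.path
  })
}
```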
The obvious upside of using code generation compared to using one of the many existing wrapper approaches is that you always end up generating native infrastructure code that can be executed without any additional tooling and without causing any backward compatibility issues.
While globals and code generation are great for solving data-sharing challenges at build time, sometimes you encounter scenarios where you want to share data between stacks at run time. For that, Terramate CLI introduces Outputs Sharing.
Outputs Sharing is an advanced feature that uses code generation and orchestration to share the execution output of stacks as inputs to other stacks.
This is very similar to using dependencies in Terragrunt, but the big difference is that Terramate allows you to stay in native Terraform and OpenTofu by generating `outputs` and `variables` definitions and by injecting output data from one stack as input data into another stack using the well-known `TF_VAR_name` approach.
If you want to learn more, we advise you to take a look at the Outputs Sharing documentation.
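Conceptually, the generated native code boils down to the familiar Terraform pattern below (a hand-written sketch with an illustrative `vpc_id` value, not the exact code Terramate generates):

```hcl
# producer stack: exposes a value as a regular Terraform output
output "vpc_id" {
  value = module.vpc.id
}

# consumer stack: receives the value as a regular variable;
# the orchestrator injects it at run time as TF_VAR_vpc_id
variable "vpc_id" {
  type = string
}
```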
The orchestration features covered in this section don’t depend on generated code and can work with any native IaC tool such as Terraform, OpenTofu, Terragrunt, Kubernetes YAML, Pulumi, etc.
Terramate allows you to execute any command in stacks using `terramate run <cmd>`. For example, `terramate run terraform init` will execute `terraform init` in all defined stacks.
In general, you do not want to execute all stacks all the time, in order to reduce runtime and blast radius.
With Terramate, you can also `cd` into any directory and execute a command in all stacks reachable from this directory (i.e., all stacks in subdirectories of the current one). You can also execute a specific stack by using `terramate -C <dir|stack> <cmd>`.
Terramate’s orchestration works by building a directed acyclic graph (DAG) of all stacks available in a repository. You can then use different filters, such as `--tags` or `--changed`, to select only stacks that meet certain criteria. Using a graph-based approach allows us to detect dependencies among stacks and to decide which stacks need to be orchestrated sequentially and which can be orchestrated in parallel.
For example, to run a command in all stacks that contain changes and are tagged `k8s`, you can run:
```sh
terramate run \
  --tags k8s \
  --changed \
  -- \
  kubectl diff
```
The real power of Terramate’s orchestration lies in its `git` integration (we plan to support other VCSs in the future). The `git` integration enables Terramate to detect changed stacks based on the files changed within a PR, or when the default branch (e.g., `main`) has changed since the last merge or a specific commit.
The detection also includes recursive scanning of local modules (modules in the same repository) and marking stacks as changed if a module or sub-module has changed.
Change detection is enabled by providing the `--changed` option and can be configured to use a specific branch as a reference. By default, `origin/main` is used as the reference.
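The reference branch and remote can be set in the project-wide Terramate configuration; a sketch (key names follow the Terramate project configuration docs, the values are examples):

```hcl
# file: terramate.tm.hcl (project root)
terramate {
  config {
    git {
      default_remote = "origin"
      default_branch = "main"
    }
  }
}
```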
Any file inside a stack directory or within the sub-directories of a stack is monitored for changes.
The change detection comes with different integrations. For example, when orchestrating Terragrunt with Terramate, the change detection will respect Terragrunt-specific configurations such as `dependencies`.
While developing Terramate, we discussed whether and how we needed to define dependencies between stacks. As Terraform stacks can get very complex, we agreed to start with the basics: allowing you to define an order of execution and to force a stack to always run whenever another stack runs.
In a stack configuration, we allow defining the order of execution. For example, to always make sure the production VPC is executed after the staging VPC:
```hcl
stack {
  name = "My production VPC"
  after = [
    "/stacks/gcp-projects/my-staging/my-vpc",
  ]
}
```
Other directives available to configure relationships between stacks are `before`, `wants`, and `wanted_by`.
The `before` directive is simply the counterpart to `after`: it defines that the stack will be executed before all stacks in the given set.
The `wants` directive also takes a set of stacks and ensures that whenever the current stack is executed, all stacks defined in `wants` also run the same command.
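For example (the stack name and path below are illustrative):

```hcl
stack {
  name = "My app"
  # whenever this stack runs, the monitoring stack
  # runs the same command as well
  wants = [
    "/stacks/gcp-projects/my-prod/monitoring",
  ]
}
```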
Having stacks defined in `wants` and in `after` or `before` allows us to define whether those stacks always run before or after the current stack. This can be used in scenarios where one stack reads the state of resources via a Terraform `data` source and wants to ensure that the data is always updated before being read.
If you execute a selected set of stacks that includes a `wants` definition, the stacks configured as wanted are always selected in addition. If a selected stack defines an order of execution with a stack that is not selected, that order of execution is ignored, as the stack is not being executed.
Execution stops by default if any stack executes a failing command. This can be overridden using the command-line option `terramate run --continue-on-error <cmd>`.
By default, Terramate will try to protect execution in undefined situations. Various safeguards, which can be disabled if needed, are applied by default:

- Only changes merged to the default branch (e.g., `main`) are considered when running commands such as `terraform plan` or `terraform apply`
To learn more about the orchestration safeguards in Terramate and how to configure or optionally disable them, please see the safeguards documentation.
The orchestration in Terramate is extremely powerful and dynamic. It comes with implicit order of execution that can optionally be overwritten with explicit configuration. If you want to learn more about how the orchestration works in Terramate, we recommend you take a look at the orchestration documentation.
Orchestrating different Infrastructure as Code stacks is usually preferred in automation. For example, you typically want to apply GitOps principles, allowing teams to introduce changes in pull requests that can be reviewed and approved before merging back to main and triggering a deployment (e.g., `terraform apply`). Tools such as Terraform and OpenTofu don’t include automation out of the box, leading teams to implement workflows for GitHub Actions, GitLab CI/CD, Bitbucket Pipelines, and others over and over again.
That’s why Terramate CLI comes with pre-configured workflows that we call CI/CD Blueprints. These blueprints allow you to automate common tasks such as pull requests with plan previews, deployments, and scheduled drift detection workflows that follow best practices and industry standards. The workflows provided have been battle-tested by hundreds of organizations in production and will help you automate your IaC projects in no time.
To learn more about the blueprints, look at the automation blueprints available in our documentation.
- Combining `global` variables and `terramate` metadata allows for very powerful and complex configurations
- Functions can be used in `globals` as well as in `generate_hcl` and `generate_file` blocks (prefixed with `tm_`)
- The order of execution can be configured explicitly (`before` and `after` features)
- Stacks can be forced to run together (`wants` and `wanted_by` feature)
- Stacks can be filtered by status (e.g., `failed` or `drifted` stacks)

This article only touched on what’s possible with Terramate CLI. In addition, there’s Terramate Cloud, which seamlessly integrates with Terramate CLI to provide features such as drift detection, misconfiguration detection, alerts for failed deployments and detected drift, asset inventory management, and more.
While Terramate currently focuses on Terraform, OpenTofu, Terragrunt and Kubernetes YAML, we plan to add support for more IaC tooling, such as Ansible, Pulumi and Crossplane.
If you are interested in Terramate and want to learn more, feel free to book a demo with one of our engineers, join our Discord Community or try it out by following our getting started guide.
Explore how Terramate can uplift your IaC projects with a free trial or personalized demo.