
Terramate Catalyst: The New Efficient Frontier for Infrastructure-as-Code

Sören Martius, Chief Product Officer
Chris Schagen, Chief Executive Officer
Marius Tolzmann, Chief Technology Officer

Sharing infrastructure information across Terraform and OpenTofu stacks has long been one of the hardest problems in Infrastructure-as-Code. Terramate Catalyst introduces a new model—information sharing—that replaces fragile output sharing, remote state lookups, and dependency chains. With Bundles and contracts, teams can connect stacks with a single PR and deployment, reduce cognitive overhead, and scale IaC without state file archaeology or complex orchestration.


Infrastructure-as-Code tools such as Terraform and OpenTofu have always faced an arduous challenge at scale: getting different parts of your infrastructure to talk to each other shouldn't be this hard.

Before Catalyst, teams faced a frustrating reality. Want to reference a resource from one stack in another? You had three bad options, each with its own set of headaches.

The Output Sharing Problem

Traditional Terraform workflows (e.g., as introduced with Terragrunt Dependencies) forced you into a chicken-and-egg scenario: you need state to read from state. This meant chaining deployments, turning what should be a single PR into multiple deployments. Your team needs permissions to access state files scattered across your infrastructure. And if you're dealing with large outputs? You'll hit system limits on environment variables.

The pain of this approach is that the upstream stack must already be applied before you can even plan the dependent stack, which in turn requires multiple PRs and/or multiple applies. Every team member needs to be aware of this sequencing, which, in aggregate, becomes really taxing and frustrating, especially for non-expert users.

Features like Terragrunt Dependencies and Terramate Output Sharing tried to make configuring those relationships simpler, but the core problem and the complexity of two applies remain.

Because Output Sharing requires terraform output to be parsed and transformed into Terraform inputs in the dependent stack, additional problems occur:

  • Environment variables are used to populate the Terraform inputs; since the environment is limited in size, very large outputs may be impossible to share.
  • Terraform cannot run in the dependent stacks directly, as the inputs need to be populated first. This adds effort when debugging or getting things running in the first place, because additional tooling is always required to plan or apply the dependent stacks.
  • This is not a one-time thing. Every time new outputs are introduced, the dependency challenge rears its ugly head again, quickly compounding to the detriment of the platform team's ability to execute.
  • Mocking outputs (as supported by Terragrunt and Terramate) fixes the symptoms and allows planning dependent stacks without existing outputs, but plans then only reflect the mocked inputs and must be regenerated during apply with the real values.
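For reference, the dependency wiring and mocking described above look roughly like this in Terragrunt (paths, output names, and mock values are illustrative):

```hcl
# terragrunt.hcl in the dependent stack
dependency "vpc" {
  config_path = "../vpc"

  # Mocked values let `plan` run before the VPC stack has ever been
  # applied, but the resulting plan reflects these placeholders, not
  # the real values, and must be regenerated during apply.
  mock_outputs = {
    vpc_id = "vpc-00000000"
  }
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}
```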

In short, the complexity is shifted from Terraform to tooling, unfortunately at the cost of an added mental load and complexity tax.

Alternatives: Remote State and Data Sources

Different approaches have been proposed to mitigate this problem.

Remote state data sources work by reading an existing state file and looking up the required values. This essentially removes the need for additional tooling, as we stay in native Terraform. We also only need one PR if we postpone the remote state lookup, so the need for mocking goes away.
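A minimal sketch of such a lookup; the backend type, bucket, key, and output names are illustrative, and are exactly the details a user has to hunt down by hand:

```hcl
# Dependent stack reads values straight from the upstream stack's state.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"   # must be known to the user
    key    = "vpc/terraform.tfstate"
    region = "eu-central-1"
  }
}

resource "aws_db_subnet_group" "main" {
  name       = "main"
  subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnet_ids
}
```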

The downside, however, is that it needs to be configured, which requires knowing the location of the state, i.e., the right bucket and key. This information isn't readily available in Terraform and needs to be looked up by the user during setup. Another con is that, in practice, you can only postpone once.

In short, the complexity is partially mitigated, but unfortunately also partly shifted back to the user. So it also doesn’t really solve the problem.

Data sources get closer. They use the cloud provider API to query a specific resource. They handle new resources better and can be postponed, meaning you can get by with only one PR. This has so far been our recommended approach, as it is the "cleanest".

Yet the downside is that you now have to find the right resource. Since IDs are often random, and a specific resource can be a needle in a haystack, you need to know the infrastructure or invest time in finding the right resource. This can make things difficult, especially if the person who did the initial work is not the one who now needs to read the data. Documentation thus becomes critical (and in practice is often missing when needed most).
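A sketch of the data source approach; the tag names and values are illustrative, and knowing which tag or filter identifies the right resource is precisely the knowledge burden described above:

```hcl
# Look up the VPC via the cloud provider API instead of state.
data "aws_vpc" "main" {
  filter {
    name   = "tag:Environment"   # someone has to know how this was tagged
    values = ["production"]
  }
}

# Then find its subnets the same way.
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }
}
```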

So, in a nutshell, the alternatives are all compromises, with different demands on cognitive load, manual work, and knowledge. But what if there were a far simpler way?

Information Sharing: The Catalyst Way

Catalyst introduces information sharing, combining the advantages of all these approaches with none of the drawbacks. One PR. One deployment. No permission juggling. No state file archaeology. No hunting for resources.

Here's how it works: Bundles share two types of information.

  • Configuration gets shared at code generation time, so your code automatically adjusts to config from different stacks.
  • Data source information gets shared at deployment time, giving you exactly what you need to read the resources you want.

This information is then used as follows:

  • Bundles own the creation of infrastructure in stacks (creating new stacks or extending configuration in existing ones).
  • Bundles also provide a UUID that can be used to tag and identify resources created by Components. Components take care of generating the actual code in stacks (Terraform, OpenTofu, Kubernetes Manifests, etc.).
  • Each Bundle implements a Bundle class that can define a contract for the exported data.
  • Bundles can access the configuration data of other Bundles by referencing them via UUID or via human-friendly aliases.
  • A repository has global knowledge of all instantiated Bundles and all their configurations. In future releases, Terramate Catalyst will allow cross-repository access to information, too.

As a high-level example: a database Bundle can access a VPC Bundle of class example.com/vpc/v1 with the alias main and read its uuid and exported configuration, as well as the actual Bundle inputs.

This makes it easy to share information about the resources a Bundle creates, rather than the resources themselves, allowing a complete static configuration to be generated before planning or applying the actual resources.

The end user does not need to know the details of the tagging strategies used; platform engineers can rely on the Bundle contracts to ensure the tags that populate the data source filters are in place.

The simplest tagging strategy is for a Bundle to always tag cloud resources with {class} = {uuid}, e.g. example.com/vpc/v1 = {uuid}. Other Bundles can then use that same tag by referencing class and alias to get the uuid: tm_bundle("example.com/vpc/v1", "main").uuid. This can be set up and extended in flexible ways to meet any team's needs.
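Putting the pieces together, here is a minimal sketch of what a Component might generate from this tagging strategy. Only tm_bundle() and the {class} = {uuid} tag convention come from the description above; the data source shape and filter name are assumptions for illustration:

```hcl
# In the dependent stack, a Component generates a tag-based lookup.
# At code generation time, tm_bundle(...) resolves to the concrete
# UUID, so the emitted file is plain, static Terraform.
data "aws_vpc" "main" {
  filter {
    name   = "tag:example.com/vpc/v1"
    values = [tm_bundle("example.com/vpc/v1", "main").uuid]
  }
}
```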

The result is generated code that shares information, not data. Data is simply read when needed. The user does not need to know which exact stack requires the data, nor any state file configuration.

Each stack contains native Terraform code that can be executed without any additional tooling. Catalyst also does not require any third-party orchestration, so you can use it with Terraform Cloud or any other Terraform automation provider. Needless to say, it also integrates seamlessly with Terramate Cloud.

From Output Sharing to Information Sharing

Output sharing was created to mitigate the pain of configuring data sources, but it ultimately introduced several new pains in the process.

With Terramate Catalyst information sharing, we have solved the original root problem. Instead of shifting complexity around, we have completely removed it from the user experience.

We believe information sharing will be the new standard, as it dramatically reduces the level of expertise required to spin up complex infrastructure.

So long, output sharing; you won't be missed.

Other Benefits from Terramate Catalyst

Information sharing is just one of many benefits that Terramate Catalyst provides:

  • For platform teams, creating reusable blueprints and packaging them into Bundles is easier than ever.
  • Bundles can create and maintain multiple stacks via a single configuration, making it possible to provide a single API to users, AI agents, or internal developer portals.
  • It's YAML for the end user. No need to learn HCL or understand the underlying IaC engine used.
  • Bundles can create Terraform, OpenTofu, Kubernetes Manifests, Helm instantiations, and more.
  • Bundles enable full self-service for non-expert and expert users alike, with compliance and configuration guaranteed by your team of expert platform engineers.

Ready to see it in action? Check out our introduction guide to see how Catalyst transforms your IaC workflow.

Ready to supercharge your IaC?

Explore how Terramate can uplift your IaC projects with a free trial or personalized demo.