
Technical Introduction to Terramate Catalyst

Sören Martius Chief Product Officer
Chris Schagen Chief Executive Officer
Selina Nazareth Developer Relations Manager

Learn how Terramate Catalyst powers Infrastructure-as-Code self-service with Bundles and Components. Follow a hands-on guide with real examples to integrate Catalyst into your IDP and let developers deploy AWS services like S3 and ECS using existing Terraform modules—without writing Terraform.


In our previous blog posts, we discussed why enabling developer self-service with Infrastructure-as-Code often doesn’t work and how Terramate Catalyst is reimagining Infrastructure-as-Code (IaC) self-service.

In this post, we explore the technical capabilities of Terramate Catalyst hands-on by working through several examples. By the end of this guide, you will know how to use Terramate Catalyst to enable developers to deploy AWS services, such as S3 or ECS, in self-service using existing Terraform modules.

If you prefer, you can take a look at the final result of this guide in the terramate-catalyst-examples repository on GitHub.

How Catalyst Works Under the Hood

Before we start writing code, let’s cover the basics of Catalyst.

At its core, Catalyst transforms how infrastructure is delivered and consumed inside organizations by introducing two new primitives: Bundles and Components.

Components

Components are reusable, opinionated infrastructure blueprints defined by platform engineers. They encode organizational standards, governance rules, naming conventions, security policies, cost controls, and so on. In practice, a Component may represent a “database setup,” “message queue,” “VPC,” “cache cluster,” or any other infrastructure pattern. A Component can contain arbitrary IaC: plain Terraform or OpenTofu resources, Terraform modules, or other IaC such as Kubernetes manifests. The idea behind Components is to provide infrastructure patterns that platform engineers can reuse and that a single Bundle or multiple Bundles can source.

Bundles

Bundles assemble one or more Components into ready-to-use, deployable units. These are what developers and AI agents consume when requesting infrastructure. Bundles abstract away all the complexity: no need to write Terraform, manage state, or deal with providers — you declare what you need (e.g., “a database for service X in environment Y”). Catalyst fills in the rest. Bundles are meant as a unit of reuse for application developers, who aren’t experts in IaC.

Division of responsibilities

This separation creates a clear division of responsibility:

  • Platform engineers design and maintain infrastructure logic, compliance, scalability, and IaC best practices.
  • Application developers (or AI agents) request infrastructure via simple, high-level abstractions — without needing to understand Terraform, module variables, or backend configuration.

In other words: Catalyst doesn’t replace IaC — it operationalizes it and elegantly hides the complexity for non-expert infrastructure “consumers”.

Ease of onboarding

Onboarding Catalyst to an existing IaC setup is straightforward. For example, Catalyst comes with helpers that allow importing existing Terraform modules as Components, and another command helps you create new Bundles without writing all the required configuration from scratch.

Part of Catalyst’s value proposition is that any existing IaC setup can easily be turned into a self-service infrastructure vending machine.

Versioning of Bundles and Components

Both Components and Bundles can be managed and versioned in Git repositories and in the upcoming Terramate Registry using semantic versioning. If you use the Registry, Terramate Cloud provides a dashboard to track Bundle and Component usage, as well as versions across multiple repositories and teams.

Scaffold complex IaC

Catalyst works by scaffolding the entire IaC, including state configuration and providers, without requiring developers to know Terraform, OpenTofu, or their configuration language, HCL. Developers can use the terramate scaffold command to choose from Bundles available in the current repository, a remote repository, or the upcoming registry in Terramate Cloud. Alternatively, they can use other existing approaches, such as the Terramate MCP Server.


Getting Started: Installing Terramate Catalyst

Let’s start working on some examples. Terramate Catalyst is not part of Terramate CLI but is available as a separate download on GitHub. It ships with two binaries, terramate and terramate-ls, that act as drop-in replacements for Terramate CLI. The easiest way to install Terramate Catalyst is with the asdf package manager.

asdf plugin add terramate-catalyst https://github.com/terramate-io/asdf-terramate-catalyst
asdf global terramate-catalyst 0.15.2-beta11 

Alternatively, you can download the binaries directly from the repository by choosing a release.

We will provide more convenient installation methods, such as additional package managers, soon.
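
To verify the installation, run the version command of the drop-in terramate binary (the exact output depends on the release you installed):

terramate version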

Example: Enable Developers to create an S3 Bucket in self-service

In this example, we focus on a simple use case: allowing developers to deploy a simple S3 bucket by defining only its name, without ever touching Terraform or OpenTofu.

In components/terramate-aws-s3-bucket/v1/component.tm.hcl, you can see how a Component is configured.

component.tm.hcl

define component metadata {
  class   = "example.io/tf-aws-s3"
  version = "1.0.0"

  name         = "AWS S3 Bucket Component"
  description  = "Component that allows creating an S3 bucket on AWS with configurable ACL (default: private) and versioning enabled."
  technologies = ["terraform", "opentofu"]
}

define component {
  input "name" {
    type        = string
    prompt      = "S3 Bucket Name"
    description = "The name of the S3 bucket"
  }

  input "acl" {
    type        = string
    description = "Access Control List (ACL) for the bucket. Valid values: 'private', 'public-read', 'public-read-write', 'aws-exec-read', 'authenticated-read', 'bucket-owner-read', 'bucket-owner-full-control', 'log-delivery-write'"
    default     = "private"
  }

  input "tags" {
    type        = map(string)
    description = "Tags to apply to resources"
    default     = {}
  }
}

Components configure metadata such as class, version, name, and description, and also expose the inputs available to Bundles.

Once created and configured, any IaC can be added to a Component. For example, you can find the configuration for a simple S3 bucket in components/example.io/terramate-aws-s3-bucket/v1/main.tf.tmgen, which uses simple Terramate code generation.

main.tf.tmgen

module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "5.9.0"

  bucket = component.input.name.value
  acl    = component.input.acl.value

  control_object_ownership = true
  object_ownership         = "ObjectWriter"

  # Disable Block Public Access settings when using public ACLs
  block_public_acls       = !tm_contains(["public-read", "public-read-write"], component.input.acl.value)
  block_public_policy     = !tm_contains(["public-read", "public-read-write"], component.input.acl.value)
  ignore_public_acls      = !tm_contains(["public-read", "public-read-write"], component.input.acl.value)
  restrict_public_buckets = !tm_contains(["public-read", "public-read-write"], component.input.acl.value)

  versioning = {
    enabled = true
  }

  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = component.input.tags.value
}

You can see that the S3 bucket comes with versioning and encryption enabled by default, which we want to enforce for every bucket we create. Developers can only configure what matters to them: in this case, the name of each bucket and whether it is private or public.

Next, let’s look at how we configure the Bundle that enables developers to scaffold S3 buckets in self-service. You can find the Bundle configuration in terramate-catalyst-examples/bundles/example.io/tf-aws-s3/v1/bundle.tm.hcl.

bundle.tm.hcl

define bundle metadata {
  class   = "example.io/tf-aws-s3/v1"
  version = "1.0.0"

  name         = "S3 Bucket"
  description  = <<EOF
    This Bundle creates and manages an S3 Bucket on AWS. The bucket can be configured as private or public, with private as the default.
  EOF
  technologies = ["terraform", "opentofu"]
}

define bundle {
  alias = tm_slug(bundle.input.name.value)

  input "env" {
    type                  = string
    prompt                = "Environment"
    description           = "A list of available environments to create the S3 bucket in."
    allowed_values        = global.environments
    required_for_scaffold = true
    multiselect           = false
  }

  input "name" {
    type                  = string
    prompt                = "S3 Bucket Name"
    description           = "The name of the S3 bucket"
    required_for_scaffold = true
  }

  input "visibility" {
    type        = string
    prompt      = "Bucket Visibility"
    description = "Whether the bucket should be private or public"
    default     = "private"
    allowed_values = [
      { name = "Private", value = "private" },
      { name = "Public Read", value = "public-read" },
      { name = "Public Read/Write", value = "public-read-write" }
    ]
  }
}

define bundle {
  scaffolding {
    path = "/stacks/${bundle.input.env.value}/s3/_bundle_s3_${tm_slug(bundle.input.name.value)}.tm.hcl"
    name = tm_slug(bundle.input.name.value)
  }
}

define bundle stack "s3-bucket" {
  metadata {
    path = tm_slug(bundle.input.name.value)

    name        = "AWS S3 Bucket ${bundle.input.name.value}"
    description = <<EOF
      AWS S3 Bucket ${bundle.input.name.value}
    EOF

    tags = [
      "example.io/aws-s3-bucket",
      "example.io/bundle/${bundle.uuid}",
      "example.io/aws-s3-bucket/${bundle.uuid}",
      "example.io/aws-s3-bucket/${tm_slug(bundle.input.name.value)}",
    ]
  }

  component "s3-bucket" {
    source = "/components/example.io/terramate-aws-s3-bucket/v1"
    inputs = {
      name        = bundle.input.name.value
      acl         = bundle.input.visibility.value
      bundle_uuid = bundle.uuid
      tags = {
        "example.io/bundle-uuid" = bundle.uuid
      }
    }
  }
}

A few things are happening here. First, we configure the Bundle metadata as we did for the Component in the previous section. In addition, the configuration does the following:

  • The Bundle exposes three inputs: name, a string; env, a select field that can be dev, stg, or prd; and visibility, which can be private, public-read, or public-read-write.
  • The define bundle scaffolding block defines the target path of the Bundle, so each time a user runs terramate scaffold to create a new S3 bucket, the IaC is scaffolded into a unique path. For example, a bucket named terramate-catalyst-example in the dev environment will be scaffolded into the stacks/dev/s3/terramate-catalyst-example/ directory.
  • It defines the required configuration for a single- or multi-stack architecture, including orchestration configuration, state backend, and providers.
  • It passes input variables down to the individual Components.

To test this example, clone the terramate-catalyst-examples repository and run terramate scaffold in the root of the repository.

git clone git@github.com:terramate-io/terramate-catalyst-examples.git

cd terramate-catalyst-examples

terramate scaffold

Running the scaffold command lists the Bundles you can use to initiate infrastructure.

Next, choose S3 Bucket and create a new private bucket named catalyst-example-bucket in the dev environment.

Creating a new instance of the S3 Bundle generates a configuration file in terramate-catalyst-examples/stacks/dev/s3/_bundle_s3_catalyst-example-bucket.tm.yml that will look similar to this:

apiVersion: terramate.io/cli/v1
kind: BundleInstance
metadata:
  name: catalyst-example-bucket
  uuid: fa2a2e9e-1a29-4e03-9ff6-c9d53cfdc157
spec:
  source: /bundles/example.io/tf-aws-s3/v1
  inputs:
    
    # A list of available environments to create the S3 bucket in.
    env: dev
    
    # The name of the S3 bucket
    name: catalyst-example-bucket
    
    # Whether the bucket should be private or public
    visibility: private

This developer-friendly YAML file contains the configuration for our Bundle instance.

The final step is to generate all required files from the Bundle configuration. To do so, run terramate generate, which generates the Terraform configuration.

 terramate generate
Code generation report

Successes:

- /stacks/dev/s3/catalyst-example-bucket
        [+] backend.tf
        [+] component_s3-bucket_main.tf
        [+] stack.tm.hcl
        [+] terraform.tf

Hint: '+', '~' and '-' mean the file was created, changed and deleted, respectively.

If you take a look at the generated component_s3-bucket_main.tf, you can see that the Terraform configuration for deploying a private S3 bucket named catalyst-example-bucket has been created in terramate-catalyst-examples/stacks/dev/s3/catalyst-example-bucket/component_s3-bucket_main.tf.

component_s3-bucket_main.tf

// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT

module "s3_bucket" {
  acl                      = "private"
  block_public_acls        = true
  block_public_policy      = true
  bucket                   = "catalyst-example-bucket"
  control_object_ownership = true
  ignore_public_acls       = true
  object_ownership         = "ObjectWriter"
  restrict_public_buckets  = true
  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }
  source = "terraform-aws-modules/s3-bucket/aws"
  tags = {
    "example.io/bundle-uuid" = "db8204c7-1fa3-49ae-9311-bca744f681f0"
  }
  version = "5.9.0"
  versioning = {
    enabled = true
  }
}

The result is native Terraform code that configures the S3 bucket, providers, and Terraform state backend, and can be deployed without further effort using automation workflows.
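
The generated backend.tf and terraform.tf are not shown above. With the S3 state backend used by the examples repository, the backend file will look roughly like the following sketch; the bucket, key, and region are hypothetical placeholders rather than the values the repository actually generates.

backend.tf (sketch)

terraform {
  backend "s3" {
    # Hypothetical placeholders: the real values come from the backend
    # configuration defined in the Bundle of the examples repository.
    bucket = "example-terraform-state"
    key    = "stacks/dev/s3/catalyst-example-bucket/terraform.tfstate"
    region = "us-east-1"
  }
}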

To deploy the bucket, you can orchestrate the terraform apply command using the Terramate Orchestration Engine. Make sure that you have valid AWS credentials configured in your environment.

Initialize the Terraform environment first by orchestrating terraform init.

 terramate run -X -- terraform init

terramate: Entering stack in /stacks/dev/s3/catalyst-example-bucket
terramate: Executing command "terraform init"
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/s3-bucket/aws 5.9.0 for s3_bucket...
- s3_bucket in .terraform/modules/s3_bucket
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 6.22.0, ~> 6.25.0"...
- Finding hashicorp/null versions matching "~> 3.2.0"...
- Installing hashicorp/aws v6.25.0...
- Installed hashicorp/aws v6.25.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Next, apply the changes by orchestrating terraform apply.

 terramate run -X -- terraform apply
                                       
terramate: Entering stack in /stacks/dev/s3/catalyst-example-bucket
terramate: Executing command "terraform apply"
module.s3_bucket.data.aws_region.current: Reading...
module.s3_bucket.data.aws_canonical_user_id.this[0]: Reading...
module.s3_bucket.data.aws_partition.current: Reading...
module.s3_bucket.data.aws_caller_identity.current: Reading...
module.s3_bucket.data.aws_region.current: Read complete after 0s [id=us-east-1]
module.s3_bucket.data.aws_partition.current: Read complete after 0s [id=aws]
module.s3_bucket.data.aws_caller_identity.current: Read complete after 0s [id=975086131449]
module.s3_bucket.data.aws_canonical_user_id.this[0]: Read complete after 1s [id=9da783ff7be6e9971d5ad7ce8956eb03ea19ecbe4c1250ace4ad596753f83e80]

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket.this[0] will be created
  + resource "aws_s3_bucket" "this" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = "catalyst-example-bucket"
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = (known after apply)
      + bucket_region               = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = false
      + policy                      = (known after apply)
      + region                      = "us-east-1"
      + request_payer               = (known after apply)
      + tags                        = {
          + "example.io/bundle-uuid" = "db8204c7-1fa3-49ae-9311-bca744f681f0"
        }
      + tags_all                    = {
          + "example.io/bundle-uuid" = "db8204c7-1fa3-49ae-9311-bca744f681f0"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + cors_rule (known after apply)

      + grant (known after apply)

      + lifecycle_rule (known after apply)

      + logging (known after apply)

      + object_lock_configuration (known after apply)

      + replication_configuration (known after apply)

      + server_side_encryption_configuration (known after apply)

      + versioning (known after apply)

      + website (known after apply)
    }

  # module.s3_bucket.aws_s3_bucket_acl.this[0] will be created
  + resource "aws_s3_bucket_acl" "this" {
      + acl    = "private"
      + bucket = (known after apply)
      + id     = (known after apply)
      + region = "us-east-1"

      + access_control_policy (known after apply)
    }

  # module.s3_bucket.aws_s3_bucket_ownership_controls.this[0] will be created
  + resource "aws_s3_bucket_ownership_controls" "this" {
      + bucket = (known after apply)
      + id     = (known after apply)
      + region = "us-east-1"

      + rule {
          + object_ownership = "ObjectWriter"
        }
    }

  # module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be created
  + resource "aws_s3_bucket_public_access_block" "this" {
      + block_public_acls       = true
      + block_public_policy     = true
      + bucket                  = (known after apply)
      + id                      = (known after apply)
      + ignore_public_acls      = true
      + region                  = "us-east-1"
      + restrict_public_buckets = true
      + skip_destroy            = true
    }

  # module.s3_bucket.aws_s3_bucket_versioning.this[0] will be created
  + resource "aws_s3_bucket_versioning" "this" {
      + bucket = (known after apply)
      + id     = (known after apply)
      + region = "us-east-1"

      + versioning_configuration {
          + mfa_delete = (known after apply)
          + status     = "Enabled"
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes 

module.s3_bucket.aws_s3_bucket.this[0]: Creating...
module.s3_bucket.aws_s3_bucket.this[0]: Creation complete after 8s [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_public_access_block.this[0]: Creating...
module.s3_bucket.aws_s3_bucket_versioning.this[0]: Creating...
module.s3_bucket.aws_s3_bucket_public_access_block.this[0]: Creation complete after 2s [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_ownership_controls.this[0]: Creating...
module.s3_bucket.aws_s3_bucket_ownership_controls.this[0]: Creation complete after 1s [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_acl.this[0]: Creating...
module.s3_bucket.aws_s3_bucket_versioning.this[0]: Creation complete after 3s [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_acl.this[0]: Creation complete after 1s [id=catalyst-example-bucket,private]
Releasing state lock. This may take a few moments...

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Congratulations, you just learned how to provide developers with a self-service golden path for deploying an S3 bucket on AWS.

Example: Reconfigure the S3 Bucket to be public instead of private

But what if a developer wants to change an existing S3 bucket created with Terramate Catalyst? It’s dead simple: just open terramate-catalyst-examples/stacks/dev/s3/_bundle_s3_catalyst-example-bucket.tm.yml and change the visibility attribute from private to public-read.

_bundle_s3_catalyst-example-bucket.tm.yml

apiVersion: terramate.io/cli/v1
kind: BundleInstance
metadata:
  name: catalyst-example-bucket
  uuid: fa2a2e9e-1a29-4e03-9ff6-c9d53cfdc157
spec:
  source: /bundles/example.io/tf-aws-s3/v1
  inputs:
    
    # A list of available environments to create the S3 bucket in.
    env: dev
    
    # The name of the S3 bucket
    name: catalyst-example-bucket
    
    # Whether the bucket should be private or public
    visibility: public-read

Afterwards, run terramate generate again to regenerate the Terraform configuration.

 terramate generate                      
Code generation report

Successes:

- /stacks/dev/s3/catalyst-example-bucket
        [~] component_s3-bucket_main.tf

Hint: '+', '~' and '-' mean the file was created, changed and deleted, respectively.

You can see that only component_s3-bucket_main.tf is regenerated to reflect the updated configuration.

component_s3-bucket_main.tf

// TERRAMATE: GENERATED AUTOMATICALLY DO NOT EDIT

module "s3_bucket" {
  acl                      = "public-read"
  block_public_acls        = false
  block_public_policy      = false
  bucket                   = "catalyst-example-bucket"
  control_object_ownership = true
  ignore_public_acls       = false
  object_ownership         = "ObjectWriter"
  restrict_public_buckets  = false
  server_side_encryption_configuration = {
    rule = {
      apply_server_side_encryption_by_default = {
        sse_algorithm = "AES256"
      }
    }
  }
  source = "terraform-aws-modules/s3-bucket/aws"
  tags = {
    "example.io/bundle-uuid" = "db8204c7-1fa3-49ae-9311-bca744f681f0"
  }
  version = "5.9.0"
  versioning = {
    enabled = true
  }
}

To deploy the changes, simply orchestrate terraform apply again.

 terramate run -X -- terraform apply

terramate: Entering stack in /stacks/dev/s3/catalyst-example-bucket
terramate: Executing command "terraform apply"
module.s3_bucket.data.aws_canonical_user_id.this[0]: Reading...
module.s3_bucket.data.aws_caller_identity.current: Reading...
module.s3_bucket.data.aws_partition.current: Reading...
module.s3_bucket.data.aws_region.current: Reading...
module.s3_bucket.data.aws_partition.current: Read complete after 0s [id=aws]
module.s3_bucket.aws_s3_bucket.this[0]: Refreshing state... [id=catalyst-example-bucket]
module.s3_bucket.data.aws_region.current: Read complete after 0s [id=us-east-1]
module.s3_bucket.data.aws_caller_identity.current: Read complete after 0s [id=975086131449]
module.s3_bucket.data.aws_canonical_user_id.this[0]: Read complete after 1s [id=9da783ff7be6e9971d5ad7ce8956eb03ea19ecbe4c1250ace4ad596753f83e80]
module.s3_bucket.aws_s3_bucket_public_access_block.this[0]: Refreshing state... [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_versioning.this[0]: Refreshing state... [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_ownership_controls.this[0]: Refreshing state... [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_acl.this[0]: Refreshing state... [id=catalyst-example-bucket,private]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.s3_bucket.aws_s3_bucket_acl.this[0] will be updated in-place
  ~ resource "aws_s3_bucket_acl" "this" {
      ~ acl                   = "private" -> "public-read"
        id                    = "catalyst-example-bucket,private"
        # (3 unchanged attributes hidden)

      ~ access_control_policy (known after apply)
      - access_control_policy {
          - grant {
              - permission = "FULL_CONTROL" -> null

              - grantee {
                  - id            = "9da783ff7be6e9971d5ad7ce8956eb03ea19ecbe4c1250ace4ad596753f83e80" -> null
                  - type          = "CanonicalUser" -> null
                    # (3 unchanged attributes hidden)
                }
            }
          - owner {
              - id           = "9da783ff7be6e9971d5ad7ce8956eb03ea19ecbe4c1250ace4ad596753f83e80" -> null
                # (1 unchanged attribute hidden)
            }
        }
    }

  # module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be updated in-place
  ~ resource "aws_s3_bucket_public_access_block" "this" {
      ~ block_public_acls       = true -> false
      ~ block_public_policy     = true -> false
        id                      = "catalyst-example-bucket"
      ~ ignore_public_acls      = true -> false
      ~ restrict_public_buckets = true -> false
        # (3 unchanged attributes hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.s3_bucket.aws_s3_bucket_public_access_block.this[0]: Modifying... [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_public_access_block.this[0]: Modifications complete after 1s [id=catalyst-example-bucket]
module.s3_bucket.aws_s3_bucket_acl.this[0]: Modifying... [id=catalyst-example-bucket,private]
module.s3_bucket.aws_s3_bucket_acl.this[0]: Modifications complete after 1s [id=catalyst-example-bucket,public-read]
Releasing state lock. This may take a few moments...

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Additional Examples

The terramate-catalyst-examples repository also comes with additional examples that focus on more complex use cases, such as multi-state deployments and dependencies among Bundles.

VPC and ALB (tf-aws-vpc-alb)

Creates and manages a VPC with public and private subnets, a NAT gateway, and an Application Load Balancer (ALB). The ALB is configured with a basic HTTP listener; target groups and routing rules are automatically added when deploying ECS services. A hypothetical Bundle instance for this example is sketched after the list below.

  • Creates a VPC with public and private subnets
  • Sets up NAT Gateway and Internet Gateway
  • Deploys an Application Load Balancer (ALB) in public subnets
  • Provides foundational networking infrastructure
  • Manages VPC and ALB in different state files
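
Scaffolding this Bundle produces a BundleInstance file just like the S3 example above. The following sketch assumes the Bundle lives at /bundles/example.io/tf-aws-vpc-alb/v1 and exposes env and name inputs; the file name, inputs, and values are hypothetical, so check the Bundle definition in the examples repository for the real ones.

_bundle_vpc-alb_shared-network.tm.yml (hypothetical)

apiVersion: terramate.io/cli/v1
kind: BundleInstance
metadata:
  name: shared-network
  uuid: 00000000-0000-0000-0000-000000000000 # generated by terramate scaffold
spec:
  source: /bundles/example.io/tf-aws-vpc-alb/v1
  inputs:

    # Hypothetical inputs; the actual Bundle may expose different ones.
    env: dev
    name: shared-network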

ECS Fargate Cluster (tf-aws-ecs-fargate-cluster)

Creates and manages an ECS Fargate cluster on AWS with a default capacity provider strategy that balances cost savings (Fargate Spot) with reliability (Fargate on-demand); a plain-Terraform sketch of this strategy follows the list below.

  • Creates an ECS Fargate cluster
  • Configures capacity provider strategy (Fargate Spot + on-demand)
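
The capacity provider strategy described above is a standard ECS pattern. In plain Terraform it looks roughly like the sketch below; this is not the Bundle’s actual code, and the cluster name and weights are illustrative.

capacity-providers.tf (sketch)

resource "aws_ecs_cluster" "this" {
  name = "example-cluster" # illustrative name
}

resource "aws_ecs_cluster_capacity_providers" "this" {
  cluster_name       = aws_ecs_cluster.this.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  # Keep a reliable baseline on Fargate on-demand...
  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    base              = 1
    weight            = 1
  }

  # ...and run the remaining tasks on cheaper Fargate Spot capacity.
  default_capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 4
  }
}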

ECS Fargate Service (tf-aws-ecs-fargate-service)

Creates and manages an ECS Fargate service that can be attached to existing ECS clusters, VPCs, and Application Load Balancers. It uses data sources with tag filters to discover and reference existing infrastructure created by the Bundles mentioned above; a sketch of this pattern follows the list below.

  • Creates an ECS Fargate service attached to existing cluster, VPC, and ALB
  • Uses AWS data sources to discover resources by tags
  • Configures container definitions and load balancer integration
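
Tag-based discovery with data sources is plain Terraform. The sketch below shows the general pattern only; the tag keys and values are illustrative, not the ones used in the examples repository.

discovery.tf (sketch)

# Look up the VPC created by the VPC/ALB Bundle via a tag (illustrative key/value).
data "aws_vpc" "selected" {
  tags = {
    "example.io/aws-vpc-alb" = "shared-network"
  }
}

# Find the private subnets of that VPC (the Tier tag is illustrative).
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.selected.id]
  }

  tags = {
    Tier = "private"
  }
}

# Look up the ALB by tag (illustrative key/value).
data "aws_lb" "this" {
  tags = {
    "example.io/aws-vpc-alb" = "shared-network"
  }
}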

Summary

That’s it: you just learned how to use Terramate Catalyst to provide self-service to developers who are not infrastructure experts. They can now deploy standardized, secure, and compliant infrastructure in almost no time and without DevOps support: essentially the promise of AI-driven infrastructure, but without the probabilistic nature of AI.

If you’d like to have an expert look at your infra self-service potential, feel free to schedule a meeting with us.

Ready to supercharge your IaC?

Explore how Terramate can uplift your IaC projects with a free trial or personalized demo.