# Atmos

Atmos is a framework for orchestrating and operating infrastructure workflows across multiple cloud platforms and DevOps toolchains. It provides a powerful abstraction layer that separates infrastructure configuration from code, enabling teams to manage complex cloud architectures using hierarchical YAML-based stack configurations combined with Terraform "root modules" called components. Built as a Go CLI with an extensible architecture, Atmos simplifies multi-account, multi-region, and multi-tenant deployments through DRY (Don't Repeat Yourself) configuration patterns.

It supports native Terraform/OpenTofu orchestration, Helmfile deployments, vendoring from remote repositories, custom commands, workflow automation, and policy validation with OPA. The framework is designed for enterprise-scale infrastructure management while remaining accessible to smaller teams.
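The configuration/code split shows up directly in the repository layout. The tree below is an illustrative sketch that assumes the default paths used by the `atmos.yaml` example later in this document; real projects vary.

```text
.
├── atmos.yaml                # CLI configuration
├── vendor.yaml               # component vendoring manifest
├── components/
│   └── terraform/            # Terraform "root modules" (components)
│       ├── vpc/
│       └── eks/
└── stacks/                   # hierarchical stack configuration
    ├── catalog/              # reusable component defaults
    ├── mixins/               # region/stage mixins
    ├── orgs/                 # per-org/tenant/stage stack manifests
    └── workflows/            # workflow definitions
```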
## CLI Commands

### terraform plan - Generate Terraform Execution Plan

Execute `terraform plan` for an Atmos component in a stack to preview infrastructure changes.

```bash
# Basic plan for vpc component in dev stack
atmos terraform plan vpc -s dev

# Plan with custom planfile output
atmos terraform plan vpc -s dev -out=my-custom.planfile

# Skip planfile generation (useful for Terraform Cloud)
atmos terraform plan vpc -s dev --skip-planfile

# Plan with variable overrides
atmos terraform plan vpc -s dev -var="instance_type=t3.large"

# Targeted planning for specific resources
atmos terraform plan vpc -s dev -target=aws_subnet.private

# Plan to destroy infrastructure
atmos terraform plan vpc -s dev -destroy

# Plan all affected components based on git changes
atmos terraform plan --affected

# Plan specific components across all stacks
atmos terraform plan --components vpc,eks

# Plan with YQ query filter
atmos terraform plan --query '.vars.tags.team == "platform"'

# Dry run to see what would be executed
atmos terraform plan vpc -s dev --dry-run
```

### terraform apply - Apply Infrastructure Changes

Apply Terraform changes for a component in a stack.

```bash
# Apply changes with manual approval
atmos terraform apply vpc -s dev

# Apply from a previously generated planfile
atmos terraform apply vpc -s dev --from-plan

# Auto-approve apply (use with caution)
atmos terraform apply vpc -s dev -auto-approve

# Apply all affected components
atmos terraform apply --affected

# Apply with specific identity for authentication
atmos terraform apply vpc -s prod --identity superadmin
```

### describe component - View Component Configuration

Display the complete deep-merged configuration for a component in a stack.

```bash
# Describe component configuration
atmos describe component vpc -s tenant1-ue2-dev

# Output as JSON
atmos describe component vpc -s tenant1-ue2-dev --format json

# Output as YAML to file
atmos describe component vpc -s tenant1-ue2-dev --file component.yaml

# Query specific configuration sections with YQ
atmos describe component vpc -s plat-ue2-prod --query .vars.tags
atmos describe component vpc -s plat-ue2-prod -q .settings

# Show provenance tracking (where values came from)
atmos describe component vpc -s tenant1-ue2-dev --provenance

# Disable template processing to see raw config
atmos describe component vpc -s tenant1-ue2-dev --process-templates=false

# Skip specific YAML functions
atmos describe component vpc -s tenant1-ue2-dev --skip=terraform.output,include

# Path-based component resolution (from component directory)
cd components/terraform/vpc
atmos describe component . --stack dev

# Authenticate before describing (for YAML functions requiring credentials)
atmos describe component vpc -s tenant1-ue2-dev --identity my-aws-identity
```

### describe stacks - List All Stack Configurations

View deep-merged configurations for all stacks or filter by specific criteria.

```bash
# View all stacks
atmos describe stacks

# Filter by specific stack
atmos describe stacks --stack plat-ue2-prod

# Filter by component and section
atmos describe stacks --components vpc --sections vars

# Output as JSON for processing
atmos describe stacks --format json | jq '.["plat-ue2-prod"]'
```

### vendor pull - Download Dependencies from Remote Sources

Pull component sources and mixins from remote repositories including Git, OCI registries, S3, and HTTP.

```bash
# Pull all vendored components
atmos vendor pull

# Pull specific component
atmos vendor pull --component vpc
atmos vendor pull -c vpc

# Pull components by tags
atmos vendor pull --tags networking
atmos vendor pull --tags dev,test

# Dry run to preview changes
atmos vendor pull --tags networking --dry-run

# Pull Helmfile component
atmos vendor pull -c echo-server --type helmfile
```

### workflow - Execute Multi-Step Workflows

Run predefined sequences of commands as automated workflows.

```bash
# Execute workflow with auto-discovery
atmos workflow eks-up --stack tenant1-ue2-dev

# Execute workflow from specific file
atmos workflow eks-up -f workflow1 --stack tenant1-ue2-dev

# Resume workflow from a specific step
atmos workflow eks-up -f workflow1 --from-step step3

# Execute workflow with authentication
atmos workflow deploy-all -f workflows --identity superadmin
```

### list - List Resources

List various Atmos resources like stacks, components, and workflows.

```bash
# List all stacks
atmos list stacks

# List all components
atmos list components

# List affected components based on git changes
atmos list affected

# List all workflows
atmos list workflows

# List vendor configurations
atmos list vendor
```

## Configuration

### atmos.yaml - CLI Configuration

The main configuration file that defines paths, settings, and integrations.

```yaml
# atmos.yaml - Core CLI configuration

# Base path for all relative paths
base_path: "./"

# Terraform component configuration
components:
  terraform:
    base_path: "components/terraform"
    command: terraform # or tofu, terraform-1.8, etc.
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false
    init:
      pass_vars: false
  helmfile:
    base_path: "components/helmfile"
    use_eks: true
    kubeconfig_path: "/dev/shm"
    cluster_name_pattern: "{namespace}-{tenant}-{environment}-{stage}-eks-cluster"

# Stack configuration
stacks:
  base_path: "stacks"
  included_paths:
    - "orgs/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{tenant}-{environment}-{stage}"

# Workflow configuration
workflows:
  base_path: "stacks/workflows"

# Vendor configuration
vendor:
  base_path: "./vendor.yaml"

# Logging
logs:
  file: "/dev/stderr"
  level: Info # Trace, Debug, Info, Warning, Off

# Schema validation
schemas:
  jsonschema:
    base_path: "stacks/schemas/jsonschema"
  opa:
    base_path: "stacks/schemas/opa"

# Template settings
templates:
  settings:
    enabled: true
    evaluations: 1
    sprig:
      enabled: true
    gomplate:
      enabled: true
      timeout: 5

# Terminal settings
settings:
  list_merge_strategy: replace # replace, append, merge
  terminal:
    max_width: 120
    pager: false
    unicode: true
    syntax_highlighting:
      enabled: true
      theme: dracula

# Version checking
version:
  check:
    enabled: true
    timeout: 1000
    frequency: 1h
```

### Stack Configuration - Define Infrastructure Environments

Stack manifests define which components to deploy with what configuration.

```yaml
# stacks/orgs/acme/plat/dev/us-east-2.yaml

# Import shared configurations
import:
  - orgs/acme/_defaults
  - catalog/terraform/vpc
  - mixins/region/us-east-2
  - mixins/stage/dev

# Global variables (inherited by all components)
vars:
  namespace: acme
  tenant: plat
  environment: ue2
  stage: dev
  region: us-east-2

# Environment variables for all components
env:
  TF_LOG: DEBUG

# Global settings
settings:
  spacelift:
    workspace_enabled: true

# Component definitions
components:
  terraform:
    # VPC component
    vpc:
      metadata:
        component: vpc # Points to components/terraform/vpc
        inherits:
          - vpc/defaults # Inherit from base component
      vars:
        vpc_cidr: "10.0.0.0/16"
        availability_zones:
          - us-east-2a
          - us-east-2b
          - us-east-2c
        nat_gateway_enabled: true
        tags:
          Environment: dev
          Team: platform
      # Terraform backend configuration
      backend:
        s3:
          bucket: acme-terraform-state
          key: terraform.tfstate
          region: us-east-2
          dynamodb_table: acme-terraform-locks
      # Environment variables for this component
      env:
        AWS_PROFILE: acme-dev

    # EKS cluster depending on VPC
    eks/cluster:
      metadata:
        component: eks/cluster
      vars:
        # Reference VPC outputs using YAML function
        vpc_id: !terraform.output vpc vpc_id
        subnet_ids: !terraform.output vpc private_subnet_ids
        kubernetes_version: "1.29"
        node_groups:
          general:
            instance_types:
              - t3.medium
            min_size: 2
            max_size: 10
            desired_size: 3
      settings:
        depends_on:
          - vpc # Explicit dependency
```
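Given the `name_pattern` of `{tenant}-{environment}-{stage}` from `atmos.yaml` and the variables above, this manifest is addressed as the stack `plat-ue2-dev`. Assuming that naming convention, the CLI commands shown earlier target it like this:

```bash
# Inspect the deep-merged configuration this manifest produces for the vpc component
atmos describe component vpc -s plat-ue2-dev

# Preview the resulting infrastructure changes for the EKS cluster
atmos terraform plan eks/cluster -s plat-ue2-dev
```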
## YAML Functions

### !terraform.output - Read Component Outputs

Read outputs from other Terraform components' remote state.

```yaml
components:
  terraform:
    my-app:
      vars:
        # Basic output reference (same stack)
        vpc_id: !terraform.output vpc vpc_id

        # Reference output from specific stack
        shared_vpc_id: !terraform.output vpc plat-ue2-prod vpc_id

        # Use template to reference current stack
        security_group_id: !terraform.output security-group {{ .stack }} id

        # Get item from list output using YQ
        subnet_id: !terraform.output vpc .private_subnet_ids[0]

        # Get value from map output
        db_host: !terraform.output database .endpoints.writer

        # Complex YQ expression with string concatenation
        jdbc_url: !terraform.output database ".hostname | \"jdbc:postgresql://\" + . + \":5432/mydb\""

        # Provide default value if component not provisioned
        optional_id: !terraform.output optional-component ".id // \"default-value\""

        # Default for list type
        subnets: !terraform.output vpc ".subnet_ids // [\"mock-subnet-1\", \"mock-subnet-2\"]"

        # Access map keys with special characters
        api_key: !terraform.output secrets '.keys["api-key-1"].value'
```

### !terraform.state - Read Remote State Directly

Faster alternative to `!terraform.output` that reads state files directly without running `terraform init`.

```yaml
components:
  terraform:
    app:
      vars:
        # Read from S3 backend state
        vpc_id: !terraform.state s3://my-bucket/vpc/terraform.tfstate .outputs.vpc_id.value

        # Read from local state file
        local_value: !terraform.state file:///path/to/terraform.tfstate .outputs.my_output.value

        # With templated stack reference
        cluster_arn: !terraform.state s3://{{ .vars.state_bucket }}/eks/terraform.tfstate .outputs.cluster_arn.value
```

### !include - Include External YAML Files

Include content from external YAML or JSON files.

```yaml
components:
  terraform:
    vpc:
      vars:
        # Include entire file
        network_config: !include configs/network.yaml

        # Include with YQ path selector
        specific_config: !include configs/settings.yaml .database.connection

        # Include from URL
        remote_config: !include https://example.com/config.yaml
```

### !env - Read Environment Variables

Read values from environment variables with optional defaults.

```yaml
vars:
  # Simple environment variable
  aws_region: !env AWS_REGION

  # With default value
  log_level: !env LOG_LEVEL info

  # Required variable (fails if not set)
  api_token: !env API_TOKEN
```

### !exec - Execute Shell Commands

Execute shell commands and use their output as values.

```yaml
vars:
  # Get current git commit
  git_sha: !exec git rev-parse --short HEAD

  # Get timestamp
  deploy_time: !exec date -u +%Y-%m-%dT%H:%M:%SZ

  # Run script with arguments
  generated_value: !exec ./scripts/generate-value.sh production
```

## Workflow Definitions

### Workflow Schema

Define automated multi-step workflows in YAML.

```yaml
# stacks/workflows/deploy.yaml
workflows:
  # Basic workflow
  deploy-infrastructure:
    description: Deploy core infrastructure components
    steps:
      - command: terraform apply vpc -auto-approve
      - command: terraform apply eks/cluster -auto-approve
      - command: terraform apply eks/addons -auto-approve

  # Workflow with stack specified
  deploy-dev:
    description: Deploy to development environment
    stack: plat-ue2-dev
    steps:
      - command: terraform plan vpc
      - command: terraform apply vpc -auto-approve
      - command: terraform plan eks/cluster
      - command: terraform apply eks/cluster -auto-approve

  # Workflow with retry configuration
  deploy-with-retry:
    description: Deploy with automatic retries
    steps:
      - command: terraform apply vpc -auto-approve
        retry:
          max_attempts: 3
          backoff_strategy: exponential
          initial_delay: 5s
          multiplier: 2
          max_elapsed_time: 10m

  # Workflow with shell commands
  full-deployment:
    description: Complete deployment pipeline
    steps:
      - name: validate
        command: terraform validate vpc
      - name: plan-vpc
        command: terraform plan vpc
      - name: notify-start
        type: shell
        command: |
          echo "Starting deployment..."
          curl -X POST https://slack.com/webhook -d '{"text": "Deployment started"}'
      - name: apply-vpc
        command: terraform apply vpc -auto-approve
      - name: verify
        type: shell
        command: aws ec2 describe-vpcs --region us-east-2

  # Workflow with authentication per step
  cross-account-deploy:
    description: Deploy across multiple AWS accounts
    steps:
      - command: terraform apply shared-vpc -s network-prod
        identity: network-admin
        name: deploy-shared-vpc
      - command: terraform apply app -s app-prod
        identity: app-admin
        name: deploy-application

  # Workflow with toolchain dependencies
  validate-infrastructure:
    description: Validate with external tools
    dependencies:
      tools:
        tflint: "^0.54.0"
        checkov: "latest"
    steps:
      - name: lint
        command: tflint --recursive
        type: shell
      - name: security-scan
        command: checkov -d components/terraform
        type: shell
      - name: validate
        command: terraform validate vpc
```
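Workflows defined in `stacks/workflows/deploy.yaml` are run with the `atmos workflow` command shown earlier. Assuming the file is referenced by its base name (`deploy`), as in the earlier `-f workflow1` examples:

```bash
# Run the basic workflow against a specific stack
atmos workflow deploy-infrastructure -f deploy --stack plat-ue2-dev

# deploy-dev pins its stack in the manifest, so no --stack flag is needed
atmos workflow deploy-dev -f deploy

# Resume a long pipeline from a named step
atmos workflow full-deployment -f deploy --from-step apply-vpc
```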
## Vendoring Configuration

### vendor.yaml - Define Component Sources

Configure component sources for vendoring from remote repositories.

```yaml
# vendor.yaml
apiVersion: atmos/v1
kind: AtmosVendorConfig
metadata:
  name: infrastructure-vendor-config
  description: Vendor configuration for all infrastructure components
spec:
  sources:
    # Vendor from GitHub with specific version
    - component: vpc
      source: github.com/cloudposse/terraform-aws-components.git//modules/vpc?ref={{.Version}}
      version: 1.380.0
      targets:
        - components/terraform/vpc
      included_paths:
        - "**/*.tf"
        - "**/*.md"
      excluded_paths:
        - "**/examples/**"
        - "**/.github/**"
      tags:
        - networking
        - core

    # Vendor from OCI registry
    - component: eks
      source: oci://public.ecr.aws/cloudposse/components/terraform/stable/aws/eks:{{.Version}}
      version: latest
      targets:
        - components/terraform/eks
      tags:
        - kubernetes
        - compute

    # Vendor from S3
    - component: custom-module
      source: s3::https://s3.amazonaws.com/my-bucket/modules/custom.zip
      targets:
        - components/terraform/custom
      tags:
        - internal

    # Vendor stack mixins
    - component: mixins
      source: github.com/my-org/infrastructure-mixins.git//stacks?ref={{.Version}}
      version: main
      targets:
        - stacks/mixins
      tags:
        - stacks
        - mixins

    # Local file copy
    - component: shared-config
      source: ../shared-infrastructure/configs
      targets:
        - configs/shared
      tags:
        - config
```
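With this manifest in place, the `vendor pull` commands from the CLI section operate on the components and tags declared above, for example:

```bash
# Pull only the vpc component defined in vendor.yaml
atmos vendor pull -c vpc

# Preview everything tagged networking or core without writing files
atmos vendor pull --tags networking,core --dry-run
```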
## Custom Commands

### Define Custom CLI Commands

Extend Atmos with custom commands in `atmos.yaml`.

```yaml
# atmos.yaml - Custom commands section
commands:
  # Simple terraform wrapper
  - name: tf
    description: Execute terraform commands
    commands:
      - name: plan
        description: Plan terraform components
        arguments:
          - name: component
            description: Name of the component
        flags:
          - name: stack
            shorthand: s
            description: Name of the stack
            required: true
        steps:
          - atmos terraform plan {{ .Arguments.component }} -s {{ .Flags.stack }}

  # Custom deployment command
  - name: deploy
    description: Deploy infrastructure
    commands:
      - name: all
        description: Deploy all components
        flags:
          - name: stack
            shorthand: s
            description: Target stack
            required: true
          - name: auto-approve
            description: Auto-approve changes
            type: bool
        env:
          - key: TF_IN_AUTOMATION
            value: "true"
        steps:
          # Hyphenated flag names must be read with the template "index" function
          - atmos terraform apply vpc -s {{ .Flags.stack }} {{ if index .Flags "auto-approve" }}-auto-approve{{ end }}
          - atmos terraform apply eks -s {{ .Flags.stack }} {{ if index .Flags "auto-approve" }}-auto-approve{{ end }}

  # Command with component config access
  - name: show
    description: Show component information
    commands:
      - name: vars
        description: Show component variables
        arguments:
          - name: component
            description: Component name
        flags:
          - name: stack
            shorthand: s
            required: true
        component_config:
          component: "{{ .Arguments.component }}"
          stack: "{{ .Flags.stack }}"
        steps:
          - 'echo "Component: {{ .ComponentConfig.component }}"'
          - 'echo "Workspace: {{ .ComponentConfig.workspace }}"'
          - 'echo "Backend bucket: {{ .ComponentConfig.backend.bucket }}"'
```
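Once these definitions are merged into `atmos.yaml`, the new subcommands are invoked like built-in commands, for example:

```bash
# Wrapper around terraform plan
atmos tf plan vpc -s plat-ue2-dev

# Deploy vpc and eks in sequence with auto-approve
atmos deploy all -s plat-ue2-dev --auto-approve

# Print selected fields from the component's merged configuration
atmos show vars vpc -s plat-ue2-dev
```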
## Go Template Functions

### Template Syntax in Stack Manifests

Use Go templates with Sprig and Gomplate functions in stack configurations.

```yaml
# Enable templates in atmos.yaml
templates:
  settings:
    enabled: true
    sprig:
      enabled: true
    gomplate:
      enabled: true

# Use in stack manifests
components:
  terraform:
    my-component:
      vars:
        # Access stack context
        stack_name: "{{ .stack }}"
        atmos_component: "{{ .atmos_component }}"

        # Access variables from context
        full_name: "{{ .vars.namespace }}-{{ .vars.tenant }}-{{ .vars.environment }}-{{ .vars.stage }}"

        # Sprig functions
        name_upper: "{{ .vars.name | upper }}"
        timestamp: "{{ now | date \"2006-01-02\" }}"
        random_suffix: "{{ randAlphaNum 8 | lower }}"

        # Conditional logic
        instance_type: "{{ if eq .vars.stage \"prod\" }}m5.xlarge{{ else }}t3.medium{{ end }}"

        # String manipulation
        sanitized_name: "{{ .vars.name | replace \"-\" \"_\" | lower }}"

        # Format function for stack names
        remote_stack: "{{ printf \"%s-%s-%s\" .vars.tenant .vars.environment .vars.stage }}"

        # Access settings
        spacelift_enabled: "{{ .settings.spacelift.workspace_enabled }}"

        # atmos.Component function for cross-component references
        vpc_outputs: '{{ (atmos.Component "vpc" .stack).outputs }}'
```
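Because templates are rendered when Atmos processes a stack, the quickest way to verify what a component actually receives is `describe component`, shown earlier; the `--process-templates=false` flag prints the raw, unrendered values for comparison:

```bash
# Show the final, rendered values for the component
atmos describe component my-component -s plat-ue2-dev

# Show the same configuration with template processing disabled
atmos describe component my-component -s plat-ue2-dev --process-templates=false
```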
## Summary

Atmos excels at managing complex, multi-environment infrastructure deployments by providing a clean separation between configuration (YAML stacks) and implementation (Terraform components). Its hierarchical configuration system with imports, inheritance, and deep-merging enables teams to define infrastructure once and deploy it consistently across development, staging, and production environments while maintaining environment-specific overrides.

The framework integrates seamlessly with existing Terraform workflows while adding powerful orchestration capabilities, including workflow automation, component vendoring, policy validation, and cross-component data sharing through YAML functions. Whether managing a single AWS account or hundreds of accounts across multiple cloud providers, Atmos provides the abstractions and tooling needed to scale infrastructure operations efficiently while maintaining code quality and security compliance.