
Terraform Naming Conventions & Best Practices: A hell-of-a-practical guide

Francesco Altomare, Senior Sales Engineer @ GlobalDots
19.08.2021

Modules naming conventions 

Based on the HashiCorp documentation, we should follow these general naming conventions for Terraform modules.

Based on the above, all Terraform modules should follow this ruleset:

  • All source code is kept in Git.
  • All modules follow the naming convention terraform-<PROVIDER>-<NAME>, for example terraform-aws-ec2, terraform-azure-vms, terraform-infoblox-dns.
  • All Terraform modules should have unit/integration tests, for example with Terratest, Kitchen-Terraform or another test framework; Terratest is the preferred framework.
  • Terraform modules shouldn't use provisioners; provisioners are a last resort.
  • All Terraform modules should be secure and follow vendor and HashiCorp security best practices. In other words, use scanning tools like Checkov or tfsec.
  • All Terraform modules should have correct code style.
  • All terraform modules should follow Semantic Versioning
  • Use snake_case for all resource names 
  • Declare all variables in variables.tf, including a description and type 
  • Declare all outputs in outputs.tf, including a description 
  • Always use relative paths and the file() helper 
  • Pin all modules and providers to a specific version or tag 
  • Prefer separate resources over inline blocks (e.g. aws_security_group_rule over  aws_security_group) 
  • Prefer variables.tf over terraform.tfvars to provide sensible defaults 
  • Terraform versions and provider versions should be pinned, as it’s not possible to safely  downgrade a state file once it has been used with a newer version of Terraform 
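As a sketch of the pinning rules above (the module and version numbers are illustrative), a versions.tf might look like:

```hcl
terraform {
  # pin Terraform itself so a newer CLI cannot silently upgrade the state file
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # "~> 3.60" allows patch/minor releases only: >= 3.60, < 4.0
      version = "~> 3.60"
    }
  }
}

# in main.tf, pin modules to an exact version or tag as well
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.64.0"
}
```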

Terraform module structure 

All Terraform modules should follow the structure shown below:

.
├── .gitignore
├── .markdownlint.json
├── .pre-commit-config.yaml
├── LICENSE
├── README.md
├── VERSION
├── examples
│   ├── complete
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   └── minimal
│       ├── main.tf
│       ├── outputs.tf
│       ├── variables.tf
│       └── versions.tf
├── main.tf
├── outputs.tf
├── test
│   ├── go.mod
│   ├── go.sum
│   └── terraform_module_gcp_dns_test.go
├── variables.tf
└── versions.tf

  • .gitignore – should contain all generated and extra files; let's keep the Git repository clean (required)
  • .markdownlint.json – linting configuration for Markdown (optional)
  • .pre-commit-config.yaml – configuration for the pre-commit framework (required)
  • LICENSE – explains under which license this module is released
  • README.md – readme with examples of the module's usage, requirements, input parameters, outputs, all other important information, and warnings if applicable (required)
  • VERSION – version history for the module, with release notes (required)
  • examples – examples for the module; should contain at least the minimal example (required)
  • main.tf – the main module file; contains the resources, locals and data sources that create all resources (required)
  • outputs.tf – contains outputs from the resources created in main.tf (required)
  • test – all unit/integration tests for the Terraform module (required)
  • variables.tf – contains declarations of the variables used in main.tf (required)
  • versions.tf – contains the versions for Terraform itself and for the providers used in the module (required)

General best practices 

Resource/Data name 

Don't repeat the resource type in the resource name, neither partially nor completely.

Bad 

resource "aws_instance" "webserver_aws_instance" {} 

Bad 

resource "aws_instance" "webserver_instance" {} 

Good 

resource "aws_instance" "webserver" {} 

Usage of “this”

A resource should be named this if there is no more descriptive and general name available, or if the resource module creates a single resource of this type (e.g. there is a single resource of type aws_nat_gateway and multiple resources of type aws_route_table, so aws_nat_gateway should be named this and the aws_route_table resources should have more descriptive names – like private, public, database).
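A minimal sketch of this rule (the referenced resources and arguments are illustrative and elided):

```hcl
# single NAT gateway in the module: name it "this"
resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.this.id
  subnet_id     = aws_subnet.public.id
}

# multiple route tables: give them descriptive names instead
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.this.id
}
```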

Nouns 

Always use singular nouns for resource names as well as for data sources.

Usage of "count"

Inside a resource block, count should come first, separated from the other arguments by a blank line:

Bad 

resource "aws_instance" "webserver" {
  ami   = "123456"
  count = 2
}

Good 

resource "aws_instance" "webserver" {
  count = 2

  ami = "123456"
}

Conditions in “count”

count can be computed from other values rather than hard-coded:

count = var.instances_per_subnet * length(module.vpc.private_subnets)

or 

count = var.instance_count 
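count can also carry a condition; a common sketch (the variable and resource names are illustrative) is a bool flag that creates zero or one instances:

```hcl
variable "webserver_enabled" {
  description = "Whether to create the webserver instance"
  type        = bool
  default     = true
}

resource "aws_instance" "webserver" {
  # the conditional turns the flag into an instance count of 0 or 1
  count = var.webserver_enabled ? 1 : 0

  ami           = "ami-123456"
  instance_type = "t3.micro"
}
```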

Placement of tags 

Tags are required for all resources. Place the tags argument last, after all other arguments:

Bad 

resource "aws_instance" "webserver" {
  count = 2

  tags = {
    Name = "…"
  }

  subnet_id              = "…"
  vpc_security_group_ids = ["…"]
}

Good 

resource "aws_instance" "webserver" {
  count = 2

  subnet_id              = "…"
  vpc_security_group_ids = ["…"]

  tags = {
    Name = "…"
  }
}

Variables 

Use upstream module or provider variable names where applicable 

When writing a module that accepts variable inputs, make sure to use the same names as the  upstream to avoid confusion and ambiguity. 

Use all lower-case with underscores as separators 

Avoid introducing any other syntaxes commonly found in other languages, such as camelCase or PascalCase. For consistency we want all variables to look uniform. This is also in line with the HashiCorp naming conventions.

Use positive variable names to avoid double negatives 

All variable inputs that enable/disable a setting should be named …_enabled (e.g. encryption_enabled). It is acceptable for default values to be either false or true.

Use feature flags to enable/disable functionality 

All modules should incorporate feature flags to enable or disable functionality. All feature flags  should end in _enabled and should be of type bool. 

Use description field for all inputs 

All variable inputs need a description field. When the field is provided by an upstream provider  (e.g. terraform-aws-provider), use the same wording as the upstream docs. 

Use sane defaults where applicable 

Modules should be as turnkey as possible. The default value should ensure the most secure  configuration (E.g. with encryption enabled).

Use variables for all secrets, with no default value 

All variable inputs for secrets must never define a default value. This ensures that terraform is  able to validate user input. The exception to this is if the secret is optional and will be generated  for the user automatically when left null or “” (empty). 
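A sketch of a secret input under this rule (the variable name is illustrative); note there is deliberately no default:

```hcl
variable "master_password" {
  description = "Master password for the database. No default, so Terraform always requires a value."
  type        = string
  sensitive   = true # Terraform >= 0.14 hides the value in plan/apply output
}
```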

Use indented HEREDOC syntax 

Using <<-EOT (as opposed to <<EOT without the leading -) ensures the code can be indented in line with the other code in the project. Note that EOT can be any uppercase string (e.g. CONFIG_FILE).
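A sketch of the indented form; <<-EOT strips the leading whitespace that keeps the block aligned with the surrounding code:

```hcl
locals {
  user_data = <<-EOT
    #!/bin/bash
    echo "bootstrapping webserver"
  EOT
}
```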

Do not use HEREDOC for JSON, YAML or IAM Policies 

There are better ways to achieve the same outcome using Terraform interpolations or resources.

For JSON, use a combination of a local and the jsonencode function. 

For YAML, use a combination of a local and the yamlencode function. 

For IAM policy documents, use the native aws_iam_policy_document data source. 
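For example (the resource names, parameter path and bucket ARN are illustrative), a local plus jsonencode replaces a JSON heredoc, and aws_iam_policy_document replaces an IAM policy heredoc:

```hcl
locals {
  app_config = {
    region  = "us-east-1"
    enabled = true
  }
}

# jsonencode turns the local into a JSON string, no heredoc needed
resource "aws_ssm_parameter" "app_config" {
  name  = "/app/config"
  type  = "String"
  value = jsonencode(local.app_config)
}

# the data source renders a valid IAM policy JSON document
data "aws_iam_policy_document" "read_bucket" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-bucket/*"]
  }
}
```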

Do not use long HEREDOC configurations 

Use the built-in templatefile function (or, in older Terraform versions, the template_file data source) instead, and move the configuration to a separate template file.
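A sketch using the built-in templatefile function (the template path and variables are illustrative):

```hcl
resource "aws_instance" "webserver" {
  ami           = "ami-123456"
  instance_type = "t3.micro"

  # the long configuration lives in templates/user_data.sh.tpl,
  # not in an inline heredoc
  user_data = templatefile("${path.module}/templates/user_data.sh.tpl", {
    server_name = "example.com"
    port        = 8080
  })
}
```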

Use proper data types 

Using proper datatypes in terraform makes it easier to validate inputs and document usage. 

  • Use null instead of empty strings ("") 
  • Use bool instead of strings or integers for binary true/false 
  • Use string for freeform text 
  • Use object sparingly as it makes it harder to document and validate 
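A short sketch of these rules (the variable names are illustrative):

```hcl
variable "kms_key_id" {
  description = "KMS key to use for encryption; null means use the default key"
  type        = string
  default     = null # null, not ""
}

variable "versioning_enabled" {
  description = "Enable bucket versioning"
  type        = bool # bool, not "true"/"false" strings
  default     = true
}
```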

Outputs 

Use description field for all outputs 

All outputs must have a description set. The description should be based on (or adapted from) the  upstream terraform provider where applicable. Avoid simply repeating the variable name as the  output description. 
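For example (the names are illustrative):

```hcl
output "instance_id" {
  description = "ID of the EC2 instance" # adapted from the upstream provider docs
  value       = aws_instance.webserver.id
}
```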

Use well-formatted snake case output names 

Avoid introducing any other syntaxes commonly found in other languages, such as camelCase or PascalCase. For consistency we want all outputs to look uniform. It also makes code more consistent when using outputs together with the terraform_remote_state data source to access those settings from across modules.

Never output secrets 

Secrets should never be outputs of modules. Rather, they should be written to secure storage  such as AWS Secrets Manager, AWS SSM Parameter Store with KMS encryption,  or S3 with KMS encryption at rest. Our preferred mechanism on AWS is using the SSM Parameter  Store. Values written to SSM are easily retrieved by other terraform modules, or even on the  command-line using tools like chamber by Segment.io

We are very strict about this in “root” modules (or the top-most module), because these sensitive  outputs are easily leaked in CI/CD pipelines (see tfmask for masking secrets in output only as a  last resort). We are less sensitive to this in modules that are typically nested inside of other  modules. 

Rather than outputting a secret, you may output plain text indicating where the secret is stored,  for example RDS master password is in SSM parameter /rds/master_password. You may also  want to have another output just for the key for the secret in the secret store, so the key is  available to other programs which may be able to retrieve the value given the key. 
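A sketch of this pattern (the resource and parameter names are illustrative): write the secret to SSM and output only its key:

```hcl
resource "random_password" "master" {
  length  = 32
  special = false
}

# the secret itself goes to secure storage, never to an output
resource "aws_ssm_parameter" "master_password" {
  name  = "/rds/master_password"
  type  = "SecureString"
  value = random_password.master.result
}

# output only the key, so other programs can retrieve the value
output "master_password_ssm_key" {
  description = "SSM parameter key where the RDS master password is stored"
  value       = aws_ssm_parameter.master_password.name
}
```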

Use symmetrical names 

We prefer to keep Terraform outputs symmetrical with the upstream resource or module as much as possible, with the exception of prefixes. This reduces ambiguity and entropy in the code while increasing consistency. As an example of what not to do: suppose the upstream module's IAM user outputs are prefixed with user_, and it exposes an output named secret_access_key. We should borrow the upstream's output name and prefix it, making the expected output name user_secret_access_key for consistency.

State 

Use Remote state 

Using remote state is required for Terraform.

Use backend with support for state locking 

We recommend using the S3 backend with DynamoDB for state locking, or similar solutions.

Use an encrypted S3 bucket with versioning, encryption and strict IAM policies 

We recommend not commingling state in the same bucket. This could cause the state to get overridden or compromised. Note that the state contains cached values of all outputs. Wherever possible, keep stages 100% isolated with physical barriers (separate buckets, separate organizations).
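A sketch of such a backend configuration (the bucket, key and table names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-company-terraform-state"
    key            = "prod/network/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true              # encryption at rest
    dynamodb_table = "terraform-locks" # state locking
  }
}
```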

Use Versioning on State Bucket 

Enabling versioning on the remote state bucket is required.

Terraform Toolset 

Security 

Checkov 

Checkov is a static code analysis tool for infrastructure-as-code.

It scans cloud infrastructure provisioned using Terraform, Terraform plan, CloudFormation, Kubernetes, Dockerfile, Serverless or ARM templates and detects security and compliance misconfigurations using graph-based scanning.

Checkov also powers Bridgecrew, the developer-first platform that codifies and streamlines  cloud security throughout the development lifecycle. Bridgecrew identifies, fixes, and prevents  misconfigurations in cloud resources and infrastructure-as-code files. 

Github 

TfSec 

tfsec uses static analysis of your terraform templates to spot potential security issues. Now with  terraform CDK support. 

Github 

terraform-compliance 

terraform-compliance is a lightweight, security and compliance focused test framework against  terraform to enable negative testing capability for your infrastructure-as-code. 

Github 

terrascan 

Terrascan is a static code analyzer for Infrastructure as Code. Terrascan allows you to: 

  • Seamlessly scan infrastructure as code for misconfigurations.
  • Monitor provisioned cloud infrastructure for configuration changes that introduce posture drift, and revert to a secure posture.
  • Detect security vulnerabilities and compliance violations.
  • Mitigate risks before provisioning cloud native infrastructure.
  • Run locally or integrate with your CI/CD.

Github 

Regula 

Regula is a tool that evaluates CloudFormation and Terraform infrastructure-as-code for  potential AWS, Azure, and Google Cloud security and compliance violations prior to  deployment. 

Docs 

Github 

kics 

Find security vulnerabilities, compliance issues, and infrastructure misconfigurations early in the  development cycle of your infrastructure-as-code with KICS by Checkmarx. 

Docs 

Github 

Linting 

TFLint 

A Pluggable Terraform Linter 

Features 

TFLint is a framework and each feature is provided by plugins, the key features are as follows: 

  • Find possible errors (like illegal instance types) for Major Cloud providers  (AWS/Azure/GCP). 
  • Warn about deprecated syntax, unused declarations. 
  • Enforce best practices, naming conventions. 

Github 

config-lint 

A command line tool to validate configuration files using rules specified in YAML. The  configuration files can be one of several formats: Terraform, JSON, YAML, with support for  Kubernetes. There are built-in rules provided for Terraform, and custom files can be used for  other formats.

Github 

Testing 

Hashicorp articles 

Testing hashicorp terraform 

Testing experiment for Terraform 

inspec 

Chef InSpec is an open-source framework for testing and auditing your applications and  infrastructure. Chef InSpec works by comparing the actual state of your system with the desired  state that you express in easy-to-read and easy-to-write Chef InSpec code. Chef InSpec detects  violations and displays findings in the form of a report, but puts you in control of remediation. 

Moving Security and Sanity Left by Testing Terraform with InSpec 

Hashicorp blog 

Github 

Terratest 

Terratest is a Go library that provides patterns and helper functions for testing infrastructure, with first-class support for Terraform, Packer, Docker, Kubernetes, AWS, GCP, and more.

Docs 

Github 

terraform-compliance 

Mentioned once again, this time for its BDD testing capabilities.

Docs 

Rspec-Terraform 

The creation of rspec-terraform was initially intended to smooth the creation and sharing of  common Terraform modules. Some sort of basic testing would ensure a stable and clearly  defined interface for each module. 

Github 

Clarity

A declarative test framework for Terraform 

Github 

Hooks 

pre-commit 

A framework for managing and maintaining multi-language pre-commit hooks.

Main site 

Example of usage 

Installation 

From pip 

pip install pre-commit 

Non-administrative installation: 

curl https://pre-commit.com/install-local.py | python -

In a Python project, add the following to your requirements.txt (or requirements-dev.txt):

pre-commit 

From brew 

brew install pre-commit 

Initializing 

cd tf/module/path 

Create a .pre-commit-config.yaml in the module path.

A simple example for Terraform:

# pre-commit framework
---
default_language_version:
  python: python3

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.2.0
    hooks:
      - id: check-json
      - id: check-merge-conflict
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: pretty-format-json
        args:
          - --autofix
      - id: detect-aws-credentials
      - id: detect-private-key

  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: v1.1.9
    hooks:
      - id: forbid-tabs
        exclude_types:
          - python
          - javascript
          - dtd
          - markdown
          - makefile
          - xml
        exclude: binary|\.bin$

  - repo: https://github.com/jameswoolfenden/pre-commit-shell
    rev: 0.0.2
    hooks:
      - id: shell-lint

  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.23.2
    hooks:
      - id: markdownlint

  - repo: https://github.com/adrienverge/yamllint
    rev: v1.24.2
    hooks:
      - id: yamllint
        name: yamllint
        description: This hook runs yamllint.
        entry: yamllint
        language: python
        types: [file, yaml]

  - repo: https://github.com/jameswoolfenden/pre-commit
    rev: v0.1.33
    hooks:
      - id: terraform-fmt
      - id: checkov-scan
        language_version: python3.7
      - id: tf2docs
        language_version: python3.7

Now you can initialize Git, if you haven't already:

git init 

Now install the hooks from .pre-commit-config.yaml:

pre-commit install 

If you want to run pre-commit without committing your changes:

git add . 

pre-commit run -a 

After running the command above, you should see output like:

Check JSON...............................................Passed
Check for merge conflicts................................Passed
Trim Trailing Whitespace.................................Passed
Fix End of Files.........................................Passed
Check Yaml...............................................Passed
Check for added large files..............................Passed
Pretty format JSON.......................................Passed
Detect AWS Credentials...................................Passed
Detect Private Key.......................................Passed
No-tabs checker..........................................Passed
Shell Syntax Check..................(no files to check)Skipped
markdownlint.............................................Passed
yamllint.................................................Passed
terraform fmt............................................Passed
checkov..................................................Passed

Terraform version manager 

tfenv 

Terraform version manager inspired by rbenv 

Github 

tfswitch

The tfswitch command line tool lets you switch between different versions of terraform. If you  do not have a particular version of terraform installed, tfswitch lets you download the version  you desire. The installation is minimal and easy. Once installed, simply select the version you  require from the dropdown and start using terraform. 

Github 

Helpers 

pyTerrafile 

A Terrafile is a simple YAML config that gives you a single, convenient location that lists all your  external module dependencies. 

The idea is modelled on similar patterns in other languages – e.g. Ruby with  its Gemfile (technically provided by the bundler gem). 

Additionally, tfile supports modules from the Terraform Registry, as well as local modules and  from git. 

Example 

terraform-google-lb:
  source: "GoogleCloudPlatform/lb-http/google"
  version: "4.5.0"

terraform-aws-vpc:
  source: "https://github.com/terraform-aws-modules/terraform-aws-vpc.git"
  version: "v2.64.0"
  provider: aws

Github 

Terraspace 

The Terrafile is where you can define additional modules to add to your Terraspace project. The  modules can be from your own git repositories, other git repositories, or the Terraform Registry. 

Docs 

Wrappers/Frameworks 

Terraspace 

Terraspace is a Terraform Framework that optimizes for infrastructure-as-code happiness. It  provides an organized structure, conventions over configurations, keeps your code DRY, and  adds convenient tooling. Terraspace makes working with Terraform easier and more fun.

Docs 

Github 

Terragrunt 

Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY,  working with multiple Terraform modules, and managing remote state. 

Docs 

Github 

Pretf 

Pretf is a completely transparent, drop-in Terraform wrapper that generates Terraform  configuration with Python. It requires no configuration and no changes to standard Terraform  projects to start using it. 

Docs 

CDK for Terraform 

CDK (Cloud Development Kit) for Terraform allows developers to use familiar programming  languages to define cloud infrastructure and provision it through HashiCorp Terraform. 

Github 

Module generators 

generator-tf-module 

Scaffolding / Boilerplate generator for new Terraform module projects 

Github 

Terraform code generators 

Former2 

Generate CloudFormation / Terraform / Troposphere templates from your existing AWS  resources 

Github 

terraformer

A CLI tool that generates tf/json and tfstate files based on existing infrastructure (reverse  Terraform). 

Github 

Visualization 

Blast Radius 

Blast Radius is a tool for reasoning about Terraform dependency graphs with interactive  visualizations. 

Docs 

Github

Conclusion

Terraform is intricate but extremely powerful if you know all the ins-and-outs.

Our job at GlobalDots is to do that in order to provide the best possible DevOps service to our fast-growing business customers.

Contact us to learn how we can help you grow lean and smart, while keeping your precious resources focused on coding. 
