Terraform Interview Questions 🔥

Q1. What is Terraform and how is it different from other IaC tools?

Terraform is an infrastructure-as-code tool made by HashiCorp that lets you manage your cloud infrastructure with code. You write this code in a declarative language called HCL that describes what you want, and Terraform works out how to create it.

What makes Terraform special compared to other tools is that it can work with many different cloud providers like AWS, Azure, Google Cloud, and even on your own servers. It uses the same language and rules for all of them.

Terraform also has a useful "dry run" feature: the terraform plan command shows you exactly what changes will happen before you apply them, which helps you avoid mistakes or surprises. In addition, Terraform keeps track of the current state of your infrastructure in a state file, which is handy for handling complicated setups and keeping things organized.
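
For example, a typical workflow previews the changes before applying them:

terraform plan    # dry run: show what would change, without touching anything
terraform apply   # apply the planned changes and update the state file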

Q2. How do you call a main.tf module?

In Terraform, the main.tf file is the primary configuration file. It's where you describe the resources you want and how they should be set up. Normally, this file sits in the root folder of a Terraform module.

To use a Terraform module that has a main.tf file, you call it from your configuration with a "module" block. Inside that block, you specify where the module lives using the source argument; it can be a local path or a remote source such as a Git repository or the Terraform Registry. You also give the module a name, so you can refer to it elsewhere in your Terraform code.

For example:

module "my_module" {
  source = "./my_module_directory"
}

Here, "my_module" is the special name for the module, and "./my_module_directory" is the place where the main.tf file of the module is kept. This tells Terraform to use the things you described in the main.tf file of that module.

Q3. What exactly is Sentinel? Can you provide a few examples of where Sentinel policies can be used?

Sentinel is HashiCorp's policy-as-code framework. It lets you define rules that your infrastructure code must follow before it is deployed, so your setup stays secure, compliant, well-behaved, and cost-effective. You use Sentinel to write policies that are enforced against your Terraform runs, for example in Terraform Cloud or Terraform Enterprise.

With Sentinel, you can have rules for many things, like making sure your setup follows compliance standards, is secure, follows specific naming patterns, doesn't use too many resources, and is cost-efficient.

Here are some examples:

  1. Compliance: A rule that checks if a virtual machine is placed in a specific region to follow data regulations.

  2. Security: A rule to ensure that storage accounts are encrypted using a specific method.

  3. Naming conventions: A rule to check if a resource group name has a certain starting part.

  4. Resource limits: A rule that stops people from creating too many virtual machines in a specific area.

  5. Cost optimization: A rule to check if resources are labeled correctly for business reasons.

By using Sentinel, you make sure your infrastructure code is doing what it should and following the rules you set.

Q4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?

In Terraform, if you need several copies of the same resource, you can do it in two ways: with the count meta-argument or the for_each meta-argument. count tells Terraform how many identical copies you want, while for_each creates one instance for each item in a map or set of strings.

So, instead of writing the same code over and over, you use these blocks to make your setup cleaner and more organized.
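
For example, a rough sketch of both approaches (the AMI ID and resource names are hypothetical):

resource "aws_instance" "server" {
  count         = 3                        # creates server[0], server[1], server[2]
  ami           = "ami-0123456789abcdef0"  # hypothetical AMI ID
  instance_type = "t3.micro"
}

resource "aws_instance" "app" {
  for_each      = toset(["dev", "staging", "prod"])  # one instance per item in the set
  ami           = "ami-0123456789abcdef0"            # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "app-${each.key}"
  }
}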

Q5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?

A. Set the environment variable TF_LOG=TRACE

B. Set verbose logging for each provider in your Terraform configuration

C. Set the environment variable TF_VAR_log=TRACE

D. Set the environment variable TF_LOG_PATH

Setting the environment variable TF_LOG=TRACE is the correct option for enabling debug messages in Terraform. Here's an explanation:

A. Set the environment variable TF_LOG=TRACE:

  • Terraform uses the TF_LOG environment variable to control the logging verbosity.

  • Setting TF_LOG=TRACE will enable detailed debug logging, including information about which paths Terraform is using to load providers referenced in your Terraform configuration.

  • This is a global setting that affects the overall Terraform execution, providing comprehensive logs for troubleshooting and understanding the internal processes (see the example after this list).
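
For example, on a Unix-like shell (TF_LOG_PATH, option D, is optional and only controls where the log is written, not whether debug logging is on):

export TF_LOG=TRACE                 # enable the most verbose log level
export TF_LOG_PATH=./terraform.log  # optional: write logs to a file instead of stderr
terraform init                      # provider-loading paths now appear in the logs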

Q6. The command below will destroy everything that has been created in the infrastructure. Tell us how you would save a particular resource while destroying the complete infrastructure.

terraform destroy

When you run terraform destroy, Terraform tries to remove everything it manages. The -target option narrows the run to specific resources, so it is really a way to destroy only the things you name rather than a way to skip them.

For instance, this command destroys only the aws_instance.example resource and leaves everything else as it is:

terraform destroy -target=aws_instance.example

If what you actually want is to destroy the whole infrastructure while keeping one resource, you can first remove that resource from state with terraform state rm aws_instance.example (Terraform then stops managing it, so the destroy won't touch it), or you can mark it with prevent_destroy in a lifecycle block so Terraform refuses to delete it.
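
A minimal sketch of the lifecycle guard (the resource name and AMI ID are hypothetical); note that a destroy plan that includes this resource will fail with an error rather than delete it:

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"  # hypothetical AMI ID
  instance_type = "t3.micro"

  lifecycle {
    prevent_destroy = true  # Terraform rejects any plan that would destroy this resource
  }
}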

Q7. Which module is used to store .tfstate file in S3?

Strictly speaking, storing the .tfstate file in S3 is not done with a module but with Terraform's built-in s3 backend. You configure it in a backend "s3" block inside the terraform block of your configuration, and Terraform then saves its state to the bucket you specify instead of keeping it on local disk.
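
A minimal sketch, assuming a pre-existing bucket (the bucket and key names are hypothetical):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # hypothetical bucket name, must already exist
    key    = "prod/terraform.tfstate"  # path of the state object inside the bucket
    region = "us-east-1"
    # optionally add dynamodb_table = "terraform-locks" for state locking
  }
}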

Q8. How do you manage sensitive data in Terraform, such as API keys or passwords?

In Terraform, if you have data that needs to stay private, like API keys or passwords, you can handle it in a few ways:

  1. Terraform input variables: You can declare input variables for sensitive values and mark them as sensitive so Terraform hides them in its output. The values are then supplied at run time (via a prompt, a .tfvars file, or an environment variable) instead of being hard-coded (see the sketch after this list).

  2. Environment variables: Another way is to save sensitive data in environment variables and then refer to them in your Terraform setup. This keeps the private stuff out of the main configuration.

  3. External secret tools: For even more security, you can use external tools like Vault or AWS Secrets Manager. These tools specialize in keeping secrets safe.

  4. Encrypted state files: If your sensitive data is in the Terraform state files, you can encrypt them. Tools like sops or age can help with that, making sure that even if someone gets the state file, they can't easily see the secret stuff.
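
A minimal sketch of the first two approaches, using a hypothetical db_password variable; the variable is marked sensitive and its value is supplied through an environment variable instead of being written into a .tf file:

variable "db_password" {
  type      = string
  sensitive = true  # hides the value in plan and apply output
}

export TF_VAR_db_password="example-only"  # hypothetical value; in practice inject it from a secrets manager or CI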

Q9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?

To set up an S3 bucket and a user with read and write rights to the bucket using Terraform, you can use these resources:

  • aws_s3_bucket: This resource specifies the S3 bucket and its settings, such as its name, access control, and versioning.

  • aws_iam_user: This resource defines the user and its properties, including the username and permissions.

  • aws_iam_access_key: This resource generates access keys for the user, which are used for the user's authentication when accessing the S3 bucket.

  • aws_s3_bucket_policy: This resource outlines the permissions granted to the user on the S3 bucket.

Here's an example Terraform setup file:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "example_bucket" {
  bucket = "example-bucket"
  acl    = "private"
}

resource "aws_iam_user" "example_user" {
  name = "example-user"
}

resource "aws_iam_access_key" "example_access_key" {
  user = aws_iam_user.example_user.name
}

resource "aws_s3_bucket_policy" "example_policy" {
  bucket = aws_s3_bucket.example_bucket.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          AWS = aws_iam_user.example_user.arn
        }
        Action = [
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = "${aws_s3_bucket.example_bucket.arn}/*"
      }
    ]
  })
}

This Terraform configuration file sets up an S3 bucket with a private access control list, creates an IAM user named "example-user," generates access keys for the user, and establishes a bucket policy that permits the user to read and write objects in the bucket.

Q10. Who maintains Terraform providers?

Terraform providers are maintained by a mix of groups: HashiCorp itself looks after the official providers (such as those for Amazon Web Services, Google Cloud Platform, and Microsoft Azure), technology partners maintain providers for their own products, and the open-source community contributes code, bug reports, and feature requests through each provider's GitHub repository. Providers are versioned and released independently of the main Terraform project, which keeps the ecosystem flexible and easy to extend.

Q11. How can we export data from one module to another?

In Terraform, you can share information from one module to another by using outputs. Outputs let you expose values from a module so that the calling configuration or other modules can use them.

To send data from one module, you define an output in the outputs.tf file of that module and set a value. For instance:

output "my_output" {
  value = "some value"
}

Then, in the configuration that calls this module, you can access the value with the syntax module.<module_name>.<output_name>.

Here's an example:

module "my_module" {
  source = "./my_module"

  my_output = module.other_module.my_output
}

This piece of code takes the value of my_output from other_module (declared elsewhere in the same configuration) and passes it into my_module as an input variable.
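
For this to work, my_module must declare a matching input variable, for example in its variables.tf (the type here is an assumption):

variable "my_output" {
  type = string  # match the type of the value being passed in
}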


Thanks for reading this far. See you in the next one.
