Terraform Resource Trigger Always Upload Aws S3 Bucket Object
Amazon S3 Bucket is a storage service offered by AWS for storing data lakes, websites, mobile applications, backup and restore, archives, enterprise applications, etc. S3 stands for Simple Storage Service, which can be scaled based on individual or organizational needs. On top of providing a storage solution, Amazon S3 also provides comprehensive access management, which can help you set very granular-level permissions.
This blog post is all about managing the AWS S3 bucket using Terraform. Terraform provides three S3 resources:
- aws_s3_bucket
- aws_s3_bucket_object
- aws_s3_bucket_public_access_block
These resources are used for managing the S3 bucket, but exhibit different characteristics which we will explore in this post.
AWS S3 bucket supports versioning, replication, encryption, ACL (access control list), and bucket object policy. Here is the list of S3 tasks which we are going to complete using Terraform:
- Setting up AWS Access Credentials (prerequisite).
- Using aws_s3_bucket resource to create S3 Bucket.
- Uploading files to S3 bucket using aws_s3_bucket_object.
- Managing ACL (Access Control List) using aws_s3_bucket_public_access_block.
- Deleting the S3 bucket using Terraform.
Let's start.
1. Set AWS Access Credentials
Terraform needs an Access Key and Secret Key to work with AWS resources. However, AWS provides these as static plain-text credentials, which should not be stored directly in your Terraform files.
There are a couple of ways to handle this problem:
- Using Spacelift AWS Integration with IAM roles. Spacelift provides AWS integration out of the box. Here is a comprehensive guide from Spacelift which can help you integrate with AWS: AWS Integration Tutorial
- The second way would be generating AWS access credentials dynamically based on IAM policies and storing them in Vault (Hashicorp Vault).
The above-mentioned methods will help you integrate with AWS in a more secure way.
Spacelift Programmatic Setup of IAM Role: If you are using Spacelift, then here is the Terraform code snippet which you should integrate with your existing Terraform infrastructure code base.
```hcl
# Creating a Spacelift stack.
resource "spacelift_stack" "managed-stack" {
  name       = "Stack managed by Spacelift"
  repository = "my-awesome-repo"
  branch     = "master"
}

# Creating an IAM role.
resource "aws_iam_role" "managed-stack-role" {
  name = "spacelift-managed-stack-role"

  # Setting up the trust relationship.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      jsondecode(spacelift_stack.managed-stack.aws_assume_role_policy_statement)
    ]
  })
}

# Attaching a powerful administrative policy to the stack role.
resource "aws_iam_role_policy_attachment" "managed-stack-role" {
  role       = aws_iam_role.managed-stack-role.name
  policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}

# Linking the AWS role to the Spacelift stack.
resource "spacelift_stack_aws_role" "managed-stack-role" {
  stack_id = spacelift_stack.managed-stack.id
  role_arn = aws_iam_role.managed-stack-role.arn
}
```

Hashicorp Vault Programmatic Setup: If you are using Hashicorp Vault, here's the Terraform code snippet which defines the AWS IAM role for managing an S3 bucket.
```hcl
variable "aws_access_key" {}
variable "aws_secret_key" {}

variable "name" {
  default = "dynamic-aws-creds-vault-admin"
}

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

provider "vault" {}

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:*", "ec2:*", "s3:*"],
      "Resource": "*"
    }
  ]
}
EOF
}

output "backend" {
  value = vault_aws_secret_backend.aws.path
}

output "role" {
  value = vault_aws_secret_backend_role.admin.name
}
```
2. Use aws_s3_bucket Resource to Create S3 Bucket
After setting up the credentials, let's use the Terraform aws_s3_bucket resource to create the first S3 bucket.
The S3 Bucket name we are going to use is – spacelift-test1-s3.
Here are the arguments needed for creating the S3 bucket:
- region—Specify the name of the region.
- bucket—Name the bucket i.e. – spacelift-test1-s3.
- acl—Access control list. We will set the S3 access as private.
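If you are not using the Vault or Spacelift setup from Step 1, here is a minimal sketch of what the bucket creation boils down to, assuming your AWS credentials are supplied outside the configuration (for example via the standard AWS environment variables or shared credentials file):

```hcl
provider "aws" {
  region = "eu-central-1" # pick the region you want the bucket in
}

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3" # bucket names must be globally unique
  acl    = "private"            # private access control list
}
```

The full example below wraps this same bucket resource with the Vault-based credential lookup.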
Create a Terraform file named – main.tf and use the following Terraform code snippet:
```hcl
variable "name" {
  default = "dynamic-aws-creds-operator"
}

variable "region" {
  default = "eu-central-1"
}

variable "path" {
  default = "../vault-admin-workspace/terraform.tfstate"
}

variable "ttl" {
  default = "1"
}

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}

data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}

provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"
}
```

Along with main.tf, let's create version.tf for the AWS and Vault provider versions.
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.23.0"
    }
  }
}
```

If you are going to use Hashicorp Vault instead of Spacelift, then you must also add the Hashicorp Vault provider inside the same required_providers block:

```hcl
    vault = {
      source  = "hashicorp/vault"
      version = "2.17.0"
    }
```

Let's apply the above Terraform configuration using Terraform commands:
1. $ terraform init – This is the first command we are going to run.
2. $ terraform plan – The second command would be to run a Terraform plan. This command will tell you how many AWS resources are going to be added, changed or destroyed.
3. $ terraform apply – Apply the Terraform configuration using the terraform apply command, which will eventually create an S3 bucket in AWS.
3. Upload Files to S3 Bucket Using aws_s3_bucket_object
In Step 2 we saw how to create an S3 bucket using the aws_s3_bucket Terraform resource. In this step, we are going to use the same S3 bucket (spacelift-test1-s3) to upload files into.
When we want to perform some additional operations (e.g. uploading files) on the S3 bucket, we are going to use the aws_s3_bucket_object Terraform resource.
For uploading the files to the S3 bucket we will extend the existing Terraform script from Step 2 with a new aws_s3_bucket_object resource block.
We are going to upload two sample text files:
- test1.txt
- test2.txt
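Before wiring the upload into the full configuration below, here is a minimal sketch of aws_s3_bucket_object for a single file, assuming the bucket resource from Step 2 and a local uploads/ directory:

```hcl
# Minimal sketch: upload one local file to the bucket created in Step 2.
resource "aws_s3_bucket_object" "example" {
  bucket = aws_s3_bucket.spacelift-test1-s3.id # target bucket ID
  key    = "test1.txt"                         # object name inside the bucket
  source = "uploads/test1.txt"                 # local file path to upload
}
```

The full example below generalizes this to every file in the directory using for_each.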
Here is the screenshot of my project structure for uploading files, which includes main.tf along with the test1.txt and test2.txt files.
As you can see from the project structure, I have kept my test files under the directory uploads, so I need to mention the relative path inside my Terraform file (main.tf).
Here is my Terraform file:
```hcl
variable "name" {
  default = "dynamic-aws-creds-operator"
}

variable "region" {
  default = "eu-central-1"
}

variable "path" {
  default = "../vault-admin-workspace/terraform.tfstate"
}

variable "ttl" {
  default = "1"
}

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}

data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}

provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"
}

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key    = each.value
  source = "uploads/${each.value}"
}
```

Here are some additional notes for the above-mentioned Terraform file:
- for_each = fileset("uploads/", "*") – A for-each loop iterating over the files located under the uploads directory.
- bucket = aws_s3_bucket.spacelift-test1-s3.id – The ID of the S3 bucket which we created in Step 2.
- key = each.value – You have to assign a key for the name of the object, once it's in the bucket.
- source = "uploads/${each.value}" – Path of the files which will be uploaded to the S3 bucket.
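One caveat worth knowing: with only key and source set, Terraform does not detect when the *content* of a local file changes, so edited files are not re-uploaded on the next apply. A common workaround (a sketch, not part of the original example) is to add an etag derived from the file's MD5 hash, which forces an update whenever the content changes:

```hcl
resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key    = each.value
  source = "uploads/${each.value}"

  # When a file's content changes, its MD5 changes too, so Terraform
  # sees a diff and re-uploads that object on the next apply.
  etag = filemd5("uploads/${each.value}")
}
```

Note that the etag comparison works reliably only for plain (non-multipart, non-KMS-encrypted) uploads; for those cases the provider documents other triggers.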
How to Apply the New Changes?
Since we are working in the same main.tf file and we have added a new Terraform resource block aws_s3_bucket_object, we can start with the terraform plan command:
1. $ terraform plan – This command will show that two new resources (test1.txt, test2.txt) are going to be added to the S3 bucket. Because we have previously created the S3 bucket, this time it will only add the new resources.
2. $ terraform apply – Run the terraform apply command and you should be able to upload the files to the S3 bucket.
Here is the screenshot of the S3 bucket from the AWS console:
There are many more things that you can do with Terraform and the S3 bucket. Here is a guide on how to rename an AWS S3 bucket in Terraform which can help you rename your S3 bucket.
4. Manage ACL (Access Control List) Using aws_s3_bucket_public_access_block
Now, after uploading the files to an S3 bucket, the next Terraform resource which we are going to talk about is aws_s3_bucket_public_access_block. This resource helps you manage the public access associated with your S3 bucket.
By default, each of its settings is false, which means public ACLs (Access Control Lists) and public policies are allowed. If you want to restrict public access, you have to set the values to true.
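On its own, a minimal sketch of the resource looks like this (assuming the spacelift-test1-s3 bucket from Step 2; all four arguments default to false):

```hcl
resource "aws_s3_bucket_public_access_block" "app" {
  bucket = aws_s3_bucket.spacelift-test1-s3.id

  block_public_acls       = true # reject requests that set public ACLs
  block_public_policy     = true # reject public bucket policies
  ignore_public_acls      = true # ignore any existing public ACLs
  restrict_public_buckets = true # restrict access to the bucket owner and AWS services
}
```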
Here, we are going to take the same example which we used previously for uploading the files to an S3 bucket:
```hcl
variable "name" {
  default = "dynamic-aws-creds-operator"
}

variable "region" {
  default = "eu-central-1"
}

variable "path" {
  default = "../vault-admin-workspace/terraform.tfstate"
}

variable "ttl" {
  default = "1"
}

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}

data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}

provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"
}

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key    = each.value
  source = "uploads/${each.value}"
}

resource "aws_s3_bucket_public_access_block" "app" {
  bucket = aws_s3_bucket.spacelift-test1-s3.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

You can see in the above example that we have restricted public access with the following settings:
- block_public_acls = true
- block_public_policy = true
- ignore_public_acls = true
- restrict_public_buckets = true
With the help of aws_s3_bucket_public_access_block, you can manage the public Access Control List on your S3 bucket.
5. Delete S3 Bucket Using Terraform
In the previous steps, we have seen how to create an S3 bucket and how to upload files to it using the Terraform aws_s3_bucket and aws_s3_bucket_object resources.
In this section, you will see how we can delete the S3 bucket once we are done working with it.
When you are using Terraform, the deletion part is always easy. You just need to run the universal $ terraform destroy command and it will delete all the resources which you have created previously.
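Two caveats apply to destroying buckets, and the sketch below (an illustrative variation on the Step 2 bucket, not part of the original walkthrough) shows how to handle both: by default, a destroy fails if the bucket still contains objects Terraform does not manage, and conversely an important bucket can be destroyed all too easily by accident:

```hcl
resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"

  # Allow terraform destroy to delete the bucket even if it still
  # contains objects that Terraform itself did not create.
  force_destroy = true

  # Alternatively, protect a critical bucket from accidental deletion;
  # with this set, terraform destroy errors out instead of removing it:
  # lifecycle {
  #   prevent_destroy = true
  # }
}
```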
As you can see in the screenshot, Terraform has deleted the resources in reverse order of creation, starting with test1.txt and test2.txt, and finally the bucket spacelift-test1-s3.
Key Points
Using Terraform to create an S3 bucket is relatively simple, but it is not recommended to use Terraform for uploading thousands of files into the S3 bucket. Terraform is an infrastructure provisioning tool and should not be used for performing data-intensive tasks. This blog is a comprehensive guide to getting yourself familiar with Terraform and the S3 bucket.
Source: https://spacelift.io/blog/terraform-aws-s3-bucket