tfedit

Features

Easily refactor Terraform configurations in a scalable way.

  • CLI-friendly: Read HCL from stdin, apply filters, and write the results to stdout, making it easy to pipe to and combine with other commands.
  • Keep comments: Update many existing Terraform configurations while preserving comments as much as possible.
  • Built-in operations:
    • filter awsv4upgrade: Upgrade configurations to AWS provider v4.
  • Generate a migration file for state operations: Read a Terraform plan file in JSON format and generate a migration file in tfmigrate HCL format. Currently, only import actions required by awsv4upgrade are supported.

Although the initial goal of this project is to provide a way to bulk-refactor the aws_s3_bucket resource as required by breaking changes in AWS provider v4, the project scope is not limited to specific use cases. It's by no means intended to be an upgrade tool for all your providers. Instead of covering everything you need, it provides reusable building blocks for Terraform refactoring and shows examples of how to compose them in real-world use cases.

awsv4upgrade

Overview

In short, given the following Terraform configuration file for the AWS provider v3:

$ cat ./test-fixtures/awsv4upgrade/aws_s3_bucket/simple/main.tf
resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
  acl    = "private"
}

Apply a filter for awsv4upgrade:

$ tfedit filter awsv4upgrade -f ./test-fixtures/awsv4upgrade/aws_s3_bucket/simple/main.tf
resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}

You can see the acl argument has been split into an aws_s3_bucket_acl resource compatible with the AWS provider v4.

To resolve the conflict between the configuration and the existing state, you need to import the new resource. You can run the terraform import command directly, but if you prefer to check the upgrade results without updating the remote state, use tfmigrate, which allows you to run the terraform import command in a declarative way.

Generate a migration file for importing the new resource from a Terraform plan:

$ terraform plan -out=tmp.tfplan
$ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
$ cat tfmigrate_fromplan.hcl
migration "state" "fromplan" {
  actions = [
    "import aws_s3_bucket_acl.example tfedit-test,private",
  ]
}
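Each entry in the actions list is a tfmigrate state action that corresponds to a terraform subcommand with the same arguments; the import ID generated above (tfedit-test,private) is the bucket name and the canned ACL joined by a comma. As a hand-written sketch (not tfmigrate's actual implementation), the action above maps to:

```shell
# Hypothetical illustration: a "import" action in the migration file
# corresponds to a terraform import command with the same arguments.
action="import aws_s3_bucket_acl.example tfedit-test,private"
echo "terraform ${action}"
# → terraform import aws_s3_bucket_acl.example tfedit-test,private
```

tfmigrate runs the equivalent command against a temporary local copy of the state, which is why you can plan the migration without touching the remote state.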

Run the tfmigrate plan command to check whether the terraform plan command reports no changes after the migration, without updating the remote state:

# tfmigrate plan tfmigrate_fromplan.hcl
(snip.)
YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator plan success!
# echo $?
0

If it looks good, apply it:

# tfmigrate apply tfmigrate_fromplan.hcl
(snip.)
YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator apply success!
# echo $?
0

This is a brief overview of what tfedit does; an executable example is described later.

If you are not ready for the upgrade, you can pin version constraints in your Terraform configurations with tfupdate.

Implementation status:

Some rules for the AWS provider v4 upgrade have not been implemented yet. The current implementation status is as follows:

S3 Bucket Refactor

  • Arguments of aws_s3_bucket resource
    • acceleration_status
    • acl
    • cors_rule
    • grant
    • lifecycle_rule
    • logging
    • object_lock_configuration rule
    • policy
    • replication_configuration
    • request_payer
    • server_side_encryption_configuration
    • versioning
    • website
  • Meta arguments of resource
    • provider
    • count
    • for_each
    • dynamic
  • Rename references in an expression to new resource type
  • Generate import commands for new split resources

New Provider Arguments

  • Arguments of provider aws
    • s3_force_path_style

Known limitations:

  • Some arguments changed not only their names but also their valid values. In this case, if the value of an argument is a variable rather than a literal, it's impossible to rewrite the value automatically, since it could be passed from outside the module or even overwritten at runtime. If the value is not a literal, you need to change it yourself. The following arguments have this limitation:
    • grant:
      • permissions: The permissions attribute of the grant block was a list in v3, but in v4 each permission must be set in its own grant block. If the permissions attribute is passed as a variable or generated by a function, it cannot be split automatically.
    • lifecycle_rule:
      • enabled = true => status = "Enabled"
      • enabled = false => status = "Disabled"
      • transition:
        • date = "2022-12-31" => date = "2022-12-31T00:00:00Z"
      • expiration:
        • date = "2022-12-31" => date = "2022-12-31T00:00:00Z"
    • object_lock_configuration:
      • object_lock_configuration.object_lock_enabled = "Enabled" => object_lock_enabled = true
    • versioning:
      • enabled = true => status = "Enabled"
      • enabled = false => The result also depends on the current status of your bucket. Set status = "Suspended" or use for_each to avoid creating the aws_s3_bucket_versioning resource.
      • mfa_delete = true => mfa_delete = "Enabled"
      • mfa_delete = false => mfa_delete = "Disabled"
  • Some arguments cannot be converted correctly without knowing the current state of your AWS resources. tfedit never calls the AWS API on your behalf; you have to check them yourself. The following arguments have this limitation:
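To illustrate the versioning value mappings listed above, here is a hand-written before/after sketch of upgrading a literal enabled = true (resource names follow the examples in this document; the actual rewrite is performed by the awsv4upgrade filter):

```hcl
# Before (AWS provider v3): versioning is an inline block with a boolean
resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"

  versioning {
    enabled = true
  }
}

# After (AWS provider v4): a separate resource, with the boolean
# enabled = true mapped to the string status = "Enabled"
resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

When enabled is a variable rather than a literal, this boolean-to-string mapping is exactly what cannot be done automatically.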

Example

We recommend trying the example in a sandbox environment first, where it is safe to run the terraform and tfmigrate commands without any credentials. The sandbox environment mocks the AWS API with localstack and doesn't actually create any resources, so you can safely and easily see how it works.

Build a sandbox environment with docker compose and run bash:

$ git clone https://github.com/minamijoyo/tfedit
$ cd tfedit/
$ docker compose build
$ docker compose run --rm tfedit /bin/bash

In the sandbox environment, create and initialize a working directory from test fixtures:

# mkdir -p tmp/dir1 && cd tmp/dir1
# terraform init -from-module=../../test-fixtures/awsv4upgrade/aws_s3_bucket/simple/
# cat main.tf

This example contains a simple aws_s3_bucket resource:

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
  acl    = "private"
}

Apply it and create the aws_s3_bucket resource with the AWS provider v3.74.3, which is the last version without deprecation warnings:

# terraform -v
Terraform v1.1.8
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.74.3

# terraform apply -auto-approve
# terraform state list
aws_s3_bucket.example

Then, let's upgrade the AWS provider to the latest v4.x. We recommend upgrading to v4.9.0 or later, because versions before v4.9.0 include some breaking changes. To update the provider version constraint, you can of course edit the required_providers block in config.tf with your text editor, but it's easier with tfupdate:

# tfupdate provider aws -v "~> 4.9" .
# terraform init -upgrade
# terraform -v
Terraform v1.1.8
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.9.0

You can see a deprecation warning as follows:

# terraform validate
╷
│ Warning: Argument is deprecated
│
│   with aws_s3_bucket.example,
│   on main.tf line 3, in resource "aws_s3_bucket" "example":
│    3:   acl    = "private"
│
│ Use the aws_s3_bucket_acl resource instead
╵
Success! The configuration is valid, but there were some validation warnings as shown above.

Now it's time to upgrade the Terraform configuration for AWS provider v4 compatibility with tfedit:

# tfedit filter awsv4upgrade -u -f main.tf
# cat main.tf

You can see the acl argument has been split into an aws_s3_bucket_acl resource:

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}

You can also see that the deprecation warning has been resolved:

# terraform validate
Success! The configuration is valid.

At this point, if you run the terraform plan command, you can see that a new aws_s3_bucket_acl resource will be created:

# terraform plan
(snip.)
Plan: 1 to add, 0 to change, 0 to destroy.

To resolve the conflict between the configuration and the existing state, you need to import the new resource. You can run the terraform import command directly, but if you prefer to check the upgrade results without updating the remote state, use tfmigrate, which allows you to run the terraform import command in a declarative way.

Generate a migration file for importing the new resource from a Terraform plan:

$ terraform plan -out=tmp.tfplan
$ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
$ cat tfmigrate_fromplan.hcl
migration "state" "fromplan" {
  actions = [
    "import aws_s3_bucket_acl.example tfedit-test,private",
  ]
}

Run the tfmigrate plan command to check whether the terraform plan command reports no changes after the migration, without updating the remote state:

# tfmigrate plan tfmigrate_fromplan.hcl
(snip.)
YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator plan success!
# echo $?
0

If it looks good, apply it:

# tfmigrate apply tfmigrate_fromplan.hcl
(snip.)
YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator apply success!
# echo $?
0

The tfmigrate apply command computes a new state and pushes it to the remote state. It will fail if the terraform plan command detects any diffs with the new state.

Finally, you can confirm that the latest remote state has no changes with the terraform plan command in v4:

# terraform plan
(snip.)
No changes. Infrastructure is up-to-date.

# terraform state list
aws_s3_bucket.example
aws_s3_bucket_acl.example

To clean up the sandbox environment:

# terraform destroy -auto-approve
# cd ../../
# rm -rf tmp/dir1
# exit
$ docker compose down

Tip: If something looks wrong, you can run the awslocal command, which is configured to call AWS APIs against the localstack endpoint:

$ docker exec -it tfedit_localstack_1 awslocal s3api list-buckets

Install

Homebrew

If you are a macOS user:

$ brew install minamijoyo/tfedit/tfedit

Download

Download the latest compiled binary and put it anywhere in your executable path.

https://github.com/minamijoyo/tfedit/releases

Source

If you have a Go 1.22+ development environment:

$ go install github.com/minamijoyo/tfedit@latest
$ tfedit version

Usage

$ tfedit --help
A refactoring tool for Terraform

Usage:
  tfedit [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  filter      Apply a built-in filter
  help        Help about any command
  migration   Generate a migration file for state operations
  version     Print version

Flags:
  -h, --help   help for tfedit

Use "tfedit [command] --help" for more information about a command.
$ tfedit filter --help
Apply a built-in filter

Usage:
  tfedit filter [flags]
  tfedit filter [command]

Available Commands:
  awsv4upgrade Apply a built-in filter for awsv4upgrade

Flags:
  -h, --help   help for filter

Global Flags:
  -f, --file string   A path of input file (default "-")
  -u, --update        Update files in-place

Use "tfedit filter [command] --help" for more information about a command.
$ tfedit filter awsv4upgrade --help
Apply a built-in filter for awsv4upgrade

Upgrade configurations to AWS provider v4.

Usage:
  tfedit filter awsv4upgrade [flags]

Flags:
  -h, --help   help for awsv4upgrade

Global Flags:
  -f, --file string   A path of input file (default "-")
  -u, --update        Update files in-place

By default, the input is read from stdin and the output is written to stdout. You can also read a file with the -f flag and update the file in-place with the -u flag.

$ tfedit migration --help
Generate a migration file for state operations

Usage:
  tfedit migration [flags]
  tfedit migration [command]

Available Commands:
  fromplan    Generate a migration file from Terraform JSON plan file

Flags:
  -h, --help   help for migration

Use "tfedit migration [command] --help" for more information about a command.
$ tfedit migration fromplan --help
Generate a migration file from Terraform JSON plan file

Read a Terraform plan file in JSON format and
generate a migration file in tfmigrate HCL format.
Currently, only import actions required by awsv4upgrade are supported.

Usage:
  tfedit migration fromplan [flags]

Flags:
  -d, --dir string    Set a dir attribute in a migration file
  -f, --file string   A path to input Terraform JSON plan file (default "-")
  -h, --help          help for fromplan
  -o, --out string    Write a migration file to a given path (default "-")

By default, the input is read from stdin and the output is written to stdout. You can also read a file with the -f flag and write a file with the -o flag.

License

MIT

Issues

aws_s3_bucket_versioning with mfa_delete needs status = Enabled

resource "aws_s3_bucket" "global_cloudtrail_logs" {
  bucket        = "cloudtrail-logs"

  versioning {
    mfa_delete = true
  }
}

Is translated to:

resource "aws_s3_bucket" "global_cloudtrail_logs" {
  bucket = "cloudtrail-logs"
}

resource "aws_s3_bucket_versioning" "global_cloudtrail_logs" {
  bucket = aws_s3_bucket.global_cloudtrail_logs.id

  versioning_configuration {
    mfa_delete = "Enabled"
  }
}

This isn't quite correct; it should also have status:

resource "aws_s3_bucket_versioning" "whim_global_cloudtrail_logs" {
  bucket = aws_s3_bucket.whim_global_cloudtrail_logs.id

  versioning_configuration {
    status     = "Enabled"
    mfa_delete = "Enabled"
  }
}

Relates to #40

Preserve comments on noncurrent_version_expiration

resource "aws_s3_bucket" "mybucket" {
  bucket = "mybucket"

  lifecycle_rule {
    id      = "cleanup"
    enabled = true

    expiration {
      days = 14 # mark as expired 14 days after creation
    }

    noncurrent_version_expiration {
      days = 14 # delete expired 14 days after they expired
    }
  }
}

loses the comments on noncurrent_version_expiration after filtering

resource "aws_s3_bucket" "mybucket" {
  bucket = "mybucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "mybucket" {
  bucket = aws_s3_bucket.mybucket.id

  rule {
    id = "cleanup"

    expiration {
      days = 14 # mark as expired 14 days after creation
    }

    noncurrent_version_expiration {
      noncurrent_days = 14
    }
    status = "Enabled"

    filter {
      prefix = ""
    }
  }
}

aws_s3_bucket_website_configuration: An argument named "routing_rules" is not expected here.

Summary

In AWS v3, aws_s3_bucket.website.routing_rules is a string containing a JSON array of routing rules.
In AWS v4, aws_s3_bucket_website_configuration.routing_rule is a block. We need to parse the JSON and build the corresponding block representation.

https://registry.terraform.io/providers/hashicorp/aws/3.74.3/docs/resources/s3_bucket#website
https://registry.terraform.io/providers/hashicorp/aws/4.14.0/docs/resources/s3_bucket_website_configuration

Version

$ tfedit version
0.0.3

$ terraform -v
Terraform v1.1.9
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.14.0

Expected behavior

tmp/iss30/main.tf

before

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"

  website {
    index_document = "index.html"
    error_document = "error.html"

    routing_rules = <<EOF
[{
    "Condition": {
        "KeyPrefixEquals": "docs/"
    },
    "Redirect": {
        "ReplaceKeyPrefixWith": "documents/"
    }
}]
EOF
  }
}
$ cat tmp/iss30/main.tf | tfedit filter awsv4upgrade

after

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_website_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }

  routing_rule {
    condition {
      key_prefix_equals = "docs/"
    }
    redirect {
      replace_key_prefix_with = "documents/"
    }
  }
}

Actual behavior

$ cat tmp/iss30/main.tf | tfedit filter awsv4upgrade
resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_website_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }


  routing_rules = <<EOF
[{
    "Condition": {
        "KeyPrefixEquals": "docs/"
    },
    "Redirect": {
        "ReplaceKeyPrefixWith": "documents/"
    }
}]
EOF
}

aws_s3_bucket_lifecycle_configuration: The argument "id" is required, but no definition was found

I noticed that in AWS v3, the aws_s3_bucket.lifecycle_rule.id argument is optional and computed.
https://registry.terraform.io/providers/hashicorp/aws/3.74.3/docs/resources/s3_bucket#lifecycle_rule

However, in AWS v4, the aws_s3_bucket_lifecycle_configuration.rule.id argument is required.
https://registry.terraform.io/providers/hashicorp/aws/4.14.0/docs/resources/s3_bucket_lifecycle_configuration#id

This means that if the id is omitted in the configuration, there is no way to know the id without reading tfstate or calling the AWS API with aws s3api get-bucket-lifecycle-configuration --bucket <bucketname>.
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html

aws_s3_bucket_lifecycle_configuration: days_after_initiation = 0 causes a drift on import

Summary

When aws_s3_bucket.lifecycle_rule.abort_incomplete_multipart_upload_days was explicitly set to 0 in AWS v3, importing aws_s3_bucket_lifecycle_configuration.abort_incomplete_multipart_upload.days_after_initiation = 0 in AWS v4 with tfmigrate plan fails because a drift is detected.

According to the implementation, setting the parameter is explicitly skipped when the value is zero:
https://github.com/hashicorp/terraform-provider-aws/blob/v3.74.3/internal/service/s3/bucket.go#L2266-L2271
https://github.com/hashicorp/terraform-provider-aws/blob/v4.15.1/internal/service/s3control/bucket_lifecycle_configuration.go#L291-L293

After applying tfmigrate in force mode, running terraform apply with AWS v4 converges the drift.
I'm not sure whether this is a bug in the AWS provider or not.

Version

$ tfedit version
0.0.3

$ tfmigrate -v
0.3.3

$ terraform -v
Terraform v1.2.1
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.15.1

Configuration

AWS v3.74.3

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"

  lifecycle_rule {
    id                                     = "test"
    enabled                                = true
    abort_incomplete_multipart_upload_days = 0
  }
}

AWS v4.15.1

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "test"
    status = "Enabled"

    filter {
      prefix = ""
    }

    abort_incomplete_multipart_upload {
      days_after_initiation = 0
    }
  }
}

Expected behavior

$ terraform plan -out=tmp.tfplan
$ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
$ cat tfmigrate_fromplan.hcl
migration "state" "fromplan" {
  actions = [
    "import aws_s3_bucket_lifecycle_configuration.example tfedit-test",
  ]
}

$ tfmigrate plan tfmigrate_fromplan.hcl
(snip.)
YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator plan success!

Actual behavior

$ tfmigrate plan tfmigrate_fromplan.hcl
2022/05/27 09:03:59 [INFO] [runner] load migration file: tfmigrate_fromplan.hcl
2022/05/27 09:03:59 [INFO] [migrator] start state migrator plan
2022/05/27 09:03:59 [INFO] [migrator@.] terraform version: 1.2.1
2022/05/27 09:03:59 [INFO] [migrator@.] initialize work dir
2022/05/27 09:04:02 [INFO] [migrator@.] get the current remote state
2022/05/27 09:04:03 [INFO] [migrator@.] override backend to local
2022/05/27 09:04:03 [INFO] [executor@.] create an override file
2022/05/27 09:04:03 [INFO] [migrator@.] creating local workspace folder in: terraform.tfstate.d/default
2022/05/27 09:04:03 [INFO] [executor@.] switch backend to local
2022/05/27 09:04:07 [INFO] [migrator@.] compute a new state
2022/05/27 09:04:21 [INFO] [migrator@.] check diffs
2022/05/27 09:04:36 [ERROR] [migrator@.] unexpected diffs
2022/05/27 09:04:36 [INFO] [executor@.] remove the override file
2022/05/27 09:04:36 [INFO] [executor@.] remove the workspace state folder
2022/05/27 09:04:36 [INFO] [executor@.] switch back to remote
terraform plan command returns unexpected diffs: failed to run command (exited 2): terraform plan -state=/tmp/tmp3105549665 -out=/tmp/tfplan2994504524 -input=false -no-color -detailed-exitcode
stdout:
aws_s3_bucket.example: Refreshing state... [id=tfedit-test]
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=tfedit-test]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_s3_bucket_lifecycle_configuration.example will be updated in-place
  ~ resource "aws_s3_bucket_lifecycle_configuration" "example" {
        id     = "tfedit-test"
        # (1 unchanged attribute hidden)

      ~ rule {
            id     = "test"
            # (1 unchanged attribute hidden)

          + abort_incomplete_multipart_upload {
              + days_after_initiation = 0
            }

          - expiration {
              - days                         = 0 -> null
              - expired_object_delete_marker = false -> null
            }

            # (1 unchanged block hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: /tmp/tfplan2994504524

To perform exactly these actions, run the following command to apply:
    terraform apply "/tmp/tfplan2994504524"

stderr:

Create new resources directly below existing aws_s3_bucket resources

Firstly, thank you so much for this tool. It has saved me a lot of time and energy.

The only manual work I needed to do with tfedit was to move the generated resources from the bottom of the file to directly below their parent s3 bucket.

It would be great if it was possible to create the new resources in the middle of the file next to the bucket, rather than appending to the bottom of the file.

Thanks again!

The `tfedit migration fromplan` command should suppress creating a migration file when there are no actions

Summary

The tfedit migration fromplan command should suppress creating a migration file when there are no actions.

The current implementation generates a migration file with no actions even if terraform plan has no changes. This is not only redundant, but also causes an error because tfmigrate rejects it as an invalid migration file.

Version

$ tfedit version
0.0.3

$ terraform -v
Terraform v1.1.9
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.14.0

$ tfmigrate -v
0.3.3

Expected behavior

If terraform plan has no changes, a migration file should not be created.

$ terraform plan -out=tmp.tfplan
$ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
$ cat tfmigrate_fromplan.hcl
cat: tfmigrate_fromplan.hcl: No such file or directory

Actual behavior

$ terraform plan -out=tmp.tfplan
$ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
$ cat tfmigrate_fromplan.hcl
migration "state" "fromplan" {
  actions = [
  ]
}

$ tfmigrate plan tfmigrate_fromplan
(snip.)

faild to NewMigrator with no actions

aws_s3_bucket_lifecycle_configuration: empty filter and tags causes a drift on import

Summary

When aws_s3_bucket.lifecycle_rule.filter and tags were empty in AWS v3, importing aws_s3_bucket_lifecycle_configuration in AWS v4 with tfmigrate plan fails because a drift is detected.

When both filter and tags were empty in v3, just removing the empty filter in v4 converges the drift.
This looks like a similar problem to #29, but it is actually a different one.

Version

$ tfedit version
0.0.3

$ tfmigrate -v
0.3.3

$ terraform -v
Terraform v1.2.1
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.15.1

Configuration

AWS v3.74.3

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"

  lifecycle_rule {
    id      = "log"
    enabled = true
    prefix  = ""
    tags    = {}

    noncurrent_version_transition {
      days          = 30
      storage_class = "GLACIER"
    }

    noncurrent_version_expiration {
      days = 90
    }
  }
}

AWS v4.15.1

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id = "log"

    noncurrent_version_transition {
      storage_class   = "GLACIER"
      noncurrent_days = 30
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
    status = "Enabled"

    filter {

      and {
        prefix = ""
        tags   = {}
      }
    }
  }
}

Expected behavior

$ terraform plan -out=tmp.tfplan
$ terraform show -json tmp.tfplan | tfedit migration fromplan -o=tfmigrate_fromplan.hcl
$ cat tfmigrate_fromplan.hcl
migration "state" "fromplan" {
  actions = [
    "import aws_s3_bucket_lifecycle_configuration.example tfedit-test",
  ]
}

$ tfmigrate plan tfmigrate_fromplan.hcl
(snip.)
YYYY/MM/DD hh:mm:ss [INFO] [migrator] state migrator plan success!

Actual behavior

$ tfmigrate plan tfmigrate_fromplan.hcl
2022/05/27 09:47:02 [INFO] [runner] load migration file: tfmigrate_fromplan.hcl
2022/05/27 09:47:02 [INFO] [migrator] start state migrator plan
2022/05/27 09:47:02 [INFO] [migrator@.] terraform version: 1.2.1
2022/05/27 09:47:02 [INFO] [migrator@.] initialize work dir
2022/05/27 09:47:05 [INFO] [migrator@.] get the current remote state
2022/05/27 09:47:06 [INFO] [migrator@.] override backend to local
2022/05/27 09:47:06 [INFO] [executor@.] create an override file
2022/05/27 09:47:06 [INFO] [migrator@.] creating local workspace folder in: terraform.tfstate.d/default
2022/05/27 09:47:06 [INFO] [executor@.] switch backend to local
2022/05/27 09:47:10 [INFO] [migrator@.] compute a new state
2022/05/27 09:47:24 [INFO] [migrator@.] check diffs
2022/05/27 09:47:39 [ERROR] [migrator@.] unexpected diffs
2022/05/27 09:47:39 [INFO] [executor@.] remove the override file
2022/05/27 09:47:39 [INFO] [executor@.] remove the workspace state folder
2022/05/27 09:47:39 [INFO] [executor@.] switch back to remote
terraform plan command returns unexpected diffs: failed to run command (exited 2): terraform plan -state=/tmp/tmp783177411 -out=/tmp/tfplan2179793936 -input=false -no-color -detailed-exitcode
stdout:
aws_s3_bucket.example: Refreshing state... [id=tfedit-test]
aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=tfedit-test]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_s3_bucket_lifecycle_configuration.example will be updated in-place
  ~ resource "aws_s3_bucket_lifecycle_configuration" "example" {
        id     = "tfedit-test"
        # (1 unchanged attribute hidden)

      ~ rule {
            id     = "log"
            # (1 unchanged attribute hidden)

          ~ filter {
              + and {}
            }


            # (2 unchanged blocks hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: /tmp/tfplan2179793936

To perform exactly these actions, run the following command to apply:
    terraform apply "/tmp/tfplan2179793936"

stderr:

aws_s3_bucket_lifecycle_configuration: An argument named "tags" is not expected here.

Summary

The tags argument of aws_s3_bucket_lifecycle_configuration should be wrapped in an and block.
When both the tags and filter arguments are defined, it works as expected. However, when only tags is defined and filter is not, the result doesn't wrap tags in an and block. Note that the tags parameter is only valid inside an and block in the schema.

https://registry.terraform.io/providers/hashicorp/aws/3.74.3/docs/resources/s3_bucket#lifecycle_rule
https://registry.terraform.io/providers/hashicorp/aws/4.14.0/docs/resources/s3_bucket_lifecycle_configuration

Version

$ tfedit version
0.0.3

$ terraform -v
Terraform v1.1.9
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.14.0

Expected behavior

tmp/iss29/main.tf

before

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"

  lifecycle_rule {
    id      = "log"
    enabled = true

    tags = {
      rule      = "log"
      autoclean = "true"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
  }

  lifecycle_rule {
    id      = "tmp"
    prefix  = "tmp/"
    enabled = true

    expiration {
      days = 90
    }
  }
}
$ cat tmp/iss29/main.tf | tfedit filter awsv4upgrade

after

resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id = "log"


    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
    status = "Enabled"

    filter {

      and {
        prefix = ""
        tags = {
          rule      = "log"
          autoclean = "true"
        }
      }
    }
  }

  rule {
    id = "tmp"

    expiration {
      days = 90
    }
    status = "Enabled"

    filter {
      prefix = "tmp/"
    }
  }
}

Actual behavior

$ cat tmp/iss29/main.tf | tfedit filter awsv4upgrade
resource "aws_s3_bucket" "example" {
  bucket = "tfedit-test"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id = "log"


    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 60
      storage_class = "GLACIER"
    }

    expiration {
      days = 90
    }
    status = "Enabled"

    filter {
      prefix = ""
      tags = {
        rule      = "log"
        autoclean = "true"
      }
    }
  }

  rule {
    id = "tmp"

    expiration {
      days = 90
    }
    status = "Enabled"

    filter {
      prefix = "tmp/"
    }
  }
}

Custom provider not copied to new resources

If you set a provider on an S3 resource, it's not copied to the child S3 resources

resource "aws_s3_bucket" "bucket" {
  provider = aws.ohio
  bucket   = "mybucket"
  acl      = "private"
}

After migration:

resource "aws_s3_bucket" "bucket" {
  provider = aws.ohio
  bucket   = "mybucket"
}

resource "aws_s3_bucket_acl" "bucket" {
  bucket = aws_s3_bucket.bucket.id
  acl    = "private"
}

This will cause an error when you try to import:

│ Error: error getting S3 bucket ACL (bucket,private): AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2'
