hashicorp / terraform-plugin-sdk

Terraform Plugin SDK enables building plugins (providers) that let Terraform manage external services and custom in-house solutions

Home Page: https://developer.hashicorp.com/terraform/plugin

License: Mozilla Public License 2.0

Go 88.32% Makefile 0.12% Shell 0.19% HCL 0.03% MDX 11.35%
terraform sdk grpc grpc-go terraform-provider

terraform-plugin-sdk's Introduction

Terraform Plugin SDK

This SDK enables building Terraform plugins, which allow Terraform's users to manage existing and popular service providers as well as custom in-house solutions. The SDK is stable and broadly used across the provider ecosystem.

For new provider development, it is recommended to investigate terraform-plugin-framework, a reimagined provider SDK that supports additional capabilities. Refer to the Which SDK Should I Use? documentation for more information about the differences between the SDKs.

Terraform itself is a tool for building, changing, and versioning infrastructure safely and efficiently. You can find more about Terraform on its website and its GitHub repository.

Terraform CLI Compatibility

Terraform 0.12.0 or later is needed for version 2.0.0 and later of the Plugin SDK.

When running provider tests, Terraform 0.12.26 or later is needed for version 2.0.0 and later of the Plugin SDK. Outside of testing, providers built with the SDK continue to work with any Terraform version from 0.12.0 onward.

Go Compatibility

This project follows Go's own support policy: the two latest major releases of Go are supported.

Currently, that means Go 1.21 or later must be used when including this project as a dependency.
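For example, a provider module might declare the minimum Go version in its go.mod (an illustrative fragment; the module path and SDK version are placeholders):

```
module example.com/terraform-provider-example

go 1.21

require github.com/hashicorp/terraform-plugin-sdk/v2 v2.0.0
```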

Getting Started

See the Call APIs with Terraform Providers guide on learn.hashicorp.com for a guided tour of provider development.

Documentation

See the Extending Terraform section on the website.

Scope (Providers vs. Core)

Terraform Core

  • acts as gRPC client
  • interacts with the user
  • parses (HCL/JSON) configuration
  • manages state as whole, asks Provider(s) to mutate provider-specific parts of state
  • handles backends & provisioners
  • handles inputs, outputs, modules, and functions
  • discovers Provider(s) and their versions per configuration
  • manages Provider(s) lifecycle (i.e. spins up & tears down provider process)
  • passes relevant parts of parsed (valid JSON/HCL) and interpolated configuration to Provider(s)
  • decides ordering of (Create, Read, Update, Delete) operations on resources and data sources
  • ...

Terraform Provider (via this SDK)

  • acts as gRPC server
  • executes any domain-specific logic based on received parsed configuration
    • (Create, Read, Update, Delete, Import, Validate) a Resource
    • Read a Data Source
  • tests domain-specific logic via provided acceptance test framework
  • provides Core updated state of a resource or data source and/or appropriate feedback in the form of validation or other errors

Migrating to SDK v1 from built-in SDK

Migrating to the standalone SDK v1 is covered on the Plugin SDK section of the website.

Migrating to SDK v2 from SDK v1

Migrating to the v2 release of the SDK is covered in the v2 Upgrade Guide of the website.

Versioning

The Terraform Plugin SDK is a Go module versioned using semantic versioning. See SUPPORT.md for information on our support policies.

Contributing

See .github/CONTRIBUTING.md

License

Mozilla Public License v2.0

terraform-plugin-sdk's People

Contributors

apparentlymart, appilon, armon, bflad, catsby, cgriggs01, danawillow, dependabot[bot], findkim, grubernaut, hashicorp-tsccr[bot], hc-github-team-tf-provider-devex, jbardin, jen20, jtopjian, justincampbell, kmoe, lwander, mbfrahry, mildwonkey, mitchellh, paddycarver, paultyng, pearkes, phinze, radeksimko, sparkprime, stack72, tombuildsstuff, vancluever


terraform-plugin-sdk's Issues

Proposal: helper/schema TypeEnum type

This proposal is for a new type in helper/schema to represent an enum, schema.TypeEnum, as well as highlighting two possible extensions/use cases for the type.

To other provider contributors reading this; if you have use cases or ideas that you can add, feel free to add them below as comments.


It's a common pattern in the Google provider for us to want to consume an enum
value from a Terraform configuration. Using google_bigtable_instance's
storage_type field as an example, we specify it like this right now:

"storage_type": {
    Type:         schema.TypeString,
    Optional:     true,
    ForceNew:     true,
    Default:      "SSD",
    ValidateFunc: validation.StringInSlice([]string{"SSD", "HDD"}, false),
},

A schema.TypeEnum would be represented as a string; that means that it would
not be an object type, and could not have child elements like a TypeList or
TypeSet. It would have a Values attribute which is a slice of strings.

Terraform will automatically run validation.StringInSlice with ignoreCase: false on the value, then run any additional ValidateFunc that is set, displaying any resulting errors.

Terraform will validate that the Default value is present in Values when
performing internal validation.

This means that we would have roughly the same behaviour, as well as validation
that our Default is correct. We could specify storage_type like:

"storage_type": {
    Type:         schema.TypeEnum,
    Optional:     true,
    ForceNew:     true,
    Default:      "SSD",
    Values: []string{"SSD", "HDD"},
},
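The automatic check described above amounts to a case-sensitive membership test. A minimal self-contained sketch of that logic (not the SDK's actual implementation):

```go
package main

import "fmt"

// stringInSlice mimics the case-sensitive check the proposal expects
// Terraform to run automatically against a TypeEnum's Values.
func stringInSlice(valid []string, v string) error {
	for _, s := range valid {
		if s == v {
			return nil
		}
	}
	return fmt.Errorf("expected one of %q, got %q", valid, v)
}

func main() {
	values := []string{"SSD", "HDD"}
	fmt.Println(stringInSlice(values, "SSD")) // <nil>
	fmt.Println(stringInSlice(values, "ssd")) // error: the check is case-sensitive (ignoreCase: false)
}
```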

This functionality alone is what this issue is asking for; like this,
schema.TypeEnum is a nice alias over schema.TypeString that attaches some
more semantic significance. There are two ways we could extend it, though:


This extension would be a nice-to-have, and would provide a lot of benefit for
us. It's not necessary for schema.TypeEnum as a whole, but is something I
would like to see as a provider developer.

Past Google Go clients have used strings to represent enums in the API. Newer
Google clients under cloud.google.com/go/, like bigtable, use constants
to represent enum values, and we need to perform awkward mappings from
string -> enum.

Instead of Values accepting a slice of strings, it would take in a
map[string]interface{} of config-side keys to arbitrary values. The
responsibility of casting to the correct type would be on the client code. The
default value specified in schema would be the string key. If the enum is
represented as a string, like Google's older clients such as for Compute, you
would map from string to string.

For context for this example, Bigtable's enums are bigtable.HDD and
bigtable.SSD and have the type bigtable.StorageType; that means we would
then write storage_type as

"storage_type": {
    Type:         schema.TypeEnum,
    Optional:     true,
    ForceNew:     true,
    Default:      "SSD",
    Values: map[string]interface{}{
        "SSD": bigtable.SSD,
        "HDD": bigtable.HDD
    },
},

If a user specified

storage_type = "HDD"

then we would get it in code in the correct type like:

storageType := d.Get("storage_type").(bigtable.StorageType)

This feature is just as much asking if it's possible as it is a nice-to-have; it
is completely unnecessary for the rest of the proposal, but it lets us handle a
gross edge case that comes up a few times in schema instead of code.
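A self-contained sketch of the lookup this would require; StorageType and its constants here are stand-ins modeled on the bigtable client's types, and values mirrors the proposed Values map:

```go
package main

import "fmt"

// StorageType stands in for bigtable.StorageType from cloud.google.com/go/bigtable.
type StorageType int32

const (
	SSD StorageType = iota // stand-in for bigtable.SSD
	HDD                    // stand-in for bigtable.HDD
)

// values is what the proposed Values: map[string]interface{} would hold.
var values = map[string]interface{}{
	"SSD": SSD,
	"HDD": HDD,
}

func main() {
	// The config-side string key selects the arbitrary client-side value;
	// casting to the concrete type is the provider code's responsibility.
	raw := values["HDD"]
	storageType := raw.(StorageType)
	fmt.Println(storageType == HDD) // true
}
```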

Having a constrained set of values would also let us perform validations based
on what the user has specified in their config file. As a specific example,
google_sql_database_instance has a field database_version that could be used
as an enum. It looks like this right now:

"database_version": &schema.Schema{
    Type:     schema.TypeString,
    Optional: true,
    Default:  "MYSQL_5_6",
    ForceNew: true,
},

Even though we currently allow free-form input, and rely on the API for
validation, it has a constrained set of potential values. They are: MYSQL_5_5,
MYSQL_5_6, MYSQL_5_7, and POSTGRES_9_6. So, it would look like:

"database_version": &schema.Schema{
    Type:     schema.TypeEnum,
    Optional: true,
    Default:  "MYSQL_5_6",
    ForceNew: true,
    Values: []string{"MYSQL_5_5", "MYSQL_5_6", "MYSQL_5_7", "POSTGRES_9_6"},
},

Because we know exhaustively the set of potential values and schema.TypeEnum
cannot have children, we should be able to use ConflictsWith with specific
enum values. This would give plan-time errors with invalid configurations.

For google_sql_database_instance, we support a replica_configuration object
which represents the API-side mysqlReplicaConfiguration. This isn't valid
when using a POSTGRES_9_6 instance.

We would specify it like so;

"replica_configuration": &schema.Schema{
    Type:     schema.TypeList,
    Optional: true,
    MaxItems: 1,
    ConflictsWith: []string{"database_version.POSTGRES_9_6"},
    Elem: &schema.Resource{ /* omitted */ },
}

So if we specified this in our google_sql_database_instance body:

database_version = "POSTGRES_9_6"
replica_configuration {
}

We would receive an error at plan time like:

Cannot specify `replica_configuration` when `database_version` has value "POSTGRES_9_6"
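The plan-time check this enables can be sketched as follows; checkEnumConflict is a hypothetical helper written for illustration, not SDK code:

```go
package main

import "fmt"

// checkEnumConflict is a hypothetical validation: it rejects a configuration
// in which a block is set while a conflicting enum value is also chosen.
func checkEnumConflict(config map[string]interface{}, block, enumField, enumValue string) error {
	if _, blockSet := config[block]; !blockSet {
		return nil // the conflicting block isn't present, nothing to check
	}
	if v, ok := config[enumField].(string); ok && v == enumValue {
		return fmt.Errorf("cannot specify `%s` when `%s` has value %q", block, enumField, enumValue)
	}
	return nil
}

func main() {
	config := map[string]interface{}{
		"database_version":      "POSTGRES_9_6",
		"replica_configuration": map[string]interface{}{},
	}
	fmt.Println(checkEnumConflict(config, "replica_configuration", "database_version", "POSTGRES_9_6"))
}
```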

Implement graceful shutdown for helper/resource waiter

For many resources it's currently impossible for Terraform to gracefully stop the execution mid-flight as helper/resource.StateChangeConf is not cancellable from above the resource/provider.

Mitchell has done the initial work in hashicorp/terraform#9536 (also includes changes in Azure provider + 1 resource). The goal is to make similar changes to (most) other providers - at least the ones we know leverage helper/resource.StateChangeConf and pass the StopContext down to the resource.StateChangeConf in the resource code.

Terraform Version

v0.8.5-dev (aa3eda76425a0b936192c6e95e45758e2727ba4b)

Affected Resource(s)

  • any resource using resource.StateChangeConf, but mainly ones that have long timeouts, e.g. AWS RDS, AWS Elasticsearch Domain

Terraform Configuration Files

resource "aws_rds_cluster" "default" {
  cluster_identifier = "aurora-cluster-demo"
  availability_zones = ["eu-west-2a","eu-west-2b"]
  database_name = "mydb"
  master_username = "foo"
  master_password = "barbarbar"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
}

Expected Behavior

aws_rds_cluster.default: Creating...
...
aws_rds_cluster.default: Still creating... (10s elapsed)
^CInterrupt received. Gracefully shutting down...
Apply failed. Interruption received, partial state saved.

Actual Behavior

aws_rds_cluster.default: Creating...
...
aws_rds_cluster.default: Still creating... (10s elapsed)
^CInterrupt received. Gracefully shutting down...
aws_rds_cluster.default: Still creating... (20s elapsed)
aws_rds_cluster.default: Still creating... (30s elapsed)
aws_rds_cluster.default: Still creating... (40s elapsed)
aws_rds_cluster.default: Still creating... (50s elapsed)
aws_rds_cluster.default: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Steps to Reproduce

  1. terraform apply
  2. Ctrl+C (once)

References

MapFieldWriter failing on set inside list

I'm trying to add import functionality for the aws_appautoscaling_policy and aws_appautoscaling_target resources, but am running into problems implementing the former. I've narrowed it down to an issue with (*ResourceData).Set not writing successfully for the step_scaling_policy_configuration attribute in resourceAwsAppautoscalingPolicyRead; the set function fails when writing to step_adjustment. This error seems to occur because addrToSchema in field_reader.go is unable to find a schema for the address []string{"step_scaling_policy_configuration", "0", "step_adjustment"}. That's all I can really suss out -- this is my first time working with Terraform, so any help (perhaps something I've overlooked) would be appreciated!

Terraform Version

v10.7

Debug Output

Error message:

2017/10/12 13:29:06 Error running d.Set("step_scaling_policy_configuration": Invalid address to set: []string{"step_scaling_policy_configuration", "0", "step_adjustment"}

Stack trace in field_reader.go:

    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_reader.go:176 +0x3bf
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).WriteField(0xc4205a15e0, 0xc4208bddd0, 0x3, 0x3, 0x2b0a3a0, 0xc4205a1200, 0x0, 0x0)
        $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:50 +0xdc
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).setSet(0xc420565c20, 0xc4208bddd0, 0x3, 0x3, 0x2b0a3a0, 0xc4205a1200, 0xc4202ce000, 0xffffffffffffffff, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:283 +0x631
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).set(0xc420565c20, 0xc4208bddd0, 0x3, 0x3, 0x2b0a3a0, 0xc4205a1200, 0x0, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:96 +0x2ac
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).setObject(0xc420565c20, 0xc4205a1340, 0x2, 0x2, 0x2d42000, 0xc4208bdbc0, 0xc4208632c0, 0x0, 0xa00208deef0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:202 +0x241
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).set(0xc420565c20, 0xc4205a1340, 0x2, 0x2, 0x2d42000, 0xc4208bdbc0, 0x1, 0x1)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:98 +0x22b
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).setList.func1(0x3245678, 0x1, 0x2d42000, 0xc4208bdbc0, 0x0, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:112 +0x139
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).setList(0xc420565c20, 0xc420439fe0, 0x1, 0x1, 0x2b0a3a0, 0xc4205a1220, 0xc4202cf860, 0xc4208bf800, 0xc42008e120)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:124 +0x20a
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).set(0xc420565c20, 0xc420439fe0, 0x1, 0x1, 0x2b0a3a0, 0xc4205a1220, 0x1, 0x10)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:92 +0x10b
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).WriteField(0xc420565c20, 0xc420439fe0, 0x1, 0x1, 0x2b0a3a0, 0xc4205a1220, 0x0, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:78 +0x561
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*ResourceData).Set(0xc4201012d0, 0x31e515e, 0x21, 0x2b0a3a0, 0xc4205a1220, 0x0, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go:191 +0x149
github.com/terraform-providers/terraform-provider-aws/aws.resourceAwsAppautoscalingPolicyRead(0xc4201012d0, 0x2e6e000, 0xc42010e200, 0x1, 0x1)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_appautoscaling_policy.go:308 +0x57f
github.com/terraform-providers/terraform-provider-aws/aws.resourceAwsAppautoscalingPolicyCreate(0xc4201012d0, 0x2e6e000, 0xc42010e200, 0x24, 0x4a31da0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_appautoscaling_policy.go:283 +0x451
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*Resource).Apply(0xc4202c85a0, 0xc4205ad6d0, 0xc42055b680, 0x2e6e000, 0xc42010e200, 0xc420023901, 0x47, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/resource.go:193 +0x3b6
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*Provider).Apply(0xc4201f53b0, 0xc4205633b0, 0xc4205ad6d0, 0xc42055b680, 0x1, 0x0, 0x1800)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/provider.go:259 +0xa4
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform.(*EvalApply).Eval(0xc4206d8680, 0x49cbf60, 0xc4205717a0, 0x2, 0x2, 0x31955e0, 0x4)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform/eval_apply.go:57 +0x239
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform.EvalRaw(0x49bdb80, 0xc4206d8680, 0x49cbf60, 0xc4205717a0, 0x0, 0x0, 0x0, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform/eval.go:53 +0x173
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform.(*EvalSequence).Eval(0xc420266300, 0x49cbf60, 0xc4205717a0, 0x2, 0x2, 0x31955e0, 0x4)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform/eval_sequence.go:14 +0x7e
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform.EvalRaw(0x49be540, 0xc420266300, 0x49cbf60, 0xc4205717a0, 0x2d0afe0, 0x49cce02, 0x2b3f500, 0xc420591730)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform/eval.go:53 +0x173
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform.Eval(0x49be540, 0xc420266300, 0x49cbf60, 0xc4205717a0, 0xc420266300, 0x49be540, 0xc420266300, 0xc42006f800)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform/eval.go:34 +0x4d
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform.(*Graph).walk.func1(0x310cd60, 0xc4204f6048, 0x0, 0x0)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/terraform/graph.go:126 +0xd0d
github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/dag.(*Walker).walkVertex(0xc4208f8460, 0x310cd60, 0xc4204f6048, 0xc420796040)
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/dag/walk.go:387 +0x3c1
created by github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/dag.(*Walker).Update
    $GOPATH/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/dag/walk.go:310 +0x1364

Expected Behavior

d.Set("step_scaling_policy_configuration", <value>) should successfully write the value.

Actual Behavior

It does not!

Steps to Reproduce

Run the TestAccAWSAppautoScalingPolicy_basic acceptance test, and log the error returned on resource_aws_appautoscaling_policy.go:302

Reset resource without destroying/recreating

Many providers have support for "restore from factory-settings"-like functionality, e.g. you can revert a machine state to a droplet image or ISO file, without destroying the resource itself. This could be useful to expose through Terraform (terraform reset my-server?) since in immutable infrastructure, you might want to reset a resource, but not lose its otherwise ephemeral properties, like its IP address, or the contents of a data disk which is separate from the boot disk.
(Not all providers support floating IPs and/or disk mounts separate from the resource itself)

Support 0.12 Diagnostics in Providers

Errors:

  * aws_db_subnet_group.main: only lowercase alphanumeric characters, hyphens, underscores, periods, and spaces allowed in "name"

Suggestion 1) Specify the module in which the error occurred. If a module has only one of a certain resource, I'd prefer to name it, for example, aws_subnet.main, but in current TF I use aws_subnet.mymodule to make error messages more useful.

Suggestion 2) For errors like the above, output the specific bad value that was passed for name.

helper/resource: Support Expected/Unexpected Diff Changes in TestStep

In the resource acceptance testing framework, we run into cases where we need to test the update behaviors of a resource; however, the framework has no way to ensure that a resource update is not actually a destroy/create (forces new resource), which may or may not be expected.

Expected Behavior

When we have acceptance testing that should only be an update, that it performs an update, not create/destroy.

Actual Behavior

The test silently passes, since this check is not supported at the moment. We need to manually look for create/destroy actions on update in the acceptance test debug logs to ensure we are not recreating resources.

Steps to Reproduce

e.g. in the AWS provider repository:
make testacc TEST=./aws TESTARGS='-run=TestAccAwsXXX_MyUpdateTest'

Additional Context

I'm linking to the latest reference of this issue in the AWS provider, but I have assuredly run into this myself in the past.

References

Implementation Proposal

(Sorry if this looks incorrect, it's my first time diving into the diff code 😄)

Option(s) are available on the resource.TestStep struct for either expected or unexpected diff actions. A few competing ideas:

  • ExpectedDiffChanges []string for testing all resources in the TestStep, e.g. terraform.DiffUpdate
  • ExpectedDiffChanges map[string]string for targeted diff behavior at the resource layer, e.g. "resource.name": terraform.DiffUpdate

I think I lean towards the latter, since it more succinctly captures what you are concerned about and can ignore other resources that are not defined in the map.

This would in turn add logic inside a test step to ensure the Terraform diff for the specified resource(s) matches the expected change, failing the test step otherwise.

for k, v := range step.ExpectedDiffChanges {
  if p.Diff == nil || p.Diff.Empty() {
    return state, fmt.Errorf("Expected a non-empty plan, but got an empty plan!")
  }
  instanceDiff, ok := p.Diff.RootModule().Resources[k]
  if !ok {
    return state, fmt.Errorf("Expected a non-empty diff for %s, but got an empty diff!", k)
  }
  if instanceDiff.ChangeType() != v {
    // It doesn't look like we provide a mapping from DiffChangeType to something friendly.
    // For plans, the human-readable operation is buried in ModuleDiff.String(),
    // so this would just notify the tester, who would need to check the debug logs.
    return state, fmt.Errorf("Unexpected diff for %s!", k)
  }
}

Changing default value to nil for TypeInt in Optional parameter [ Custom Providers Development ]

Hi,

Questions

  • Is there any way to set the default value of a TypeInt to nil for optional parameters in a Schema?

  • Is there any way to tell whether a value was provided by the user or was auto-set?

Terraform Version

Terraform v0.10.8

Use Case

I am building a custom Terraform provider in which I need to call APIs based on the configuration.

  • Some of my fields are optional, and even if I pass nothing for them in the tf configuration, they are set to zero.
  • I can't filter them out when building the JSON request, because a field may legitimately hold zero and I don't know of any way to differentiate between user input and the auto-set value.

Please suggest a way to make this work.

Provide tfstate file (or a representation thereof) to TestCheckFuncs

Terraform Version

Terraform v0.7.0

I think this is a feature request, although I might just be missing something. Any help would be appreciated, thanks!

Currently, TestCheckFuncs are passed a *terraform.State which, as far as I understand, represents the state of the instances terraform is managing. That state is refreshed after create/update and before TestCheckFuncs are executed. This means that when testing various failure scenarios to ensure the proper implementation of partial state management, there is no way to inspect the state that would be written out to the tfstate file via the argument passed to the TestCheckFunc, as the two states can differ depending on the partial update implementation. How can I write a test that ensures my implementation of partial state management is as I expect? It would be nice if there was a way for TestCheckFuncs to inspect the state file, or some representation of the state that would be written to the file.

Crash during plan - interface {} is string, not map[string]interface {}

Hi there,


Terraform Version

Terraform v0.10.8

Terraform Configuration Files

Crash Output

https://gist.github.com/anonymous/2deff2ea451b4a2b77861e42bdfc58cd

Expected Behavior

terraform plan to work

Actual Behavior

Terraform crashed.

Steps to Reproduce


  1. terraform init
  2. terraform plan

Unchanged NestedSets are not returned by DiffFieldReader.

This issue has a PR fix here: hashicorp/terraform#8891
I'm opening an issue for posterity, but feel free to close if the PR is good enough.

Given a schema such as:

schema := map[string]*Schema{
  "list_of_sets_1": &Schema{
    Type: TypeList,
    Elem: &Resource{
      Schema: map[string]*Schema{
        "nested_set": &Schema{
          Type: TypeSet,
          Elem: &Resource{
            Schema: map[string]*Schema{
              "val": &Schema{
                Type: TypeInt,
              },
            },
          },
          Set: func(a interface{}) int {
            m := a.(map[string]interface{})
            return m["val"].(int)
          },
        },
      },
    },
  },
  "list_of_sets_2": &Schema{
    Type: TypeList,
    Elem: &Resource{
      Schema: map[string]*Schema{
        "nested_set": &Schema{
          Type: TypeSet,
          Elem: &Resource{
            Schema: map[string]*Schema{
              "val": &Schema{
                Type: TypeInt,
              },
            },
          },
          Set: func(a interface{}) int {
            m := a.(map[string]interface{})
            return m["val"].(int)
          },
        },
      },
    },
  },
}

And a configuration such as:

resource "provider_name_resource_name" "test" {
  list_of_sets_1 {
    nested_set {
      val = 1
    }
  }
  list_of_sets_2 {
    nested_set {
      val = 1
    }
  }
}

Updating the val of the nested_set in list_of_sets_1 from 1 to 2:

resource "provider_name_resource_name" "test" {
  list_of_sets_1 {
    nested_set {
      val = 2
    }
  }
  list_of_sets_2 {
    nested_set {
      val = 1
    }
  }
}

Results in resourceData.Get("list_of_sets_2") returning an empty *schema.Set.

This PR provides a unit test along with a fix. The fix is to check if the state has a value for the field after determining that the value does not exist in the diff. This matches the behavior of the primitive and other field types.
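The fix described amounts to a fall-through read: consult the diff first, then fall back to state when the diff has no entry. A toy sketch of that order of precedence (readField is hypothetical; the real reader operates on addresses and schemas):

```go
package main

import "fmt"

// readField mimics the fall-through: a field absent from the diff should be
// read from state rather than reported as empty.
func readField(diff, state map[string]interface{}, addr string) (interface{}, bool) {
	if v, ok := diff[addr]; ok {
		return v, true // the diff takes precedence when it has a value
	}
	if v, ok := state[addr]; ok {
		return v, true // otherwise the unchanged value comes from state
	}
	return nil, false
}

func main() {
	diff := map[string]interface{}{"list_of_sets_1.0.nested_set": 2}
	state := map[string]interface{}{
		"list_of_sets_1.0.nested_set": 1,
		"list_of_sets_2.0.nested_set": 1,
	}
	v, ok := readField(diff, state, "list_of_sets_2.0.nested_set")
	fmt.Println(v, ok) // 1 true: the unchanged set is still visible
}
```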

Terraform version for the User-Agent string is hard coded to v0.10.6

Terraform Version

Terraform v0.11.2

Terraform Configuration Files

N/A

Debug Output

N/A

Crash Output

N/A

Expected Behavior

The actual version of the Terraform Runtime should be emitted.

Actual Behavior

The Terraform version in the User-Agent string is hard coded to v0.10.6.

Steps to Reproduce

  1. Use the latest Terraform (v0.11.2) to provision an Azure resource, specifying the latest provider version.
  2. Search the ARM log; you will see the User-Agent string always reports v0.10.6.

helper/schema feature: nestable resources

I've talked about this feature on several threads now. Time to centralize the conversation!

Background

The context: some resources are so closely related to each other that they are nearly always considered together: Security Group / Rule, DNS Zone / Record, IAM Group / User, etc.

The legacy implementation: it was common to model these as "sub-resources" in the schema, e.g. Security Group Rules Sub-Resource

The problem: "sub-resources" are bug-prone, difficult for provider implementors to work with, and require that all details be known at resource definition time.

Interim solution: lean on top-level resources for their simplicity in implementation and their flexibility (e.g. Security Group Rule Top-Level Resource)

Remaining problem: top-level resources are clunky and verbose to work with in configs

Proposed Solution

Add a helper/schema feature I'm calling "nestable resources", allowing provider authors to configure places to nest top-level resources into the definition of a related resource.

A sketch of what this might look like from the provider implementation side, using Security Group Rules as an example:

// in SecurityGroup resource definition...
"ingress": &schema.NestedResource{
  ResourceType: "aws_security_group_rule",

  // FixedAttributes are always set on the nested resource
  FixedAttributes: map[string]interface{}{
    "type": "ingress",
  },

  // MappedAttributes define which fields of the parent resource to
  // map into the nested resource
  //    - keys: parent attribute name
  //    - values: nested resource attribute name
  MappedAttributes: map[string]string{
    "id": "security_group_id",
  },
},
// ...

This would make the following two configs equivalent:

resource "aws_security_group" "foo" {
  // ...
  ingress {
    // ...
  }
}
resource "aws_security_group" "foo" {
  // ...
}

resource "aws_security_group_rule" "foo" {
  security_group_id = "${aws_security_group.foo.id}"
  // ...
}

Setting Sensitive to true for a schema.TypeSet/TypeList parameter in a resource has no effect

Terraform Version

0.9.5

Affected Resource(s)

Writing a new resource where some of the fields are TypeSet or TypeList

terraform plan

For a TypeMap field I can see -
parameters.%: "1"
parameters.a: "<sensitive>"
So this works.

Expected Behavior

All the fields in the Set or List should be marked as sensitive
For example -
tags.#: "1"
tags.677375163: "<sensitive>"

Actual Behavior

TypeSet/TypeList fields for a resource appear as they are in the tf file

For example -
tags.#: "1"
tags.677375163: "sometag"

schema: Add RequiredWhen

This would help hashicorp/terraform#2089 and also hashicorp/terraform#1822 but I believe many other resources would benefit from such option as well.

Here are the challenges:

  1. It will change the order of parsing, which will affect the CLI prompt -> we'll have to ask the user first for fields that have "children" depending on them, and only then for fields that have RequiredWhen.
  2. I remember @phinze was adding some workarounds to make it work for nested structures, so that ConflictsWith can be used w/ nested fields, I'm not sure if we want to go ahead with similar solution?
  3. I remember having issues with defining nested structures w/ more than 2 levels, but that's probably not scope of this issue?

Any other ideas/gotchas that come to mind, or suggestions for better solutions to the issues mentioned above?

Set implementation does not appropriately handle hash collisions

The set implementation in helper/schema/set.go produces incorrect results if elements in the set have colliding hash code values.

This should be relatively rare in practice because the size of the sets represented by this implementation would typically be relatively small, but the consequences of a collision would be very surprising to encounter.

The method used by, e.g., HashSet in Java is to also require an equality operation to be defined for elements in the set. I don't think that would be possible without changing the interface of TypeSet, so I'm not sure what the best path forward is.

Terraform Version

Terraform v0.6.15-dev (current master / 8cf13d9582309f45e4a04cd4cd36e717b5b60c75)

Affected Resource(s)

All resources using TypeSet.

Terraform Configuration Files

provider "aws" {
    region = "us-east-1"
}

resource "aws_instance" "web" {
    ami = "ami-408c7f28"
    instance_type = "t1.micro"
    tags {
        Name = "HelloWorld"
    }

    // these security group IDs all have the same CRC32 hash code (1373619311)
    // https://gist.github.com/mattmoyer/5565a1dd5795c0ff53daa8e73b06c37b
    vpc_security_group_ids = [
        "sg-8c0f398e",
        "sg-3615fc73",
        "sg-eaf01421",
    ]
}

Expected Behavior

The terraform plan output should show all three security group associations.

Actual Behavior

In this case, aws_instance uses the HashString helper which is a CRC32 checksum. The three security group IDs in this case were deliberately chosen as examples that have a colliding CRC32 value of 1373619311, so they collide and only one of them ends up in the terraform plan output:

[...]
+ aws_instance.web
    ami:                               "" => "ami-408c7f28"
    availability_zone:                 "" => "<computed>"
    ebs_block_device.#:                "" => "<computed>"
    ephemeral_block_device.#:          "" => "<computed>"
    instance_state:                    "" => "<computed>"
    instance_type:                     "" => "t1.micro"
    key_name:                          "" => "<computed>"
    placement_group:                   "" => "<computed>"
    private_dns:                       "" => "<computed>"
    private_ip:                        "" => "<computed>"
    public_dns:                        "" => "<computed>"
    public_ip:                         "" => "<computed>"
    root_block_device.#:               "" => "<computed>"
    security_groups.#:                 "" => "<computed>"
    source_dest_check:                 "" => "1"
    subnet_id:                         "" => "<computed>"
    tags.#:                            "" => "1"
    tags.Name:                         "" => "HelloWorld"
    tenancy:                           "" => "<computed>"
    vpc_security_group_ids.#:          "" => "1"
    vpc_security_group_ids.1373619311: "" => "sg-eaf01421"


Plan: 1 to add, 0 to change, 0 to destroy.

A similar failure case exists for any other SchemaSetFunc implementation used with TypeSet, since they all output a 32-bit code that may have collisions.

Steps to Reproduce

  1. Copy the configuration pasted above into a .tf file, and run terraform plan.

Provider API Batch Requests

Have y'all considered adding support for batching API requests? We are using Terraform to manage 50+ websites and services in a single repository and occasionally hit throttling limits when dealing with DNS resources. I know we could split up our .tfstate file so we don't have to process every record on each apply, but even so it seems like it should be possible to do something like:

resource "aws_route53_record" "example-com_A_example-com" {
  zone_id = "${aws_route53_zone.example-com.zone_id}"
  type = "A"
  name = "example.com"
  ttl = "1"
  records = ["${module.example-com.public_ip}"]
  batch {
    name = "example.com"
  }
}

resource "aws_route53_record" "example-com_CNAME_www-example-com" {
  zone_id = "${aws_route53_zone.example-com.zone_id}"
  type = "CNAME"
  name = "www"
  ttl = "1"
  records = ["example.com"]
  batch {
    name = "example.com"
  }
}

This would only work on a per-resource-type basis, and, if the provider supported it, batching could be controlled by the added configuration shown above.

Is this idea feasible?

Add support for maps with non-primitive types (rich / object map support)

Terraform Version

Terraform v0.6.14

Terraform Configuration Files

resource "aws_api_gateway_stage" "boo" {
    rest_api_id = "${aws_api_gateway_rest_api.demo.id}"
    name = "test"

    method_setting "yada" {
        metrics_enabled = true
        logging_level = "DEBUG"
    }

    method_setting "bada" {
        metrics_enabled = false
        logging_level = "ERROR"
    }
}

Expected Behavior

+ aws_api_gateway_stage.boo
    rest_api_id:                         "" => "${aws_api_gateway_rest_api.demo.id}"
    name:                                "" => "test"
    method_setting.#:                    "" => "2"
    method_setting.yada.metrics_enabled: "" => "1"
    method_setting.yada.logging_level:   "" => "DEBUG"
    method_setting.bada.metrics_enabled: "" => "0"
    method_setting.bada.logging_level:   "" => "ERROR"

Actual Behavior

Error running plan: 1 error(s) occurred:

* method_setting: 2 error(s) decoding:

* '[bada]' expected type 'string', got unconvertible type '[]map[string]interface {}'
* '[yada]' expected type 'string', got unconvertible type '[]map[string]interface {}'

Steps to Reproduce

            "method_setting": &schema.Schema{
                Type:     schema.TypeMap,
                Optional: true,
                Elem: &schema.Resource{
                    Schema: map[string]*schema.Schema{
                        "metrics_enabled": &schema.Schema{
                            Type:     schema.TypeBool,
                            Optional: true,
                        },
                        "logging_level": &schema.Schema{
                            Type:     schema.TypeString,
                            Optional: true,
                        },
                        "data_trace_enabled": &schema.Schema{
                            Type:     schema.TypeBool,
                            Optional: true,
                        },
                    },
                },
            },
$ terraform plan

The current schema does support TypeMap, which translates into map[string]interface{}. The interface is then decoded via mapstructure:
https://github.com/mitchellh/mapstructure/blob/master/mapstructure.go#L70

which seems to support slices of maps, but somehow expects a string.


I'm creating this issue as I'm working around this by putting the key inside a TypeSet as another field, and I plan to link back here.

helper/schema: Online/Network/API validation

Depends on hashicorp/terraform#15895; it's probably better if hashicorp/terraform#15895 is implemented first.


It is certainly undesirable to perform any "slow" validation which requires network access by default in terraform validate, but there's still value in having such validation.

It can be opt-in for validate command and it can also (more importantly) run as part of plan.

The implementation can be very much similar to ValidateFunc, except that the interface needs access to provider's meta.

Example use cases in AWS provider:

"vpc_security_group_ids": {
	Type:     schema.TypeString,
	Optional: true,
	NetworkValidateFunc: func(k string, v, meta interface{}) (ws []string, es []error) {
		if hasEc2Classic(meta.(*AWSClient).supportedplatforms) {
			es = append(es, fmt.Errorf("Use security_groups (with SG names) in EC2 Classic-enabled region"))
			return
		}
		return
	},
},
"security_groups": {
	Type:     schema.TypeString,
	Optional: true,
	NetworkValidateFunc: func(k string, v, meta interface{}) (ws []string, es []error) {
		if !hasEc2Classic(meta.(*AWSClient).supportedplatforms) {
			es = append(es, fmt.Errorf("Use security_group_ids (with SG IDs) in VPC-enabled region"))
			return
		}
		return
	},
},

I'm not sure if NetworkValidateFunc is the best name; this is a simple reminder rather than a full-blown proposal with all the answers.

Related: hashicorp/terraform-provider-aws#3897

vendored grpc library causes panics if a plugin attempts to compile against any other library that uses grpc, vendored or not

Terraform Version

0.10.8, but the problem exists in 0.11.x, too

Crash Output

The only relevant part is right here:

panic: http: multiple registrations for /debug/requests

Expected Behavior

It shouldn't panic

Actual Behavior

panic

Steps to Reproduce

Compile any plugin against both Terraform and some other library that also depends on gRPC (the etcd/clientv3 client is a good example), and make sure the code in the plugin actually initializes gRPC.

At that point the code will panic, because gRPC attempts to register golang.org/x/net/trace against /debug/requests in its package initializer, which runs before any user code can prevent it, and the second registration blows up. If the various libraries were NOT using vendored copies, initialization would only happen once no matter how many times the package is imported, since the toolchain would recognize that all the duplicate imports refer to the same library and run its init only once.

The net result is that it is impossible to write a provider for etcd, because it isn't possible to build a provider that uses the etcd client.

References

A Google search for 'gRPC "panic: http: multiple registrations"' turns up dozens, if not hundreds, of reports of this problem.

Error when fixing template with 2 IAM policies with the same name

I ran into an issue where I accidentally gave the same name to 2 different policies (which works in Terraform; they just overwrite each other). When I tried to apply a fix giving each a different name, I got the following error:

Terraform Version: 0.11.1
Resource ID: aws_cloudfront_distribution.main
Mismatch reason: attribute mismatch: default_cache_behavior.3176785357.allowed_methods.#
Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"default_cache_behavior.3176785357.allowed_methods.#":*terraform.ResourceAttrDiff{Old:"2", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.max_ttl":*terraform.ResourceAttrDiff{Old:"", New:"86400", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.viewer_protocol_policy":*terraform.ResourceAttrDiff{Old:"", New:"allow-all", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.#":*terraform.ResourceAttrDiff{Old:"0", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.#":*terraform.ResourceAttrDiff{Old:"1", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.target_origin_id":*terraform.ResourceAttrDiff{Old:"", New:"disneynow.go.com", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.trusted_signers.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.max_ttl":*terraform.ResourceAttrDiff{Old:"86400", New:"0", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.cached_methods.0":*terraform.ResourceAttrDiff{Old:"HEAD", New:"", NewComputed:false, NewRemoved:true, 
NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.2759845635.query_string":*terraform.ResourceAttrDiff{Old:"", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.default_ttl":*terraform.ResourceAttrDiff{Old:"3600", New:"0", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.2759845635.cookies.2625240281.whitelisted_names.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.lambda_function_association.~1052768284.lambda_arn":*terraform.ResourceAttrDiff{Old:"", New:"${aws_lambda_function.main.qualified_arn}", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.allowed_methods.0":*terraform.ResourceAttrDiff{Old:"", New:"GET", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.cached_methods.0":*terraform.ResourceAttrDiff{Old:"", New:"GET", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.compress":*terraform.ResourceAttrDiff{Old:"", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.trusted_signers.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.min_ttl":*terraform.ResourceAttrDiff{Old:"", 
New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.2759845635.cookies.#":*terraform.ResourceAttrDiff{Old:"0", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.cached_methods.#":*terraform.ResourceAttrDiff{Old:"2", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.min_ttl":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.cached_methods.#":*terraform.ResourceAttrDiff{Old:"0", New:"2", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.lambda_function_association.~1052768284.event_type":*terraform.ResourceAttrDiff{Old:"", New:"viewer-request", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.smooth_streaming":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.cached_methods.1":*terraform.ResourceAttrDiff{Old:"GET", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.lambda_function_association.#":*terraform.ResourceAttrDiff{Old:"1", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.cached_methods.1":*terraform.ResourceAttrDiff{Old:"", New:"HEAD", 
NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.default_ttl":*terraform.ResourceAttrDiff{Old:"", New:"3600", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.target_origin_id":*terraform.ResourceAttrDiff{Old:"disneynow.go.com", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.allowed_methods.0":*terraform.ResourceAttrDiff{Old:"HEAD", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.2759845635.headers.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.field_level_encryption_id":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.2759845635.cookies.#":*terraform.ResourceAttrDiff{Old:"1", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.2759845635.cookies.2625240281.forward":*terraform.ResourceAttrDiff{Old:"", New:"none", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.compress":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, 
"default_cache_behavior.3176785357.lambda_function_association.2300812592.event_type":*terraform.ResourceAttrDiff{Old:"viewer-request", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.2759845635.cookies.2625240281.whitelisted_names.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.lambda_function_association.#":*terraform.ResourceAttrDiff{Old:"0", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.2759845635.query_string":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.allowed_methods.1":*terraform.ResourceAttrDiff{Old:"GET", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.allowed_methods.#":*terraform.ResourceAttrDiff{Old:"0", New:"2", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.smooth_streaming":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.2759845635.query_string_cache_keys.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.field_level_encryption_id":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:false, 
NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.lambda_function_association.2300812592.lambda_arn":*terraform.ResourceAttrDiff{Old:"arn:aws:lambda:us-east-1:049826115511:function:DisneyNowRedirects_edge_nonprod:3", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.2759845635.cookies.2625240281.forward":*terraform.ResourceAttrDiff{Old:"none", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.forwarded_values.2759845635.query_string_cache_keys.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.viewer_protocol_policy":*terraform.ResourceAttrDiff{Old:"allow-all", New:"", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.~2894911237.allowed_methods.1":*terraform.ResourceAttrDiff{Old:"", New:"HEAD", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "default_cache_behavior.3176785357.forwarded_values.2759845635.headers.#":*terraform.ResourceAttrDiff{Old:"0", New:"0", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff(nil), Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}

Default is ignored in properties with a DiffSuppressFunc

Terraform Version

Terraform v0.7.7

Affected Resource(s)

Any resource with a property that has both a Default value and a DiffSuppressFunc

Example code

https://github.com/softlayer/terraform-provider-softlayer/blob/aa6f146/softlayer/resource_softlayer_virtual_guest.go#L197-L209

Description

If you have a property that has both a Default value and a DiffSuppressFunc, Terraform does not seem to use the Default value when generating the ResourceData for a new resource (no existing state yet).

Workaround: https://github.com/softlayer/terraform-provider-softlayer/blob/aa6f146/softlayer/resource_softlayer_virtual_guest.go#L310-L316

Expected Behavior

For Terraform to use the Default value for optional properties when there is no existing resource state yet.

lifecycle configuration to disable field validation

Feature Description

Terraform should provide a way for the user to disable the ValidationFunc for a specific field via a lifecycle configuration.

Example (from @bflad)

resource "XXX" "example" {
  # ... potentially other configuration ...
  example_attribute = "incorrectly-validated-value"

  lifecycle {
    ignore_validation = ["example_attribute"]
  }
}

Reasoning

While discussing an MR about tightening the validation of S3 bucket names, the need for a feature like this came up: there could be old resources that fail the new validation even though no new resources failing the ValidateFunc can be created. The proposed feature would let providers adapt to new requirements while still giving users with grandfathered resources a way to upgrade.

As @bflad mentioned in the linked MR, this would have the additional use case of being able to disable validations when a service has changed in a way that makes a ValidateFunc unnecessarily strict and the provider hasn't been updated yet.

Possible other solutions:

Initially I proposed to solve the problem of grandfathered resources by letting providers declare ValidateFuncs that only run against resources about to be created, not against existing ones, but I think that would be considerably more complex and of less general use than this proposal.

Crash when using `aws_route53_zone`

Terraform Version

$ terraform --version
Terraform v0.11.2
+ provider.aws v1.7.0
+ provider.ignition v1.0.0
+ provider.random v1.1.0

Terraform Configuration Files

# dns.tf
data "aws_route53_zone" "domain" {
  name = "domain.io"
}

resource "aws_route53_zone" "vpc" {
  name = "vpc.${data.aws_route53_zone.domain.name}"
  tags = ["${var.tags}"]
}

resource "aws_route53_record" "vpc_ns" {
  zone_id = "${aws_route53_zone.vpc.zone_id}"
  name    = "vpc.${data.aws_route53_zone.domain.name}"
  type    = "NS"
  ttl     = "5"

  // records = ["${aws_route53_zone.vpc.name_servers}"]
  records = [
    "${aws_route53_zone.vpc.name_servers.0}",
    "${aws_route53_zone.vpc.name_servers.1}",
    "${aws_route53_zone.vpc.name_servers.2}",
    "${aws_route53_zone.vpc.name_servers.3}",
  ]
}

# tags.tf
variable "tags" {
  default = {
    environment = "dev"
    phase       = "research"
    provisioner = "terraform"
    unit        = "multi"
  }

  type = "map"
}

Crash Output

https://gist.github.com/shakefu/48cbd993c8b38c023fb8e67efa69fff3

Expected Behavior

A new plan should be generated including a subdomain zone.

Actual Behavior

Terraform crashes.

Steps to Reproduce

(Probably)

  1. Create DNS zone with tags key.
  2. Crash!

Additional Context

Found this as part of a large configuration. Was able to replicate with the minimal HCL (private domains redacted) above.

Using ConflictsWith with Lists or Sets

I'm trying to use ConflictsWith on a TypeList and it isn't working. I think I've narrowed down the issue, but I'm not entirely sure.

The openstack_compute_instance_v2 resource has a network attribute of TypeList. The list defines several nested attributes that can all be used to create a network.

Some of the attributes can't be used together. For example, port and floating_ip. So I made the following changes:

"port": &schema.Schema{
        Type:     schema.TypeString,
        Optional: true,
        ForceNew: true,
        Computed: true,
        ConflictsWith: []string{"network.floating_ip"},
},
"floating_ip": &schema.Schema{
        Type:     schema.TypeString,
        Optional: true,
        ForceNew: true,
        Computed: true,
        ConflictsWith: []string{"network.port"},
},

Then, given something simple like:

resource "openstack_compute_instance_v2" "instance_1" {
  name = "instance_1"
  network {
    floating_ip = "foo"
    port = "bar"
  }
}

I would expect an error to be thrown; however, one isn't. But if I change the schema to:

"port": &schema.Schema{
        Type:     schema.TypeString,
        Optional: true,
        ForceNew: true,
        Computed: true,
        ConflictsWith: []string{"network.0.floating_ip"},
},
"floating_ip": &schema.Schema{
        Type:     schema.TypeString,
        Optional: true,
        ForceNew: true,
        Computed: true,
        ConflictsWith: []string{"network.0.port"},
},

Then I get a validation error. Of course, that is only applicable to the first defined network.

Throwing some debug output into helper/schema/schema.go and terraform/resource.go, I think the following is happening:

During the loop over conflicting keys, s is network.0.port but conflicting_key is network.floating_ip. When network.floating_ip is checked here, no value is returned, since no index is specified.

Stop silently converting map Elem from Resource to TypeString

Proposed resolution: modify all the schemas which use Type: TypeMap with Elem: Resource; then tighten this validation in getValueType()

Arguably a follow on to hashicorp/terraform#12638

In helper/schema/schema.go:getValueType we see:

if _, ok := schema.Elem.(*Resource); ok {
	// TODO: We don't actually support this (yet)
	// but silently pass the validation, until we decide
	// how to handle nested structures in maps
	return TypeString, nil
}

This has allowed us to write a provider which looks like it's actually validating its elements, but does no such thing.

return &schema.Resource{
	Schema: map[string]*schema.Schema{
		"create_vnic_details": {
			Type:     schema.TypeMap,
			Optional: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"assign_public_ip": {
						Type:     schema.TypeBool,
						Optional: true,
					},
				},
			},
		},
	},
}

I do notice this was originally introduced by @radeksimko in hashicorp/terraform@1df1c21

Expected Behavior

getValueType() should have returned an error like "create_vnic_details: unexpected map value type: schema.Resource"

Actual Behavior

Silently allowed using create_vnic_details.assign_public_ip, but instead of rendering as "true" or "false", it renders as "1" or "0".

Steps to Reproduce

I can do an actual test of this if desired.
Create a provider with the above schema, then use to access create_vnic_details.assign_public_ip

Add means to denote resource restarts/downtime during plan, similar to ForceNew

Following up on @bflad's suggestion in hashicorp/terraform-provider-aws#2250 (review), it would be great if Terraform core supported denoting resource restarts/downtime during plan phase, similar to how it denotes recreation (forces new resource).

Maybe adding a schema modifier like ForcesDowntime, analogous to ForceNew, would do it.

For example, adding ForcesDowntime to the instance_type field of an EC2 instance would warn the user executing the plan that their modification to the configuration, or an out-of-band modification, will result in downtime for that instance.

Default values for TypeList

It is currently not possible to populate TypeList values with default values.

"default_cached_methods": &schema.Schema{
  Type:     schema.TypeList,
  Elem:     &schema.Schema{Type: schema.TypeString},
  Optional: true,
  Default:  []string{"GET", "HEAD"},
},

Useful for attributes which are required by the provider

ImportStateVerify should apply DiffSuppressFuncs

ImportStateVerify tests import a resource with a given id, and then check all fields in the imported state against an existing resource. However, it does these checks with strict equality, meaning that if a diff would have been suppressed, it still shows up in these tests as a diff. It would be great to be able to suppress those automatically so we don't have to complicate our test logic around checking those fields.

Provisioners belonging to providers

Currently there is a strict separation between providers and provisioners, which makes sense given the current set of available provisioners.

However I have some use-cases where a provisioner and a provider would be more closely related:

  • For AWS opsworks (working on that in hashicorp/terraform#1892), certain lifecycle events are triggered via the API and could be useful to use as provisioners on an opsworks stack.
  • With Rundeck (see hashicorp/terraform#2412) one could trigger a rundeck job as a provisioner, allowing rundeck to handle the details of SSHing into the necessary machines and retaining the audit logs of what was done.

While these certainly could be implemented as standalone provisioners that happen to interact with the same APIs as the provider, this is inconvenient both as an implementer (need to re-implement things such as client instantiation, credentials handling) and as a user (need to duplicate all of the provider settings inside the provisioner block, rather than just having them inherit from the provider as we see with resources).

It feels to me like it would be most convenient for providers to be able to provide provisioners as well as resources, and then provider-provided provisioners would get access to the same "meta" value that the resource definitions get access to, which most providers use to stash their API client. Presumably in the dependency graph such a provisioner would depend both on the resource it's provisioning and on the provider it came from.

I'm mainly just opening this ticket to start a discussion about the issue and see if folks have other similar use-cases or alternative approaches.

Show warnings when fixing names via StateFunc?

This is related to hashicorp/terraform#4020 but also many other PRs which have been mimicking the AWS API behaviour in similar ways.

I understand the convenience for users with existing resources, and I understand that is the motivation for @stack72 and others to solve these issues that way. However, I think that Terraform shouldn't be so silent about these fixes.

I think we should be more transparent and at least show warnings if the converted names differ from the original ones.


I'd personally like to use warnings in a way that would allow us to turn on strict validation in the future (after a few versions, when we're sure enough users have been warned), but I understand that convenience for existing users probably takes priority and that others may not like this (correct me if I'm wrong).
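A sketch of the proposed transparency, assuming a lowercasing StateFunc like the AWS-style normalizations mentioned (normalize is illustrative; the warning mechanism itself is what's being proposed):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize mimics a typical name-fixing StateFunc (here: lowercasing).
func normalize(name string) string { return strings.ToLower(name) }

func main() {
	given := "MyBucket"
	// The proposal: when the stored value differs from what the user
	// wrote, surface a warning instead of fixing it silently.
	if fixed := normalize(given); fixed != given {
		fmt.Printf("Warning: %q was normalized to %q\n", given, fixed)
	}
}
```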

How to debug terraform

Is there any better way to debug a Terraform provider other than using TF_LOG=DEBUG?
Here are the steps I used to produce the log:

set TF_LOG=DEBUG
set TF_LOG_PATH=/tmp/log
terraform apply
observe TRACE level logs in the file /tmp/log

Please let me know if there is a better way than this.

ConflictsWith should flag a violation only if the field has a non zero value set.

Hi there,

The issue originally came up in the terraform-provider-google.
hashicorp/terraform-provider-google#683

As of now, when ConflictsWith looks for violations, it only checks if the other field is set. Ideally, it would check that the field is set and that it doesn't have the zero value (at least for strings).

The code checking for violations of ConflictsWith:

for _, conflicting_key := range schema.ConflictsWith {
	if value, ok := c.Get(conflicting_key); ok {
		return fmt.Errorf("%q: conflicts with %s (%#v)", k, conflicting_key, value)
	}
}

https://github.com/hashicorp/terraform/blob/master/helper/schema/schema.go?#L1235

It would be nice if c.Get had behavior similar to the GetOk method:
https://github.com/hashicorp/terraform/blob/master/helper/schema/resource_data.go#L89

This would make it easy to create parameterizable modules where you have two conflicting fields and the user specifies one or the other.
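A minimal standalone sketch of the zero-value semantics this proposal implies (isZeroValue is a hypothetical helper, not part of the SDK):

```go
package main

import (
	"fmt"
	"reflect"
)

// isZeroValue reports whether v is nil or equals the zero value for its
// type — the extra check the issue proposes before flagging a
// ConflictsWith violation.
func isZeroValue(v interface{}) bool {
	if v == nil {
		return true
	}
	return reflect.DeepEqual(v, reflect.Zero(reflect.TypeOf(v)).Interface())
}

func main() {
	fmt.Println(isZeroValue(""))     // empty string: no conflict
	fmt.Println(isZeroValue("1234")) // real value: conflict applies
	fmt.Println(isZeroValue(0))      // zero int: no conflict
}
```

With such a check, passing an empty org_id alongside a real folder_id would no longer trip the conflict error.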

Here is a concrete example in the google provider where this would be useful:

Specifying an empty string value for org_id or folder_id does not "unset" the argument for the google_project resource.

My scenario is to have a module that parameterizes the creation of a project:

resource "google_project" "host" {
  name            = "${var.project_name}"
  org_id          = "${var.org_id}"
  folder_id       = "${var.folder_id}"
  project_id      = "${var.project_id == "" ? random_id.default-project-id.hex : var.project_id}"
  billing_account = "${var.billing_account}"
}

In this example I can't provide both org_id and folder_id even if one is empty "" without getting this error:

Error: module.shared-vpc.google_project.host: "folder_id": conflicts with org_id ("1234567890")

Error: module.shared-vpc.google_project.host: "org_id": conflicts with folder_id ("")

If I could pass an empty value for either org_id or folder_id, that would be fantastic.

/cc @danisla

Ability to write resource implementation in scripted languages

Looking at the plugin documentation, it should be pretty easy to write a plugin that delegates all work to a shell script written in bash or python, namely calling it with an operation name (get-schema, Create, Read, Update, Delete, Exists) and passing data via environment variables.

Motivation: manage application deployment without killing the application host in the process (it may host many applications, it is slower, there could be useful data, etc.).
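The delegation described above can be sketched in a few lines of Go. This is only an illustration of the idea: runOperation and the TF_ATTR_ variable prefix are hypothetical, not an established convention.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runOperation invokes an external script with the lifecycle operation
// name (Create, Read, Update, Delete, Exists) as its argument and the
// resource attributes exported as TF_ATTR_* environment variables.
func runOperation(script, op string, attrs map[string]string) (string, error) {
	cmd := exec.Command(script, op)
	cmd.Env = os.Environ()
	for k, v := range attrs {
		cmd.Env = append(cmd.Env, fmt.Sprintf("TF_ATTR_%s=%s", k, v))
	}
	out, err := cmd.Output()
	return string(out), err
}

func main() {
	// Demo with /bin/echo standing in for a real provisioning script.
	out, err := runOperation("/bin/echo", "Read", map[string]string{"name": "demo"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The script would read the TF_ATTR_* variables, perform the operation, and print the resulting state for the plugin to parse.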

helper/schema: can't track changes to individual fields in a schema.Set

Currently, when we have a logical resource in a schema.Set (e.g. an ebs_block_device in an aws_instance) and there are computed fields, we key the hash off the minimal unique set of fields. Because of this, changes to other fields of that set don't register as a diff. This not only prevents changes that require a new resource from being detected (changing the volume_size or iops of an ebs_block_device won't create a new instance and volume), but also prevents mutable fields like tags from being updated.

This ultimately prevents us from implementing features like hashicorp/terraform#3531, where changes to a tags field would require an update.

  • Adding all fields by default to the hash value doesn't work, because computed fields change the hash value, so there is always a diff between the config and the state.
  • Adding only the fields mutable within the config to the hash value also doesn't work, because the new hash value still creates a new set in the diff, which will trigger the ForceNew for all the remaining fields.
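The core of the problem can be shown in isolation. In this sketch (field choice and hash function are illustrative, not the SDK's actual implementation), the hash is computed only from the "minimal unique" key, so a change to a field outside that key is invisible:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

type blockDevice struct {
	DeviceName string // part of the hash key
	VolumeSize int    // mutable, NOT part of the hash key
}

// hashDevice mimics a set hash keyed off the minimal unique fields.
func hashDevice(d blockDevice) uint32 {
	return crc32.ChecksumIEEE([]byte(d.DeviceName))
}

func main() {
	before := blockDevice{DeviceName: "/dev/sdb", VolumeSize: 8}
	after := blockDevice{DeviceName: "/dev/sdb", VolumeSize: 100}
	// Identical hashes: the size change never registers as a diff.
	fmt.Println(hashDevice(before) == hashDevice(after))
}
```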

Terraform Crash

Hi everyone, I need some advice/help. I am getting the error message below in a pre-prod env; however, it works fine in other envs (e.g. Prod). The currently installed version is Terraform v0.9.11.

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.

Allow defining relationships in config schemas

Given the following piece of code in helper/schema/schema.go

func (m schemaMap) Input(
    input terraform.UIInput,
    c *terraform.ResourceConfig) (*terraform.ResourceConfig, error) {
    keys := make([]string, 0, len(m))
    for k, _ := range m {
        keys = append(keys, k)
    }
    sort.Strings(keys)

    for _, k := range keys {
...

all config options are sorted alphabetically and treated independently in the prompt.
I can imagine cases where I would like the user to enter some config options first and eventually not ask for some other options based on the entered data.

Example?
Imagine following AWS provider config:

  • credentials_provider (detect | iam | env | static)
  • credentials_file_path
  • credentials_file_profile_name
  • access_key
  • secret_key
  • security_token

It is obvious that:

  1. It would be annoying to ask user for all of these
  2. Some options with certain values make other ones superfluous (e.g. credentials_provider = detect, iam, or env means that I should ignore all other options)

Would there be interest in such feature? If not, how would you suggest solving the problem with dependent config options above?
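To make the idea concrete, here is a standalone sketch of prompting conditionally on an earlier answer (keysToAsk and the field grouping are illustrative, mirroring the hypothetical AWS schema above):

```go
package main

import "fmt"

// keysToAsk returns the credential fields that still need to be prompted
// for, given the value already entered for credentials_provider.
func keysToAsk(provider string) []string {
	switch provider {
	case "detect", "iam", "env":
		// Credentials are resolved automatically; nothing more to ask.
		return nil
	case "static":
		return []string{"access_key", "secret_key", "security_token"}
	default:
		// e.g. a file-based provider would prompt for the file options.
		return []string{"credentials_file_path", "credentials_file_profile_name"}
	}
}

func main() {
	fmt.Println(keysToAsk("detect"))
	fmt.Println(keysToAsk("static"))
}
```

The schema would need a way to express these dependencies so that Input could order and filter its prompts accordingly.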

Related: hashicorp/terraform#1049

Crash Referencing terraform_remote_state

Hi

Terraform versions:
Terraform v0.11.3

  • provider.aws v1.10.0
  • provider.template v1.0.0

Crash log located here: https://gist.githubusercontent.com/moorichardmoo/5fb92530c7ec33de93163ce8d8517130/raw/9bc64e80a41d5d4d9c7af8d49e53bea00fc4fcc5/gistfile1.txt

Steps to reproduce:
terraform plan -var-file="terraform.tfvars" -var-file="secret.tfvars"

Expected Behavior:
Terraform plan created

Actual Behavior:
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Steps to Reproduce:
terraform init --backend-config="backend.conf" -reconfigure -upgrade
terraform plan -var-file="terraform.tfvars" -var-file="secret.tfvars"

I've tried a full reboot, upgrading, removing recently added parts of my .tf files, and have tried digging through the crash log but I can't see anything that might help. If you have any ideas I'd be grateful to hear them.

Thanks

Richard

ConflictsWith does not function correctly on fields within a list or set

Terraform Version

v0.9.1

Affected Resource(s)

All, this is a core issue. For example though, we'll focus on the postgresql_schema resource since it has ConflictsWith rules in place on a set object (policy).

Terraform Configuration Files

resource "postgresql_schema" "test" {
  name = "test"

  policy = {
    create            = "true"
    create_with_grant = "true"
  }
}

Expected Behavior

In general, resource properties within a list or set should respect the ConflictsWith attribute and the check should be performed within each item in the list/set.

This should work, but does not:
ConflictsWith: []string { "list_property.conflicting_field_name" }

The above terraform config should result in an error when executing a plan command since the create and create_with_grant fields under policy conflict with each other.

Here is the code for the postgresql resource where the conflicts are configured:

schemaPolicyAttr: &schema.Schema{
    Type:     schema.TypeSet,
    Optional: true,
    Computed: true,
    Elem: &schema.Resource{
        Schema: map[string]*schema.Schema{
            schemaPolicyCreateAttr: {
                Type:          schema.TypeBool,
                Optional:      true,
                Default:       false,
                Description:   "If true, allow the specified ROLEs to CREATE new objects within the schema(s)",
            --> ConflictsWith: []string{schemaPolicyAttr + "." + schemaPolicyCreateWithGrantAttr},
            },
            schemaPolicyCreateWithGrantAttr: {
                Type:          schema.TypeBool,
                Optional:      true,
                Default:       false,
                Description:   "If true, allow the specified ROLEs to CREATE new objects within the schema(s) and GRANT the same CREATE privilege to different ROLEs",
            --> ConflictsWith: []string{schemaPolicyAttr + "." + schemaPolicyCreateAttr},
            },

Actual Behavior

In the terraform config above, the plan command succeeds without error.

ConflictsWith can only work for a list/set if you manually specify the index in the ConflictsWith value, and therefore can only really work when there is a single item in the list.

For example:
ConflictsWith: []string { "list_property.0.conflicting_field_name" }
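Until relative references inside a list/set are supported, the check would effectively have to expand such a reference once per element. A sketch of that expansion (expandConflicts is hypothetical):

```go
package main

import "fmt"

// expandConflicts turns a relative reference like "policy.create_with_grant"
// into one concrete state key per element ("policy.0.create_with_grant",
// "policy.1.create_with_grant", ...), which is the form the current
// ConflictsWith check understands.
func expandConflicts(listAttr, field string, count int) []string {
	keys := make([]string, count)
	for i := range keys {
		keys[i] = fmt.Sprintf("%s.%d.%s", listAttr, i, field)
	}
	return keys
}

func main() {
	fmt.Println(expandConflicts("policy", "create_with_grant", 2))
}
```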

Steps to Reproduce

  1. Using the config above, run terraform plan and just press enter through all the inputs.
  2. Observe that there is no conflict error

References

Discovered while implementing managed disk support for azurerm:

Opened PR for proposed fix:

Diffs: going from zero value to nil

The current diff logic does not account for a transition from a zero value to a nil state. This can be seen here, in that any nils returned during the diff process (in the config layer, specifically) do not get treated as nils, but are converted to that type's zero value.

This creates problems like diffs being completely ignored for primitive transitions including, but not limited to:

false => nil
0 => nil
"" => nil

The first two are the most significant, as there has been plenty of precedent that these states are necessary in Terraform where a nil state has meaning, possibly instructing an API to disable a feature or configure it to inherit settings from a parent subsystem, and is why GetOkExists was exposed - which currently has its own issues, see hashicorp/terraform#17557. These two issues combined basically mean that a nil state only has meaning in a resource if the intention is to never move away from the nil state, which is a one-way trip that can't be reverted without state hacking.
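A minimal illustration of the normalization described, where a nil config value becomes indistinguishable from the type's zero value (normalizeBool stands in for the config-layer behaviour; it is not actual SDK code):

```go
package main

import "fmt"

// normalizeBool mimics the behaviour described above: a nil read from
// config is coerced to the type's zero value, so a `false => nil`
// transition vanishes from the diff.
func normalizeBool(v interface{}) bool {
	b, _ := v.(bool) // a failed assertion on nil yields the zero value, false
	return b
}

func main() {
	var absent interface{} // attribute removed from config
	// Both sides normalize to false, so no diff is detected.
	fmt.Println(normalizeBool(absent) == normalizeBool(false))
}
```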

As a note, both of these issues will be addressed in future releases (namely with the advent of HCL2, as mentioned in hashicorp/terraform#17557). This is mainly just a note to ensure there is a TODO to make sure the diff shortcomings are addressed.

schema: Computed Value Hints for Downstream Validation

Consider the following example:

resource "aws_cloudtrail" "foobar" {
    name = "tf-trail-foobar"
    s3_bucket_name = "${aws_s3_bucket.foo.arn}" # ARN instead of name
    s3_key_prefix = "/prefix"
    include_global_service_events = false
}

resource "aws_s3_bucket" "foo" {
    bucket = "tf-yada-test-trail"
    force_destroy = true
    policy = <<POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {
              "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::tf-yada-test-trail"
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {
              "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::tf-yada-test-trail/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
POLICY
}

Even though it is obvious that an ARN cannot be used in a parameter where we expect a raw name, Terraform isn't able to check this, because the ARN is computed (setting aside the fact that we don't have a ValidateFunc on s3_bucket_name for the moment). Instead, an API error is returned at a point when the S3 bucket has already been created.

Error applying plan:

1 error(s) occurred:

* aws_cloudtrail.foobar: InvalidS3BucketNameException: Bucket name should not contain ':': arn:aws:s3:::tf-yada-test-trail

Maybe this could be done by introducing something like an example_value for each Computed field and output, against which we could then validate?

d.HasChange() is returning true for elements that have not actually changed

Terraform Version

terraform -v
Terraform v0.11.3
+ provider.google (unversioned)

Terraform Configuration Files

resource "google_service_account" "is-1108" {
  account_id   = "is-1108"
  display_name = "is-1108"
}

resource "google_project_iam_member" "is-1108" {
  role   = "roles/editor"
  member = "serviceAccount:${google_service_account.is-1108.email}"
}

resource "google_compute_instance" "is-1108" {
  name         = "is-1108-3"
  machine_type = "n1-standard-1"
  zone         = "us-central1-f"
  tags         = ["foo", "bar"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
    access_config {
      # Ephemeral IP
    }
  }

  service_account {
    email = "${google_service_account.is-1108.email}"
    scopes = []
  }

  depends_on = ["google_project_iam_member.is-1108"]
}

Debug Output

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

google_service_account.is-1108: Refreshing state... (ID: projects/graphite-test-danahoffman-tf/s...danahoffman-tf.iam.gserviceaccount.com)
google_project_iam_member.is-1108: Refreshing state... (ID: graphite-test-danahoffman-tf/roles/edit...danahoffman-tf.iam.gserviceaccount.com)
google_compute_instance.is-1108: Refreshing state... (ID: is-1108-3)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ google_compute_instance.is-1108
      tags.1996459178: "" => "bar"
      tags.2015626392: "baz" => ""
      tags.2356372769: "foo" => "foo"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

https://gist.github.com/danawillow/fac76b10939fab4a31112108e70bbf37
Note the extra statements I added at the end:

2018-02-21T13:26:39.443-0800 [DEBUG] plugin.terraform-provider-google: 2018/02/21 13:26:39 [INFO] machine_type: HasChange false, Old n1-standard-1, New n1-standard-1
2018-02-21T13:26:39.443-0800 [DEBUG] plugin.terraform-provider-google: 2018/02/21 13:26:39 [INFO] min_cpu_platform: HasChange false, Old , New
2018-02-21T13:26:39.443-0800 [DEBUG] plugin.terraform-provider-google: 2018/02/21 13:26:39 [INFO] service_account: HasChange true, Old [map[scopes:0xc42024f280 email:[email protected]]], New [map[email:[email protected] scopes:0xc42024f1a0]]
2018-02-21T13:26:39.443-0800 [DEBUG] plugin.terraform-provider-google: 2018/02/21 13:26:39 [INFO] oScopes: []string{}
2018-02-21T13:26:39.443-0800 [DEBUG] plugin.terraform-provider-google: 2018/02/21 13:26:39 [INFO] nScopes: []string{}

Expected Behavior

Only the tags should have been updated

Actual Behavior

The tags did update, but d.HasChange("service_account") returned true, causing Terraform to try to update the service account as well.

Steps to Reproduce

Use config above

  1. terraform init
  2. terraform apply
  3. Change the tags in some way
  4. terraform apply

Additional Context

None.

References

Originally reported at hashicorp/terraform-provider-google#1108, though the real bug appears to be in core and not the provider.

Add support for 64-bit integers and/or "big" integers?

Motivated by support for Amazon side ASNs in hashicorp/terraform-provider-aws#1888 and hashicorp/terraform-provider-aws#2861 but more widely applicable.

Terraform uses the Go int type for values of TypeInt, and the underlying width of the integer depends on the platform the binary is built for, so integer values greater than 2147483647 cannot safely be represented using TypeInt.

Adding support for TypeInt64 (golang int64) and/or TypeBigInt (golang big.Int) would touch more than just the schema code, affecting HIL, the interpolation functions etc.
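To make the overflow risk concrete, here is a small check of whether a value would survive TypeInt on a 32-bit build (fitsInt32 is illustrative; the ASN value is from the range discussed in the linked AWS issues):

```go
package main

import (
	"fmt"
	"math"
)

// fitsInt32 reports whether a value can be represented by Go's int on a
// 32-bit platform, which is the width TypeInt is limited to there.
func fitsInt32(v int64) bool {
	return v >= math.MinInt32 && v <= math.MaxInt32
}

func main() {
	// Amazon-side ASNs can reach 4294967294, well beyond int32 range.
	fmt.Println(fitsInt32(4294967294))
	fmt.Println(fitsInt32(2147483647))
}
```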

Updating TypeMap during create doesn't work

Terraform Version

Terraform v0.10.7-dev

This could quite possibly be my lack of understanding, but I am working on the libvirt provider and cannot seem to update an entry in a TypeMap during the Create cycle; updating a TypeString works fine.

Here is part of the schema

			"graphics": &schema.Schema{
				Type:     schema.TypeMap,
				Optional: true,
				Computed: true,
			},
			"video_type": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},

during

func resourceLibvirtDomainCreate(d *schema.ResourceData, meta interface{}) error {

I am checking various things and if video_type or graphics.autoport are not set I add defaults.

d.Set("video_type", "cirrus")
d.Set("graphics.autoport", "yes")

When the resource is created and read back, video_type correctly appears; graphics.autoport does not.

I was wondering if this is just expected behaviour and I should be using TypeList or TypeSet, so I can specify the individual structure that forms the graphics block, or is this a bug?
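For what it's worth, d.Set addresses top-level keys, not dotted sub-keys of a map, so the usual pattern is to read the whole map, mutate it, and write it back. A standalone sketch of that pattern (resourceData here is a stand-in for *schema.ResourceData, not the real type):

```go
package main

import "fmt"

// resourceData imitates the Get/Set surface of *schema.ResourceData.
type resourceData map[string]interface{}

func (d resourceData) Get(k string) interface{}    { return d[k] }
func (d resourceData) Set(k string, v interface{}) { d[k] = v }

func main() {
	d := resourceData{"graphics": map[string]interface{}{}}

	// Instead of d.Set("graphics.autoport", "yes"), mutate the whole map:
	g := d.Get("graphics").(map[string]interface{})
	if _, ok := g["autoport"]; !ok {
		g["autoport"] = "yes" // apply the default
	}
	d.Set("graphics", g)

	fmt.Println(d.Get("graphics").(map[string]interface{})["autoport"])
}
```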

Full code is in my graphicsandvnc branch, file libvirt/resource_libvirt_domain.go.

TestAccLibvirtDomain_GraphicsVNCSimple is failing because of this.

Thanks

timestamp in TXT record

Just reporting, because my console asked so nicely :-)

Terraform Version

Terraform v0.10.4

Console Output

Error applying plan:

1 error(s) occurred:

* module.auth0-token-endpoint.aws_route53_record.txt: aws_route53_record.txt: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.

Please include the following information in your report:

    Terraform Version: 0.10.4
    Resource ID: aws_route53_record.txt
    Mismatch reason: attribute mismatch: records.4029139338
    Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"zone_id":*terraform.ResourceAttrDiff{Old:"", New:"Z1HJLND6NK0LRH", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "records.2673983296":*terraform.ResourceAttrDiff{Old:"", New:"at=2017-09-12T14:11:49Z", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"", New:"auth0\\040token\\040endpoint._api._tcp.kodiak.farmad.be", NewComputed:false, NewRemoved:false, NewExtra:"auth0\\040token\\040endpoint._api._tcp.kodiak.farmad.be", RequiresNew:true, Sensitive:false, Type:0x0}, "records.4029139338":*terraform.ResourceAttrDiff{Old:"", New:"submitted=2017-09-12T14:16:31Z", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "type":*terraform.ResourceAttrDiff{Old:"", New:"TXT", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.#":*terraform.ResourceAttrDiff{Old:"", New:"4", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.214571025":*terraform.ResourceAttrDiff{Old:"", New:"txtvers=1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "ttl":*terraform.ResourceAttrDiff{Old:"", New:"30", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.2612474839":*terraform.ResourceAttrDiff{Old:"", New:"path=/oauth/token", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "fqdn":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, 
Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
    Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"name":*terraform.ResourceAttrDiff{Old:"", New:"auth0\\040token\\040endpoint._api._tcp.kodiak.farmad.be", NewComputed:false, NewRemoved:false, NewExtra:"auth0\\040token\\040endpoint._api._tcp.kodiak.farmad.be", RequiresNew:true, Sensitive:false, Type:0x0}, "records.2612474839":*terraform.ResourceAttrDiff{Old:"", New:"path=/oauth/token", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "ttl":*terraform.ResourceAttrDiff{Old:"", New:"30", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.214571025":*terraform.ResourceAttrDiff{Old:"", New:"txtvers=1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "fqdn":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "type":*terraform.ResourceAttrDiff{Old:"", New:"TXT", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.#":*terraform.ResourceAttrDiff{Old:"", New:"4", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.2673983296":*terraform.ResourceAttrDiff{Old:"", New:"at=2017-09-12T14:11:49Z", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "records.3211157325":*terraform.ResourceAttrDiff{Old:"", New:"submitted=2017-09-12T14:16:36Z", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "zone_id":*terraform.ResourceAttrDiff{Old:"", New:"Z1HJLND6NK0LRH", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, 
Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}

Also include as much context as you can about your config, state, and the steps you performed to trigger this error.


Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

This occurs when I use the module (under development)

github.com/peopleware/terraform-ppwcode-modules//serviceInstance?ref=b941f7b8147a3b77c882ef2d95b883870faea209

The issue is clearly with the "submitted" property I try to add to a TXT record. The value of this property is, in the SHA above, filled in with timestamp().

It turns out Terraform doesn't like that.

In the output, the property is reported with different IDs, 4029139338 vs 3211157325, and with a 5-second difference, 2017-09-12T14:16:31Z vs 2017-09-12T14:16:36Z:

"records.4029139338":*terraform.ResourceAttrDiff{Old:"", New:"submitted=2017-09-12T14:16:31Z", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, 

versus

"records.3211157325":*terraform.ResourceAttrDiff{Old:"", New:"submitted=2017-09-12T14:16:36Z", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0},

Parallelism control per-resource when count > 1

Feature/Enhancement request

I'd like to see support for modifying the parallelism on a per-resource basis.

Use cases:

  1. Create a quantity of the same resource, but actions must be executed one at a time. An example of this would be chef-backend, and joining the cluster. A leader (the first) must be established, but then subsequent backend members (initially followers) must join one at a time due to the backend configuration steps going on in the background. This would avoid the need to craft a waiting mechanism for N objects (here's where count comes into play)
  2. For load considerations, sometimes more or less than the current parallelism can be handled. It would be nice to have per-resource control over this.
  3. In the case of a cluster initiator and cluster members joining (sequential) we have the possibility to control initialization versus joining, and do so in sequence in one resource versus two or more with complicated logic.
  4. Sequential phases are more tightly controlled, and doesn't affect depends_on but reduces overall line count and appearances of resource duplication
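The sequential-join case (use case 1) boils down to a bounded-concurrency apply over a counted resource. A minimal sketch of that mechanic (applyWithLimit is illustrative, not a proposed API):

```go
package main

import (
	"fmt"
	"sync"
)

// applyWithLimit runs the action for each of n instances of a counted
// resource, allowing at most `parallelism` to run at once. With
// parallelism = 1 this degenerates to the sequential cluster-join case.
func applyWithLimit(n, parallelism int, action func(i int)) {
	sem := make(chan struct{}, parallelism)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		sem <- struct{}{} // block until a slot frees up
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }()
			action(i)
		}(i)
	}
	wg.Wait()
}

func main() {
	applyWithLimit(3, 1, func(i int) { fmt.Println("member", i) })
}
```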

Cleanup, sync and commit actions for Providers

Hi!

The Provider API could use a TeardownFunc method, just like it offers a ConfigureFunc method.
Basically, some use cases need to make a final call to some API after the whole Terraform run is done: for example, when the client used to communicate with the API does not support token-based authentication but instead a user/password combination, and must make a call at the end to kill the session (some systems start to show issues if you leave too many connections open, as the timeout is quite long and the developer may not have control over that value).
There is another use case for this that I can think of at the moment: the Cobbler provider. By the end of creating a bunch of systems you need to make a call to its sync method. You can call it when you create a resource from within Terraform, but this has led to odd behaviour, as Cobbler is not thread-safe and calling this sync method several times in a short period of time makes things look funky (cobbler/cobbler#1570 & hashicorp/terraform#5969)

Something like this is what I have in mind:

schema.Provider{
  Schema: ...,
  ResourcesMap: ...,
  ConfigureFunc: func(d *schema.ResourceData) (interface{}, error) { ... },
  TeardownFunc:  func(meta interface{}) error { ... },
}

And the teardown method would receive the value previously returned by the ConfigureFunc.
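A standalone sketch of that lifecycle (the provider struct here is a toy stand-in, mirroring the hypothetical TeardownFunc above):

```go
package main

import "fmt"

// provider pairs a configure step, which builds the API client ("meta"),
// with a teardown step that receives the same meta value at the end of
// the run, e.g. to close a user/password session.
type provider struct {
	configure func() (interface{}, error)
	teardown  func(meta interface{}) error
}

func main() {
	p := provider{
		configure: func() (interface{}, error) { return "session-token", nil },
		teardown: func(meta interface{}) error {
			fmt.Println("closing", meta)
			return nil
		},
	}

	meta, err := p.configure()
	if err != nil {
		panic(err)
	}
	// ... plan/apply would run here, using meta ...
	p.teardown(meta)
}
```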

What do you guys think?

Bug in terraform/helper/resource: WaitForState loop never exits when "TARGET" state is met

It seems terraform/helper/resource/state.go has a faulty check for the target state.

I'm writing a custom Terraform provider and using the terraform/helper/resource StateChangeConf functionality to check on the progress of an asynchronous task. Even though the resource.StateRefreshFunc returns a "state" string that is equal to the Target, WaitForState() still times out without identifying that the two match.

Code extracts for the relevant functions and the relevant error are outlined below:

func resourceViprHostWaitForTask(timeout time.Duration, client *Client, hostID string, opID string) error {
    stateConf := &resource.StateChangeConf{
		Pending:    []string{"pending"},
		Target:     []string{"ready"},
		Refresh:    hostStateRefreshFunc(client, hostID, opID),
		Timeout:    timeout,
		MinTimeout: HostOperationMinTimeout,
		Delay:      HostOperationRetryDelay,
	}
	
	_, err := stateConf.WaitForState()
	if err != nil {
		return err
	}
	return nil
}
func hostStateRefreshFunc(client *Client, hostID string, opID string) resource.StateRefreshFunc {
	return func() (interface{}, string, error) {

		log.Print("[DEBUG] Refreshing host task state")
		//debug
		log.Print(fmt.Sprintf("[DEBUG] Refreshing host task state %s %s", hostID, opID))
		req, err := client.Request("GET", fmt.Sprintf("/compute/hosts/%s/tasks/%s", hostID, opID), nil)
		if err != nil {
			return nil, "", err
		}

		resp, err := client.Http.Do(req)
		if err != nil {
			return nil, "", err
		}
		defer resp.Body.Close()

		if resp.StatusCode != 200 {
			return nil, "", fmt.Errorf("Error on refreshing tasks for host %s: %s", hostID, resp.Status)
		}

		var task taskResponse
		err = json.NewDecoder(resp.Body).Decode(&task)
		if err != nil {
			return nil, "", err
		}
                //debug print to ensure that I'm returning the exact string the "target" in StateChangeConf
                //is expecting
		if task.State == "ready" {
			log.Print("[DEBUG] RETURNING READY")
		    return nil, "ready", nil
		}
		return nil, task.State, nil
	}
}

For testing purposes I'm returning the exact string that the Target expects for successful completion, but the code keeps looping, as seen in the logs here:

2017/12/27 16:14:38 [TRACE] dag/walk: vertex "provider.vipr (close)", waiting for: "vipr_host.host3 (destroy)"
2017-12-27T16:14:38.428-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:38 [DEBUG] Refreshing host task state
2017-12-27T16:14:38.428-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:38 [DEBUG] Refreshing host task state urn:storageos:Host:bbfec379-afbd-46d9-bfd8-bb02a761c253:vdc1 1b098970-8afb-4acb-b924-73ba41f890a1
2017-12-27T16:14:38.470-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:38 [DEBUG] RETURNINGREADY
2017-12-27T16:14:38.470-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:38 [TRACE] Waiting 10s before next try
2017/12/27 16:14:43 [TRACE] dag/walk: vertex "root", waiting for: "meta.count-boundary (count boundary fixup)"
2017/12/27 16:14:43 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vipr_host.host3 (destroy)"
2017/12/27 16:14:43 [TRACE] dag/walk: vertex "provider.vipr (close)", waiting for: "vipr_host.host3 (destroy)"
vipr_host.host3: Still destroying... (ID: urn:storageos:Host:bbfec379-afbd-46d9-bfd8-bb02a761c253:vdc1, 2m50s elapsed)
2017/12/27 16:14:48 [TRACE] dag/walk: vertex "provider.vipr (close)", waiting for: "vipr_host.host3 (destroy)"
2017/12/27 16:14:48 [TRACE] dag/walk: vertex "root", waiting for: "meta.count-boundary (count boundary fixup)"
2017/12/27 16:14:48 [TRACE] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "vipr_host.host3 (destroy)"
2017-12-27T16:14:48.475-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:48 [DEBUG] Refreshing host task state
2017-12-27T16:14:48.475-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:48 [DEBUG] Refreshing host task state urn:storageos:Host:bbfec379-afbd-46d9-bfd8-bb02a761c253:vdc1 1b098970-8afb-4acb-b924-73ba41f890a1
2017-12-27T16:14:48.518-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:48 [DEBUG] RETURNINGREADY
2017-12-27T16:14:48.518-0500 [DEBUG] plugin.terraform-provider-vipr: 2017/12/27 16:14:48 [TRACE] Waiting 10s before next try

As a result there is a timeout error confirming that both states are the same (the bolded portion is the error passed up from terraform/helper/resource):
vipr_host.host3: Error deleting ViPR host urn:storageos:Host:bbfec379-afbd-46d9-bfd8-bb02a761c253:vdc1: **timeout while waiting for state to become 'ready' (last state: 'ready', timeout: 3m0s)**
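The loop and the `last state: 'ready'` timeout are consistent with how `helper/resource.StateChangeConf` handles a nil result object: a refresh function that returns `nil` as its first return value is treated as "resource not found", so the `Target` comparison never runs, no matter what state string accompanies it. A minimal, self-contained sketch of that check (the names here are illustrative, not the SDK's actual identifiers):

```go
package main

import "fmt"

// waitStep mimics the per-poll decision inside StateChangeConf.WaitForState:
// when the refresh function returns a nil result object, the resource is
// counted as "not found" and the Target comparison is skipped entirely,
// regardless of the state string.
func waitStep(result interface{}, currentState, target string) string {
	if result == nil {
		return "not found: keep waiting" // target check never runs
	}
	if currentState == target {
		return "done"
	}
	return "pending: keep waiting"
}

func main() {
	// Returning (nil, "ready", nil), as the refresh func above does, loops forever:
	fmt.Println(waitStep(nil, "ready", "ready"))

	// Returning the decoded task as the first value lets "ready" match Target:
	type taskResponse struct{ State string }
	task := &taskResponse{State: "ready"}
	fmt.Println(waitStep(task, task.State, "ready"))
}
```

Under that reading, the likely fix for the refresh function above is to return the decoded task rather than `nil`, e.g. `return &task, task.State, nil`.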

Error after adding a new output variable to existing infra

Terraform Version

0.11.2

Terraform Configuration Files

main.tf
(commented part is never used)

# Specify the provider and access details
provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_vpc" "default" {
  cidr_block           = "10.2.0.0/16"
  enable_dns_hostnames = true

  tags {
    Name = "tf_test"
  }
}


# Public Subnet 1

resource "aws_subnet" "tf_test_subnet" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "10.2.0.0/24"
  map_public_ip_on_launch = true

  tags {
    Name = "tf_test_subnet"
    Tier = "web"
  }
}





resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags {
    Name = "tf_test_ig"
  }
}

resource "aws_route_table" "r" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags {
    Name = "aws_route_table"
  }
}

resource "aws_route_table_association" "a" {
  subnet_id      = "${aws_subnet.tf_test_subnet.id}"
  route_table_id = "${aws_route_table.r.id}"
}




# ELASTIC IP NEEDS TO BE MENTIONED # Now for NAT.. Count can be increased for EC2
# FOr Multiple https://github.com/hashicorp/terraform/issues/5185

resource "aws_eip" "nat" {
    count = "1"
    vpc = true
    depends_on = ["aws_internet_gateway.gw"]
}




# PRIVATE SUBNET

resource "aws_subnet" "tf_test_subnet_private" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "10.2.1.0/24"
  map_public_ip_on_launch = false

  tags {
    Name = "tf_test_subnet_private"
    Tier = "web"
  }
}

# NAT GATEWAY FOR PRIVATE SUBNET

resource "aws_nat_gateway" "gw" {
  subnet_id     = "${aws_subnet.tf_test_subnet.id}"
  allocation_id = "${aws_eip.nat.id}"
  depends_on    = ["aws_internet_gateway.gw"]

  tags {
    Name = "tf_nat_gateway"
  }
}



resource "aws_route_table" "r2" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_nat_gateway.gw.id}"
  }

  tags {
    Name = "aws_route_table_nat"
  }
}

resource "aws_route_table_association" "b" {
  subnet_id      = "${aws_subnet.tf_test_subnet_private.id}"
  route_table_id = "${aws_route_table.r2.id}"
}



# GET DATA INFO FOR multi subnets

#data "aws_vpc" "target_vpc" {
#filter = {
#    name = "tag:Name"
#    values = ["tf_test"]
#  }
#}
#data "aws_subnet_ids" "target_web_tier_subnet_ids" {
#  vpc_id = "${data.aws_vpc.target_vpc.id}"
#  tags {
#    Tier = "web"
#  }
#
#}
#data "aws_subnet" "app_tier" {
#  count = "${length(data.aws_subnet_ids.target_web_tier_subnet_ids.ids)}"
#  id = "${data.aws_subnet_ids.target_web_tier_subnet_ids.ids[count.index]}"
#}


# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "instance_sg"
  description = "Used in the terraform"
  vpc_id      = "${aws_vpc.default.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Our elb security group to access
# the ELB over HTTP
resource "aws_security_group" "elb" {
  name        = "elb_sg"
  description = "Used in the terraform"

  vpc_id = "${aws_vpc.default.id}"

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # ensure the VPC has an Internet gateway or this step will fail
  depends_on = ["aws_internet_gateway.gw"]
}



resource "aws_elb" "web" {
  name = "example-elb"

  # The same availability zone as our instance
  subnets = ["${aws_subnet.tf_test_subnet.id}"]

  security_groups = ["${aws_security_group.elb.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  # The instance is registered automatically

  instances                   = [ "${module.frontend_api.instance_ids}" ]
  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400
}


resource "aws_lb_cookie_stickiness_policy" "default" {
  name                     = "lbpolicy"
  load_balancer            = "${aws_elb.web.id}"
  lb_port                  = 80
  cookie_expiration_period = 600
}


#launch in private / public

module "frontend_api" {
  source                 = "../modules/app-hosts"
  name                   = "${var.environment}-app"
  count                  = 1
  ami                    = "ami-7f675e4f"
  instance_type          = "t2.micro"
  key_name               = "terraform_acc"
  monitoring             = true
  subnet_id              = "${aws_subnet.tf_test_subnet.id}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]
  disk_size              = 50

  #  iam_instance_profile =

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}


#launch is private

module "backend_api" {
  source                 = "../modules/app-hosts"
  name                   = "${var.environment}-cel"
  count                  = 1
  ami                    = "ami-7f675e4f"
  instance_type          = "t2.micro"
  key_name               = "terraform_acc"
  monitoring             = true
  subnet_id              = "${aws_subnet.tf_test_subnet.id}"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]
  disk_size              = 50

  #  iam_instance_profile =

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

outputs.tf

output "address" {
  value = "${aws_elb.web.dns_name}"
}


output "public_subnet_ids" {
  value = [
    "${aws_subnet.tf_test_subnet.id}"
  ]
}

output "private_subnet_ids" {
  value = [
    "${aws_subnet.tf_test_subnet_private.id}"
  ]
}

output "vpc_id" {
  
  value = "${aws_vpc.default.id}"
}

Debug Output

Crash Output

https://gist.github.com/ranvijayj/1ded35b16acbb71cbe934e8f429d16b4

along with this error during apply:

  • module.backend_api.aws_instance.instance: aws_instance.instance: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.

Expected Behavior

Actual Behavior

Steps to Reproduce

Found a reliable way to reproduce:

Add any new variable to outputs.tf, then run terraform plan and apply; it fails with the same error.

  1. terraform init
  2. terraform apply

Additional Context

References

For state management I am also using S3 and DynamoDB as backends.
I removed the S3 backend and only changed the instance count in one module, yet all of the following changes showed up for no apparent reason:

Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  ~ aws_elb.web
      instances.#:                                       "" => <computed>

  ~ aws_route_table.r2
      route.3467256586.cidr_block:                       "" => "0.0.0.0/0"
      route.3467256586.egress_only_gateway_id:           "" => ""
      route.3467256586.gateway_id:                       "" => "nat-05b3deb26eef39c8b"
      route.3467256586.instance_id:                      "" => ""
      route.3467256586.ipv6_cidr_block:                  "" => ""
      route.3467256586.nat_gateway_id:                   "" => ""
      route.3467256586.network_interface_id:             "" => ""
      route.3467256586.vpc_peering_connection_id:        "" => ""
      route.3800590742.cidr_block:                       "0.0.0.0/0" => ""
      route.3800590742.egress_only_gateway_id:           "" => ""
      route.3800590742.gateway_id:                       "" => ""
      route.3800590742.instance_id:                      "" => ""
      route.3800590742.ipv6_cidr_block:                  "" => ""
      route.3800590742.nat_gateway_id:                   "nat-05b3deb26eef39c8b" => ""
      route.3800590742.network_interface_id:             "" => ""
      route.3800590742.vpc_peering_connection_id:        "" => ""

-/+ module.backend_api.aws_instance.instance (new resource required)
      id:                                                "i-0bc6c01a9c7d59dae" => <computed> (forces new resource)
      ami:                                               "ami-7f675e4f" => "ami-7f675e4f"
      associate_public_ip_address:                       "false" => "false"
      availability_zone:                                 "us-west-2c" => <computed>
      disable_api_termination:                           "false" => "false"
      ebs_block_device.#:                                "0" => "1"
      ebs_block_device.3239300295.delete_on_termination: "" => "false" (forces new resource)
      ebs_block_device.3239300295.device_name:           "" => "/dev/sda1" (forces new resource)
      ebs_block_device.3239300295.encrypted:             "" => <computed> (forces new resource)
      ebs_block_device.3239300295.snapshot_id:           "" => <computed> (forces new resource)
      ebs_block_device.3239300295.volume_id:             "" => <computed>
      ebs_block_device.3239300295.volume_size:           "" => "50" (forces new resource)
      ebs_block_device.3239300295.volume_type:           "" => "gp2" (forces new resource)
      ebs_optimized:                                     "false" => "false"
      instance_state:                                    "running" => <computed>
      instance_type:                                     "t2.micro" => "t2.micro"
      ipv6_address_count:                                "0" => "0"
      key_name:                                          "terraform_acc" => "terraform_acc"
      monitoring:                                        "true" => "true"
      network_interface.#:                               "0" => <computed>
      network_interface_id:                              "eni-24fcd720" => <computed>
      placement_group:                                   "" => <computed>
      primary_network_interface_id:                      "eni-24fcd720" => <computed>
      private_dns:                                       "ip-10-2-0-128.us-west-2.compute.internal" => <computed>
      private_ip:                                        "10.2.0.128" => <computed>
      public_dns:                                        "" => <computed>
      public_ip:                                         "" => <computed>
      root_block_device.#:                               "1" => "0"
      root_block_device.0.delete_on_termination:         "false" => "true" (forces new resource)
      security_groups.#:                                 "0" => <computed>
      source_dest_check:                                 "true" => "true"
      subnet_id:                                         "subnet-6d2ae537" => "subnet-6d2ae537"
      tags.%:                                            "3" => "3"
      tags.Environment:                                  "dev" => "dev"
      tags.Name:                                         "dev-cel-1" => "dev-cel-1"
      tags.Terraform:                                    "true" => "true"
      tenancy:                                           "default" => "default"
      user_data:                                         "da39a3ee5e6b4b0d3255bfef95601890afd80709" => "da39a3ee5e6b4b0d3255bfef95601890afd80709"
      volume_tags.%:                                     "0" => <computed>
      vpc_security_group_ids.#:                          "1" => "1"
      vpc_security_group_ids.1952608629:                 "sg-a8c7c9d4" => "sg-a8c7c9d4"

-/+ module.frontend_api.aws_instance.instance (new resource required)
      id:                                                "i-0adbabf5e442b1535" => <computed> (forces new resource)
      ami:                                               "ami-7f675e4f" => "ami-7f675e4f"
      associate_public_ip_address:                       "false" => "false"
      availability_zone:                                 "us-west-2c" => <computed>
      disable_api_termination:                           "false" => "false"
      ebs_block_device.#:                                "0" => "1"
      ebs_block_device.3239300295.delete_on_termination: "" => "false" (forces new resource)
      ebs_block_device.3239300295.device_name:           "" => "/dev/sda1" (forces new resource)
      ebs_block_device.3239300295.encrypted:             "" => <computed> (forces new resource)
      ebs_block_device.3239300295.snapshot_id:           "" => <computed> (forces new resource)
      ebs_block_device.3239300295.volume_id:             "" => <computed>
      ebs_block_device.3239300295.volume_size:           "" => "50" (forces new resource)
      ebs_block_device.3239300295.volume_type:           "" => "gp2" (forces new resource)
      ebs_optimized:                                     "false" => "false"
      instance_state:                                    "running" => <computed>
      instance_type:                                     "t2.micro" => "t2.micro"
      ipv6_address_count:                                "0" => "0"
      key_name:                                          "terraform_acc" => "terraform_acc"
      monitoring:                                        "true" => "true"
      network_interface.#:                               "0" => <computed>
      network_interface_id:                              "eni-e2fed5e6" => <computed>
      placement_group:                                   "" => <computed>
      primary_network_interface_id:                      "eni-e2fed5e6" => <computed>
      private_dns:                                       "ip-10-2-0-6.us-west-2.compute.internal" => <computed>
      private_ip:                                        "10.2.0.6" => <computed>
      public_dns:                                        "" => <computed>
      public_ip:                                         "" => <computed>
      root_block_device.#:                               "1" => "0"
      root_block_device.0.delete_on_termination:         "false" => "true" (forces new resource)
      security_groups.#:                                 "0" => <computed>
      source_dest_check:                                 "true" => "true"
      subnet_id:                                         "subnet-6d2ae537" => "subnet-6d2ae537"
      tags.%:                                            "3" => "3"
      tags.Environment:                                  "dev" => "dev"
      tags.Name:                                         "dev-app-1" => "dev-app-1"
      tags.Terraform:                                    "true" => "true"
      tenancy:                                           "default" => "default"
      user_data:                                         "da39a3ee5e6b4b0d3255bfef95601890afd80709" => "da39a3ee5e6b4b0d3255bfef95601890afd80709"
      volume_tags.%:                                     "0" => <computed>
      vpc_security_group_ids.#:                          "1" => "1"
      vpc_security_group_ids.1952608629:                 "sg-a8c7c9d4" => "sg-a8c7c9d4"
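The `-/+` replacement of both instances appears to be driven by the `ebs_block_device` entries for `/dev/sda1`: AWS reports an instance's root volume back as `root_block_device`, so a module that declares the root disk via `ebs_block_device` sees a perpetual `"0" => "1"` diff that forces a new resource on every plan. A hedged sketch of the relevant part of the app-hosts module using `root_block_device` instead (the variable names are assumed from the module calls above, not taken from the actual module source):

```
resource "aws_instance" "instance" {
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  key_name               = "${var.key_name}"
  subnet_id              = "${var.subnet_id}"
  vpc_security_group_ids = ["${var.vpc_security_group_ids}"]

  # Declare the root disk (/dev/sda1) as root_block_device, not
  # ebs_block_device, so the refreshed state matches the configuration
  # and the plan stops forcing a replacement.
  root_block_device {
    volume_type           = "gp2"
    volume_size           = "${var.disk_size}"
    delete_on_termination = false
  }
}
```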
