terraform-provider-chef's Introduction

Terraform Provider

Requirements

  • Terraform 0.10.x
  • Go 1.11 (to build the provider plugin)

Building The Provider

Clone repository to: $GOPATH/src/github.com/terraform-providers/terraform-provider-chef

$ mkdir -p $GOPATH/src/github.com/terraform-providers; cd $GOPATH/src/github.com/terraform-providers
$ git clone git@github.com:terraform-providers/terraform-provider-chef

Enter the provider directory and build the provider

$ cd $GOPATH/src/github.com/terraform-providers/terraform-provider-chef
$ make build

Using the provider

Fill in for each provider
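
The placeholder above comes from the provider README template. For reference, a minimal provider block follows the same shape as the configurations quoted in the issues below (server URL, client name, and key file are illustrative values):

provider "chef" {
  # Illustrative values; point these at your own Chef server and client
  server_url   = "https://api.chef.io/organizations/example-org/"
  client_name  = "terraform"
  key_material = "${file("chef-terraform.pem")}"
}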

Developing the Provider

If you wish to work on the provider, you'll first need Go installed on your machine (version 1.11+ is required). You'll also need to correctly setup a GOPATH, as well as adding $GOPATH/bin to your $PATH.

To compile the provider, run make build. This will build the provider and put the provider binary in the $GOPATH/bin directory.

$ make build
...
$ $GOPATH/bin/terraform-provider-chef
...

In order to test the provider, you can simply run make test.

$ make test

In order to run the full suite of Acceptance tests, run make testacc.

Note: Acceptance tests create real resources, and often cost money to run.

$ make testacc
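
If the repository follows the common terraform-providers Makefile layout, a single acceptance test can usually be targeted via the TESTARGS variable (a sketch under that assumption; TestAccRole is an illustrative test name):

$ make testacc TESTARGS='-run=TestAccRole'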

terraform-provider-chef's People

Contributors

apparentlymart, appilon, gechr, grubernaut, jen20, juliandunn, kmoe, mitchellh, nicolai86, radeksimko, rata, sethvargo, stack72, tombuildsstuff


terraform-provider-chef's Issues

chef_data_bag and chef_data_bag_item do not support secret key

This issue was originally opened by @oridistor as hashicorp/terraform#11086. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.8.2

Affected Resource(s)

chef_data_bag
chef_data_bag_item

Moving from bash code using knife to Terraform code, I've noticed that chef_data_bag does not support the --secret feature that knife supports.
Because of that, I can't encrypt my keys and can't securely move them to the Chef server.
It would be nice to have a "secret" variable for data bags and data bag items that would act like knife data bag --secret.
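
A hypothetical configuration for the requested behavior might look like the following; the secret argument does not exist in the provider and is shown only to illustrate the proposal:

resource "chef_data_bag_item" "example" {
  data_bag_name = "passwords"
  content_json  = "${file("passwords.json")}"

  # Hypothetical argument mirroring knife's --secret option
  secret = "${file("encrypted_data_bag_secret")}"
}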

Unable to create a Chef resource using the Chef provider when the Chef server is using a self-signed certificate.

This issue was originally opened by @sbobylev as hashicorp/terraform#18916. It was migrated here as a result of the provider split. The original body of the issue is below.


Unable to create a role in AWS OpsWorks for Chef Automate using Terraform and the Chef provider. Since OpsWorks uses a self-signed certificate, terraform apply fails.

Terraform Version

Terraform v0.11.8
+ provider.chef v0.1.0

Terraform Configuration Files

backend.tf

provider "chef" {
  server_url = "https://test-xgibsgi18eldm7wa.us-east-2.opsworks-cm.io/organizations/default"
  client_name  = "terraform"
  key_material = "${file("chef-terraform.pem")}"
}

test_chef_role.tf

resource "chef_role" "test" {
  name     = "test-role"
}

Crash Output

terraform apply -auto-approve

chef_role.test: Creating...
  default_attributes_json:  "" => "{}"
  description:              "" => "Managed by Terraform"
  name:                     "" => "test-role"
  override_attributes_json: "" => "{}"

Error: Error applying plan:

1 error(s) occurred:

* chef_role.test: 1 error(s) occurred:

* chef_role.test: Post https://test-xgibsgi18eldm7wa.us-east-2.opsworks-cm.io/organizations/roles: x509: certificate signed by unknown authority

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Expected Behavior

A new chef resource gets created.

Actual Behavior

Terraform apply fails.

Steps to Reproduce

  1. terraform init
  2. terraform apply -auto-approve

Workaround

Set allow_unverified_ssl to true in the backend.tf file.

provider "chef" {
  server_url = "https://test-xgibsgi18eldm7wa.us-east-2.opsworks-cm.io/organizations/default"
  client_name  = "terraform"
  key_material = "${file("chef-terraform.pem")}"
  allow_unverified_ssl = true
}

Feature Request

Add support for ssl_ca_file option
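
A sketch of the requested option in the provider block; ssl_ca_file is hypothetical here and not an existing argument:

provider "chef" {
  server_url   = "https://test-xgibsgi18eldm7wa.us-east-2.opsworks-cm.io/organizations/default"
  client_name  = "terraform"
  key_material = "${file("chef-terraform.pem")}"

  # Hypothetical argument: trust this CA instead of disabling verification
  ssl_ca_file = "opsworks-ca.pem"
}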

chef_environment resource always modified

This issue was originally opened by @mengesb as hashicorp/terraform#13696. It was migrated here as part of the provider split. The original body of the issue is below.


I believe this started with the 0.8.x series, though it's hard to recall. I had initially written this off as configuration drift, but on nearly every invocation of Terraform I see a diff on the chef_environment resource. While I can't see anything actually different, Terraform always computes a diff, so my destroy provisioner executes. Even when I run subsequent applies back to back, it detects a diff, re-uploads the environment, and fires the triggered resource.

Terraform Version

0.9.2

Affected Resource(s)

  • chef_environment

Terraform Configuration Files

# Chef provider settings
provider "chef" {
  server_url      = "https://${var.chef["server"]}${length(var.chef["org"]) > 0 ? "/organizations/${var.chef["org"]}" : ""}"
  client_name     = "${var.chef["user"]}"
  key_material    = "${file("${var.chef["key"]}")}"
}
# environment template
data "template_file" "cheffy_env" {
  depends_on = ["aws_efs_mount_target.model_store"]
  template   = "${file("${path.module}/files/chef-environment.tpl")}"

  vars = {
    ....
  }
}
resource "chef_environment" "cheffy" {
  name                    = "${var.environment}"
  description             = "REDACTED Environment"
  default_attributes_json = "${data.template_file.cheffy_env.rendered}"

  cookbook_constraints {
    ....
  }
}
resource "null_resource" "cheffy_env" {
  depends_on = ["chef_environment.cheffy"]
  triggers {
    attributes_json = "${chef_environment.cheffy.default_attributes_json}"
  }
  provisioner "local-exec" {
    command = "[ -d .chef ] || mkdir -p .chef ; echo Directory .chef exists"
  }
  provisioner "local-exec" {
    command = "[ -f .chef/${var.environment}.json ] && rm -f .chef/${var.environment}.json ; echo Environment file purged"
  }
  provisioner "local-exec" {
    command = "knife environment show ${var.environment} -F json > .chef/${var.environment}.json"
  }
  provisioner "local-exec" {
    when    = "destroy"
    command = "rm -rf .chef"
  }
}

Debug Output

A small snipped segment of the log shows the instance diff is nil, which is where I think the problem lies.

2017/04/16 14:53:09 [DEBUG] dag/walk: walking "chef_environment.cheffy"
2017/04/16 14:53:09 [DEBUG] vertex 'root.chef_environment.cheffy': walking
2017/04/16 14:53:09 [DEBUG] vertex 'root.chef_environment.cheffy': evaluating
2017/04/16 14:53:09 [TRACE] [walkApply] Entering eval tree: chef_environment.cheffy
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalSequence
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalInstanceInfo
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalReadDiff
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalIf
2017/04/16 14:53:09 [DEBUG] root: eval: terraform.EvalNoop
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalIf
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalInterpolate
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalGetProvider
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalReadState
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalValidateResource
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalDiff
2017/04/16 14:53:09 [DEBUG] plugin: terraform: chef-provider (internal) 2017/04/16 14:53:09 [DEBUG] Instance Diff is nil in Diff()
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalReadDiff
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalCompareDiff
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalGetProvider
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalReadState
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalApplyPre
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalApply
2017/04/16 14:53:09 [DEBUG] apply: chef_environment.cheffy: diff is empty, doing nothing.
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalWriteState
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalApplyProvisioners
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalIf
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalWriteState
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalWriteDiff
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalApplyPost
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalUpdateStateHook
chef_environment.cheffy: Modifying... (ID: kesha)
2017/04/16 14:53:09 [DEBUG] dag/walk: walking "meta.count-boundary (count boundary fixup)"
2017/04/16 14:53:09 [DEBUG] vertex 'root.meta.count-boundary (count boundary fixup)': walking
2017/04/16 14:53:09 [DEBUG] vertex 'root.meta.count-boundary (count boundary fixup)': evaluating
2017/04/16 14:53:09 [DEBUG] root: eval: *terraform.EvalCountFixZeroOneBoundaryGlobal
chef_environment.cheffy: Modifications complete (ID: kesha)

Panic Output

None

Expected Behavior

Subsequent runs where no environment attributes change result in no diff, and no action, and no trigger firing

Actual Behavior

Nearly always, there's a computed diff and thus the trigger fires

Steps to Reproduce

  1. terraform apply
  2. terraform apply

Important Factoids

Nothing comes to mind

References

None

Chef provider: add content_file for all resources

This issue was originally opened by @partamonov as hashicorp/terraform#4581. It was migrated here as part of the provider split. The original body of the issue is below.


With the current implementation, all roles/data bag items/environments must be specified inside .tf files.

This means that to use the provider you have to move all roles/data bag items/environments into .tf files, but in many cases these are stored in repositories, and a better approach (or an additional option) would be to read them from a file via the file() interpolation function.

  • chef_data_bag_item -> content_file "file('path/to/data_bag_item.json')"
  • chef_environment -> content_file, should discard all other attributes
  • chef_role -> content_file, should discard all other attributes

Also, for chef_data_bag_item we can use template_file to get the content, but it would be better to add this as a documentation example.
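
A hypothetical use of the proposed attribute (content_file does not exist in the provider; this only illustrates the request):

resource "chef_role" "example" {
  # Hypothetical: read the entire role definition from a repository file
  content_file = "${file("roles/example.json")}"
}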

Chef provisioner fails when using "for_each" and "each" for resources

Hi,

There seems to be an issue with the chef provisioner when generating resources using "for_each" (new in Terraform 0.12.6). Whenever the "each" variable is used within the chef provisioner block of a resource to dynamically fill out attributes (e.g. "node_name"), terraform fails during the apply phase (after the VM instance is created but not yet provisioned) with the following error:

Error: 2 problems:

- Reference to "each" in context without for_each: The "each" object can be used only in "resource" blocks, and only when the "for_each" argument is set.
- Reference to "each" in context without for_each: The "each" object can be used only in "resource" blocks, and only when the "for_each" argument is set.

Terraform Version

Terraform v0.12.6

  • provider.chef v0.2.0

Affected Resource(s)

azurerm_virtual_machine

Terraform Configuration Files

# Provisioner block from the azurerm_virtual_machine
# A list of server variables (name, size, etc.) are provided as a map to the "for_each" attribute
# Creating VM resources without the provisioner block works correctly
provisioner "chef" {
    node_name = each.key
    server_url = "https://chef.XXX.io/organizations/XXX/"
    user_key = var.chef_provisioner_private_key
    user_name = var.chef_admin_username
    channel = "stable"
    version = "15.1.36"
    client_options = ["chef_license 'accept'"]
    attributes_json = jsonencode({"cache_db_name" = join("_", ["pgcache", each.value.region, each.value.tenant])})
    recreate_client = true
    secret_key = var.chef_provisioner_databag_key
    ssl_verify_mode = ":verify_none"
    connection {
      host = azurerm_public_ip.pgsqlcache-ip[each.key].fqdn
      private_key = var.ssh_admin_private_key
      type = "ssh"
      user = var.ssh_admin_username
    }
    run_list = ["postgresql"]
  }

Expected Behavior

One or more VMs should be created and provisioning applied to all of them, using the "each" variable to provide individual VM details.

Actual Behavior

The apply runs and VM instances are created, but no provisioning occurs. The apply ends with:

Error: 2 problems:

- Reference to "each" in context without for_each: The "each" object can be used only in "resource" blocks, and only when the "for_each" argument is set.
- Reference to "each" in context without for_each: The "each" object can be used only in "resource" blocks, and only when the "for_each" argument is set.

Steps to Reproduce

  1. terraform apply

References

hashicorp/terraform#17179

Enable terraform 0.12

Terraform Version

0.12.2

Affected Resource(s)

  • provider "chef"

Terraform Configuration Files

# Configure the Chef provider
provider "chef" {
  server_url = "https://api.chef.io/organizations/MYORG/"

  # You can set up a "Client" within the Chef Server management console.
  client_name  = "terraform"
  key_material = "${file("chef-terraform.pem")}"
}

# Create a Chef Environment
resource "chef_environment" "tf_production" {
  name = "tf_production"
}

# Create a Chef Role
resource "chef_role" "app_server" {
  name = "app_server"

  run_list = [
    "recipe[terraform]",
  ]
}

Expected Behavior

$ terraform init
Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.chef: version = "~> 0.1"

Terraform has been successfully initialized!
...

Actual Behavior

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...

No available provider "chef" plugins are compatible with this Terraform version.

From time to time, new Terraform major releases can change the requirements for
plugins such that older plugins become incompatible.

Terraform checked all of the plugin versions matching the given constraint:
    (any version)

Unfortunately, none of the suitable versions are compatible with this version
of Terraform. If you have recently upgraded Terraform, it may be necessary to
move to a newer major release of this provider. Alternatively, if you are
attempting to upgrade the provider to a new major version you may need to
also upgrade Terraform to support the new version.

Consult the documentation for this provider for more information on
compatibility between provider versions and Terraform versions.


Error: no available version is compatible with this version of Terraform

Steps to Reproduce

  1. terraform init

Important Factoids

If I run terraform init on terraform v0.10.8, it will install the chef plugin. Subsequent runs of terraform init on terraform v0.12.2 appear to pick up the installed plugin. I have not tested whether the plugin is actually compatible yet, though.

Chef knife-acl provider to manage RBAC

This issue was originally opened by @spuder as hashicorp/terraform#4682. It was migrated here as part of the provider split. The original body of the issue is below.


The new Chef provider is great. It would be fantastic if it could also manage the state of ACLs.
Enterprise Chef users are expected to use the knife-acl plugin.

https://github.com/chef/knife-acl

For example, to create a new user 'foo', assign them to a group 'bar', and give them the ability to modify environments, roles, and data bags:

knife client create foo
knife group create bar
knife group add client foo bar

# Environments
knife acl add group bar containers environments create,read,update
knife acl bulk add group bar environments '.*' create,read,update --yes

# Roles
knife acl add group bar containers roles create,read,update
knife acl bulk add bar admin-clients roles '.*' create,read,update --yes

# Databags
knife acl add group bar containers data create,read,update
knife acl bulk add group bar data '.*' create,read,update --yes
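
A Terraform equivalent of the knife workflow above might hypothetically look like this; none of these resource types exist in the provider today, and the names are only a sketch of the request:

# Hypothetical resources mirroring knife-acl
resource "chef_group" "bar" {
  name    = "bar"
  clients = ["foo"]
}

resource "chef_acl" "bar_environments" {
  group       = "bar"
  container   = "environments"
  permissions = ["create", "read", "update"]
}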

Chef provider: proxy settings not honored

This issue was originally opened by @gopisaba as hashicorp/terraform#5898. It was migrated here as part of the provider split. The original body of the issue is below.


Hi Team,

I am using Terraform to build AWS and Chef infrastructure. On company laptops we have strict security policies and use proxies to connect to the internet. We have set environment variables like http_proxy and https_proxy to connect to the AWS and Chef servers.

The problem is that Terraform is able to connect to AWS but not to the Chef server.

`module.production.chef_environment.environment: Refreshing state... (ID: production)
Error refreshing state: 1 error(s) occurred:

  • chef_environment.environment: Get https://api.chef.io/organizations/xxxxx/environments/production: dial tcp 52.20.79.149:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time or established connection failed bec
    ause connected host has failed to respond.`

I am able to connect to the Chef server via the knife command and the GUI, but not through Terraform. It looks like the terraform-chef provider does not honor http_proxy and https_proxy the way the terraform-aws provider does. I was able to replicate the same problem on different machines as well.

Feature request: Chef Vault resources

Hi,

It would be great to have resources for Chef vaults in Terraform. Our current use case is that we have all configuration, including secrets, in Chef Vault, but for ECS we need certain passwords etc. in AWS Secrets Manager. Of course we don't want to duplicate the secrets, and we also need to keep them in sync when passwords/keys are rotated.

Terraform Configuration Files

A possible terraform code could look like this

data "chef_vault" "prod_environment" {
  vault = "secrets"
  item  = "production"
}

resource "aws_secretsmanager_secret_version" "example" {
  secret_id     = aws_secretsmanager_secret.example.id
  secret_string = data.chef_vault.prod_environment.json["database"]["password"]
}

Missing Delete Action for Chef Environment Provider

This issue was originally opened by @rb1whitney as hashicorp/terraform#11805. It was migrated here as part of the provider split. The original body of the issue is below.


I am requesting that the missing delete function be added for chef_environment. Per the code in https://github.com/hashicorp/terraform/blob/master/vendor/github.com/go-chef/chef/environment.go, a delete function is mentioned. Is it possible to have this missing feature implemented? When you provision a chef_environment with Chef nodes, the environment sticks around and causes issues if it is re-created. I was able to monkey-patch this locally by copying the logic from client.go and renaming client to environment (to hit the right Chef REST API), and would like to see it fixed in future versions if possible.

Terraform Version

0.8.6

Affected Resource(s)

  • chef_environment

Expected Behavior

Chef Environment should be able to delete resource when calling terraform destroy

Actual Behavior

Nothing happens since chef_environment is missing a delete function

Steps to Reproduce

  1. terraform apply
  2. terraform destroy

References

None

chef_url requires a trailing slash

When the configured chef_url value does not contain a trailing slash, the organization value is not included in requests.

For example:

provider "chef" {
  chef_url = "https://chef.example.org/organizations/someorganization"
  <...>
}
Error applying plan:

1 error(s) occurred:

* module.test.chef_environment.name_prefix: 1 error(s) occurred:

* chef_environment.foobar: POST https://chef.example.org/organizations/environments: 405

Terraform Version

0.10.6

Chef Provider Version

0.1.0

Expected Behavior

The current documentation shows a trailing slash in its examples, but the argument reference does not note that it is required. I would expect the trailing slash to be optional.

Actual Behavior

If a trailing slash is not included, the Chef server returns a 405 error, because the organization value is left out of requests.
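
Until the trailing slash becomes optional, the failure can be avoided by always including it, as the documentation examples do:

provider "chef" {
  # Note the trailing slash after the organization name
  chef_url = "https://chef.example.org/organizations/someorganization/"
  <...>
}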

Will `chef-client` run every time `terraform apply` happens?

Hi There,

I have a question:

I am using the chef {} provisioner to provision EC2 nodes once they are built with the aws {} provider. I was wondering: after a successful build, if I make changes to the Chef attributes and re-run terraform apply, will those changes be pulled down to the nodes from the Chef server?
Note that chef-client is not running as a daemon on the nodes. So I was wondering whether the chef {} provisioner kicks off chef-client every time terraform apply happens.

Please advise.

Chef Provisioner - Invalid Private Key

This issue was originally opened by @BMonsalvatge as hashicorp/terraform#18461. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.7

Terraform Configuration Files

resource "aws_instance" "bastion_server" {
  ami                    = "ami-5cc39523"
  instance_type          = "t2.micro"
  subnet_id              = "${data.terraform_remote_state.core.public_subnets[0]}"
  vpc_security_group_ids = ["${module.bastion_sg.this_security_group_id}", "${data.terraform_remote_state.core.test_sg}"]
  key_name               = "${data.terraform_remote_state.core.central_key_pair}"

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file(var.provisioner_key)}"
    agent       = false
  }

  provisioner "chef" {
    environment     = "utility"
    run_list        = ["cookbook::bastion"]
    node_name       = "${aws_instance.bastion_server.tags.Name}"
    server_url      = "https://chef.server.com/organizations/orgname/"
    user_name       = "${var.chef_user}"
    user_key        = "${var.chef_key}"
    ssl_verify_mode = ":verify_peer"
    version         = "${var.chef_version}"
    recreate_client = true
  }
  tags {
    Name         = "bastion_server"
  }
}

Expected Behavior

Chef should have bootstrapped the node.

Actual Behavior

aws_instance.bastion_server (chef): Preparing to unpack .../chef_14.3.37-1_amd64.deb ...
aws_instance.bastion_server (chef): Unpacking chef (14.3.37-1) ...
aws_instance.bastion_server: Still creating... (40s elapsed)
aws_instance.bastion_server (chef): Setting up chef (14.3.37-1) ...
aws_instance.bastion_server (chef): Thank you for installing Chef!
aws_instance.bastion_server (chef): Creating configuration files...
aws_instance.bastion_server (chef): Generate the private key...
aws_instance.bastion_server: Still creating... (50s elapsed)
aws_instance.bastion_server (chef): Cleanup user key...
aws_instance.bastion_server (chef): ERROR: Chef::Exceptions::InvalidPrivateKey: The file /etc/chef/validator.pem or :raw_key option does not contain a correctly formatted private key or the key is encrypted.
aws_instance.bastion_server (chef): The key file should begin with '-----BEGIN RSA PRIVATE KEY-----' and end with '-----END RSA PRIVATE KEY-----'
Releasing state lock. This may take a few moments...

Additionally on the server the contents of /etc/chef/ are the following:

-rw-------  1 root root  192 Jul 15 04:44 client.rb
-rw-------  1 root root   37 Jul 15 04:44 first-boot.json

contents of client.rb are:

log_location            STDOUT
chef_server_url         "https://chef.server.com/organizations/orgname/"
node_name               "bastion_server

ssl_verify_mode  :verify_peer

If I add the correct key to /etc/chef/validator.pem & edit the client.rb file to look like the following, sudo chef-client works and connects to the chef server:

log_location            STDOUT
chef_server_url         "https://chef.server.com/organizations/orgname/"
node_name               "bastion_server

validation_client_name   'validator'
validation_key		'/etc/chef/validator.pem'

ssl_verify_mode  :verify_peer

Steps to Reproduce

terraform init
terraform apply


Implement data source for pulling data from chef server

Being able to easily use data stored on a Chef server in Terraform would be very useful.

I envision something you would call like this:

data "chef_search" "rabbitmq" {
  index = "node"
  query = "chef_environment:test AND role:rabbitmq"
  filter {
    name   = "rabbitmq"
    values = ["rabbitmq"]
  }

  filter {
    name   = "host"
    values = ["ipaddress"]
  }
  unique = true
}

data.chef_search.rabbitmq.rabbitmq.default_password and data.chef_search.rabbitmq.host would then be examples of attributes to use.

This is similar to how you would use search in a chef recipe:
https://docs.chef.io/chef_search.html#filter-search-results

One gotcha is that a Chef search normally returns a list of results, while a Terraform data source (as far as I understand) isn't allowed to do that. That is why I introduced "unique" above, which means a unique result is expected (and an error is raised otherwise). Without unique = true, the result would instead contain a list attribute "rows" with all the results.

chef_role should support env_run_list

Affected Resource(s)

  • chef_role

Expected Behavior

environment-specific run lists should be supported via the chef_role resource

Actual Behavior

only default run_lists are supported via the chef_role resource
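
A sketch of what env_run_list support might look like on the resource; the env_run_list block is hypothetical:

resource "chef_role" "app_server" {
  name     = "app_server"
  run_list = ["recipe[terraform]"]

  # Hypothetical block mirroring Chef's env_run_lists
  env_run_list {
    environment = "production"
    run_list    = ["recipe[terraform::production]"]
  }
}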

Return more useful error information when the Chef role already exists on the Chef server

This issue was originally opened by @binlialfie as hashicorp/terraform#18163. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.7

Terraform Configuration Files

resource "chef_role" "HLN-testing-tf" {
 
  name     = "${var.proj_CompName}-${var.env}-tf"
  description = "${var.proj_CompName}-core role for terraform/AMI instances."
  run_list = [
    "recipe[aaa::bbb]" 
  ]

  override_attributes_json = <<EOF
      {
      }
EOF
}

Debug Output

terraform plan
terraform apply

Error: Error applying plan:

1 error(s) occurred:

  • chef_role.HLN-testing-tf: 1 error(s) occurred:

  • chef_role.HLN-testing-tf: POST https://chef_url/organizations/xxx/roles: 409

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error

Actual Behavior

I spent a lot of time trying to debug a JSON syntax error, but the apply actually failed because the role already exists on the Chef server.

Is there any chance of reporting a "role already exists" error rather than just throwing a 409?

resource chef_data_bag doesn't support import

Terraform Version

terraform -v
Terraform v0.11.2
+ provider.aws v1.19.0
+ provider.chef v0.1.0
+ provider.datadog v1.4.0
+ provider.rabbitmq v1.0.0
+ provider.template v1.0.0

It would be awesome if the Chef provider could support importing existing resources.
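
If import were supported, usage would presumably follow the standard pattern (a sketch only; chef_data_bag.example and example-bag are illustrative names):

$ terraform import chef_data_bag.example example-bag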

Cheers,
Gunter

chef provisioner data bag secret file name

This issue was originally opened by @mrinella-gp as hashicorp/terraform#15854. It was migrated here as a result of the provider split. The original body of the issue is below.


This is a minor thing, but I'm coming to Terraform with an existing Chef-managed infrastructure and existing data bag keys, none of which were named "encrypted_data_bag_secret". Maybe I missed a doc, but it seems you can only place the key uploaded by Terraform on the instance as /etc/chef/encrypted_data_bag_secret. The ability to tell Terraform to name that key file something of my choosing would have been beneficial, although we were able to work around this.

Thanks,
Matt

[PROPOSAL] Switch to Go Modules

As part of the preparation for Terraform v0.12, we would like to migrate all providers to use Go Modules. We plan to continue checking dependencies into vendor/ to remain compatible with existing tooling/CI for a period of time; however, Go Modules will be used for dependency management. Go Modules is the official dependency-management solution for the Go programming language. We understand some providers might not want this change yet, but we encourage providers to begin looking toward the switch, as this is how we will be managing all Go projects in the future.

Would maintainers please react with 👍 for support, or 👎 if you wish to have this provider omitted from the first wave of pull requests. If your provider is in support, we ask that you avoid merging any pull requests that mutate the dependencies while the Go Modules PR is open (in fact a total code freeze would be even more helpful); otherwise we will need to close that PR and re-run go mod init. Once merged, dependencies can be added or updated as follows:

$ GO111MODULE=on go get github.com/some/module@master
$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

GO111MODULE=on might be unnecessary depending on your environment. This example fetches a module at master and records it in your project's go.mod and go.sum files. It's a good idea to tidy up afterward and then copy the dependencies into vendor/. To remove dependencies from your project, simply remove all usage from your codebase and run:

$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

Thank you sincerely for all your time, contributions, and cooperation!

Chef_node import missing

This issue was originally opened by @apetitbois as hashicorp/terraform#13813. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.9.4-dev

Affected Resource(s)

chef_node

Expected Behavior

I'd like to be able to manage chef_node after an import

Actual Behavior

Import is currently not supported

Steps to Reproduce

terraform import chef_node.test testnode
Error importing: 1 error(s) occurred:

  • chef_node.test (import id: testnode): import chef_node.test (id: testnode): resource chef_node doesn't support import

Code to implement, in terraform/builtin/chef/resource_node.go after line 17:

Importer: &schema.ResourceImporter{
    State: schema.ImportStatePassthrough,
},
