
terraform-provider-shell's People

Contributors

derhally, harskogr, hgontijo, juergenhoetzel, lawrencegripper, millmakerjm, pacon-vib, rucciva, scottwinkler, sfc-gh-swinkler, stuartleeks


terraform-provider-shell's Issues

Unable to get any output :-(

I have problems getting anything working here, including the most basic of the examples in the docs:

2020-03-11T08:12:18.171+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] Locking "shellScriptMutexKey"
2020-03-11T08:12:18.171+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] Locked "shellScriptMutexKey"
2020-03-11T08:12:18.171+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] shell script command old state: "&{[] map[]}"
2020-03-11T08:12:18.171+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] shell script going to execute: /bin/sh -c "cd . && echo "{"user":
"$(whoami)"}" >&3
2020-03-11T08:12:18.171+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: "
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] Command execution completed. Reading from output pipe: >&3
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] shell script command stdout: ""
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] shell script command stderr: ""
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] shell script command output: ""
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] Unable to unmarshall data to map[string]string: 'unexpected end of
JSON input'
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] State from read operation was nil. Marking resource for deletion.
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] Unlocking "shellScriptMutexKey"
2020-03-11T08:12:18.214+0100 [DEBUG] plugin.terraform-provider-shell_v0.1.4: 2020/03/11 08:12:18 [DEBUG] Unlocked "shellScriptMutexKey"

I simply get no output whatsoever:

$ terraform state show data.shell_script.user

data.shell_script.user:

data "shell_script" "user" {
}
$
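For what it's worth, the `shell script going to execute` line in the debug log hints at a quoting problem: in `echo "{"user": "$(whoami)"}"` the inner double quotes terminate the outer ones, so the braces and quotes are stripped before the command runs and no valid JSON reaches the output pipe. A minimal sketch of a command that keeps the JSON intact (printf and the variable name are just illustrative):

```shell
# Build the JSON with printf so the inner double quotes survive; in the
# data source's read command this would be redirected to fd 3 (>&3).
out=$(printf '{"user": "%s"}' "$(whoami)")
echo "$out"
```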

Is it possible to use a returned value in delete

I use the create script to create some object. The object is created and assigned an ID in the target system. The script outputs some information about the object, including its internal ID. I would like to use the ID of the created object as the reference for the delete script.

Is there a way to do this?

In fact, I assumed that the object information would be passed, in one way or another, to the delete and update scripts. It seems that it's not.
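As the documentation-clarification issue further down this page notes, the provider does pipe the current state JSON to the lifecycle commands on stdin, so a delete script can recover the stored ID from there. A rough sketch, assuming the create/read scripts emitted an `id` key (the sed extraction is a jq-free stand-in):

```shell
# In a real delete command the state would arrive on stdin: state=$(cat)
# Here it is inlined for illustration.
state='{"name": "example", "id": "abc-123"}'
id=$(printf '%s' "$state" | sed -n 's/.*"id":[[:space:]]*"\([^"]*\)".*/\1/p')
echo "deleting object $id" >&2   # call the real API here
```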

Not usable without TF_LOG=TRACE

I wonder if anyone has figured out how to use this as-is and still get error messages to show up without TF_LOG=TRACE. Even DEBUG is not good enough. Whatever happens in the script is simply hidden, so each time something goes wrong you need to rerun with TF_LOG=TRACE; and since nothing ever works on the first try, I end up always running with TF_LOG=TRACE, which is kind of annoying.
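Not a fix, but a workaround sketch: have each lifecycle script copy its own stderr to a log file, so failures can be inspected without rerunning under TF_LOG=TRACE. The helper name and log path here are made up for illustration:

```shell
# Wrap a command so its stderr is both persisted to a log file and still
# forwarded to the provider. LOGFILE and run_logged are hypothetical names.
LOGFILE=${LOGFILE:-/tmp/tf-shell.log}
run_logged() {
  _tmp=$(mktemp)
  "$@" 2>"$_tmp"
  _status=$?
  cat "$_tmp" >>"$LOGFILE"   # keep a copy that survives any TF_LOG level
  cat "$_tmp" >&2            # still forward stderr to the provider
  rm -f "$_tmp"
  return $_status
}
run_logged sh -c 'echo "something failed" >&2'
```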

Update fails but state gets updated

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # shell_script.github_repository will be created
  + resource "shell_script" "github_repository" {
      + dirty             = false
      + environment       = {
          + "DESCRIPTION" = "description"
          + "NAME"        = "hello world"
        }
      + id                = (known after apply)
      + output            = (known after apply)
      + working_directory = "."

      + lifecycle_commands {
          + create = "echo '{\"name\":\"'$NAME'\", \"description\": \"'$DESCRIPTION'\", \"map\":{\"test\":true}}' > tmp"
          + delete = "rm tmp"
          + read   = "cat tmp"
          + update = <<~EOT
                [ -f previous ]  && exit 1
                cat > previous
                echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'", "map":{"test":true}}' > tmp
            EOT
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

shell_script.github_repository: Creating...
shell_script.github_repository: Creation complete after 0s [id=bub34js82sathf6hqlug]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$ terraform apply
shell_script.github_repository: Refreshing state... [id=bub34js82sathf6hqlug]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # shell_script.github_repository will be updated in-place
  ~ resource "shell_script" "github_repository" {
        dirty             = false
      ~ environment       = {
            "DESCRIPTION" = "description"
          ~ "NAME"        = "hello world" -> "hello world 1"
        }
        id                = "bub34js82sathf6hqlug"
        output            = {
            "description" = "description"
            "map"         = jsonencode(
                {
                    test = true
                }
            )
            "name"        = "hello world"
        }
        working_directory = "."

        lifecycle_commands {
            create = "echo '{\"name\":\"'$NAME'\", \"description\": \"'$DESCRIPTION'\", \"map\":{\"test\":true}}' > tmp"
            delete = "rm tmp"
            read   = "cat tmp"
            update = <<~EOT
                [ -f previous ]  && exit 1
                cat > previous
                echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'", "map":{"test":true}}' > tmp
            EOT
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

shell_script.github_repository: Modifying... [id=bub34js82sathf6hqlug]
shell_script.github_repository: Modifications complete after 0s [id=bub34js82sathf6hqlug]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
$ terraform apply
shell_script.github_repository: Refreshing state... [id=bub34js82sathf6hqlug]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # shell_script.github_repository will be updated in-place
  ~ resource "shell_script" "github_repository" {
        dirty             = false
      ~ environment       = {
            "DESCRIPTION" = "description"
          ~ "NAME"        = "hello world 1" -> "hello world 2"
        }
        id                = "bub34js82sathf6hqlug"
        output            = {
            "description" = "description"
            "map"         = jsonencode(
                {
                    test = true
                }
            )
            "name"        = "hello world 1"
        }
        working_directory = "."

        lifecycle_commands {
            create = "echo '{\"name\":\"'$NAME'\", \"description\": \"'$DESCRIPTION'\", \"map\":{\"test\":true}}' > tmp"
            delete = "rm tmp"
            read   = "cat tmp"
            update = <<~EOT
                [ -f previous ]  && exit 1
                cat > previous
                echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'", "map":{"test":true}}' > tmp
            EOT
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

shell_script.github_repository: Modifying... [id=bub34js82sathf6hqlug]

Error: Error occured during shell execution.
Error: 
exit status 1

Command: 
[ -f previous ]  && exit 1
cat > previous
echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'", "map":{"test":true}}' > tmp

StdOut: 


StdErr: 


Env: 
[NAME=hello world 2 DESCRIPTION=description]

StdIn: 
'{"description":"description","map":"{\"test\":true}","name":"hello world 1"}'


  on main.tf line 1, in resource "shell_script" "github_repository":
   1: resource "shell_script" "github_repository" {
$ terraform apply
shell_script.github_repository: Refreshing state... [id=bub34js82sathf6hqlug]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

P.S.: I'm also in the middle of developing a custom provider and noticed this behavior. I don't know whether this is considered a bug in Terraform or not. A workaround I use is to manually reset the *ResourceData using the value obtained from GetChange.

Optional read not working, legacy plugin warnings

First of all, thanks for building this. It's a great idea for a utility that should have been included in Terraform itself from the start.

However, I'm having issues with getting the documented behaviour of leaving out read in the lifecycle commands of my resource. That is, when I leave out read expecting to use the other commands to output the state, the state from those other commands is ignored. I have tried this both with and without the update command present, and it doesn't seem to make a difference.

I thought maybe I was doing something wrong on my end, but then I saw this warning in the terraform output:

2019/10/16 09:49:23 [WARN] Provider "shell" produced an unexpected new value for shell_script.test, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .lifecycle_commands[0].read: was null, but now cty.StringVal("")
      - .lifecycle_commands[0].update: was null, but now cty.StringVal("")

It seems that, due to changes in the plugin SDK, optional lifecycle commands are no longer handled correctly. That is, maybe the plugin expects them to be missing (null), but Terraform is coercing them to an empty string? Or vice versa?

My Terraform version is v0.12.6. I haven't yet found any documentation (or CI/CD) showing which Terraform versions this plugin is supposed to support: it isn't in the README, and CI doesn't seem to test against Terraform from what I could tell.

JSON Node Traversal - get attribute

Currently, complex JSON is flattened into an outputs map. It would be nice to also have a way to easily get nested JSON values at the top level. I propose adding a new output value called get to do this (potentially deprecating the output attribute). For example, given the following JSON:

{
  "last_state_loss_time": 0,
  "spark_version": "5.3.x-scala2.11",
  "azure_attributes": [8, 17, 20 ],
  "state": "PENDING",
  "enable_elastic_disk": true,
  "init_scripts_safe_mode": false,
  "num_workers": 1,
  "driver_node_type_id": "Standard_D3_v2",
  "default_tags": {
    "Creator": "[email protected]",
    "ClusterName": "my-cluster",
    "ClusterId": "0327-174802-howdy690",
    "Vendor": "Databricks"
  }
} 

The ClusterName attribute could be read with the following expression:
shell_script.test.get["default_tags.ClusterName"].asString

The get object has the following attributes. It will try to cast a value into as many fields as possible; if it cannot set a field, it will use nil, or perhaps the default zero value if nil does not work.

get = {
  asString = string
  asNumber = number
  asBool = bool
  asList = list(string)
  asMap = map(string)
}

To get a value from a List you could do:
tonumber(shell_script.test.get["azure_attributes"].asList[1])

Alternatively, you can refer to individual elements with the [] accessor. For example:

shell_script.test.get["azure_attributes.[1]"].asNumber returns the number 17

Suggestion: Use CustomizeDiff and ForceNew if Update is empty

Calling Delete -> Create when Update is empty results in the Delete operation being run with the new state instead of the old state. For example, in the following script, Delete will run rm /tmp/shell/folder/test1 instead of rm /tmp/shell/folder/test (which surprisingly does not surface an error, even though the script fails because the file does not exist) when environment.FILE is updated.

locals {
  path = "/tmp/shell/folder"  # change this
}

resource "null_resource" "directory"{
  triggers = {
    path = local.path
  }
  provisioner "local-exec" {
    command = "mkdir -p ${self.triggers.path}"
  }
  provisioner "local-exec" {
    when = destroy
    command = "rm -rf ${self.triggers.path}"
  }
}

resource "shell_script" "github_repository" {
  lifecycle_commands {
    create = "touch $FILE"
    read   = "echo '{\"name\":\"'\"$FILE\"'\"}'"
    delete = "rm $FILE"
  }
  
  environment = {
    FILE = "${null_resource.directory.triggers["path"]}/test1"
  }

}
$ ls /tmp/shell
ls: /tmp/shell: No such file or directory
$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.directory will be created
  + resource "null_resource" "directory" {
      + id       = (known after apply)
      + triggers = {
          + "path" = "/tmp/shell/folder"
        }
    }

  # shell_script.github_repository will be created
  + resource "shell_script" "github_repository" {
      + dirty             = false
      + environment       = {
          + "FILE" = "/tmp/shell/folder/test"
        }
      + id                = (known after apply)
      + output            = (known after apply)
      + working_directory = "."

      + lifecycle_commands {
          + create = "touch $FILE"
          + delete = "rm $FILE"
          + read   = "echo '{\"name\":\"'\"$FILE\"'\"}'"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.directory: Creating...
null_resource.directory: Provisioning with 'local-exec'...
null_resource.directory (local-exec): Executing: ["/bin/sh" "-c" "mkdir -p /tmp/shell/folder"]
null_resource.directory: Creation complete after 0s [id=8706134334622958825]
shell_script.github_repository: Creating...
shell_script.github_repository: Creation complete after 0s [id=buchmouvvhfkg3qaorj0]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
$ ls  /tmp/shell/folder/
test
$ terraform apply
null_resource.directory: Refreshing state... [id=8706134334622958825]
shell_script.github_repository: Refreshing state... [id=buchmouvvhfkg3qaorj0]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # shell_script.github_repository will be updated in-place
  ~ resource "shell_script" "github_repository" {
        dirty             = false
      ~ environment       = {
          ~ "FILE" = "/tmp/shell/folder/test" -> "/tmp/shell/folder/test1"
        }
        id                = "buchmouvvhfkg3qaorj0"
        output            = {
            "name" = "/tmp/shell/folder/test"
        }
        working_directory = "."

        lifecycle_commands {
            create = "touch $FILE"
            delete = "rm $FILE"
            read   = "echo '{\"name\":\"'\"$FILE\"'\"}'"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

shell_script.github_repository: Modifying... [id=buchmouvvhfkg3qaorj0]
shell_script.github_repository: Modifications complete after 0s [id=buchmv6vvhfkgjj4kq20]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
$ ls  /tmp/shell/folder/
test    test1

I suggest using a CustomizeDiff function like this:

CustomizeDiff: func(c context.Context, rd *schema.ResourceDiff, i interface{}) (err error) {
    if rd.Id() == "" {
        return
    }
    if _, ok := rd.GetOk("lifecycle_commands.0.update"); ok {
        return
    }
    for _, key := range rd.GetChangedKeysPrefix("") {
        if strings.HasPrefix(key, "triggers") {
            continue // already force new
        }

        // need to remove index from map and list
        switch {
        case strings.HasPrefix(key, "environment"):
            key = strings.Split(key, ".")[0]
        case strings.HasPrefix(key, "sensitive_environment"):
            key = strings.Split(key, ".")[0]
        case strings.HasPrefix(key, "interpreter"):
            key = strings.Split(key, ".")[0]
        }

        err = rd.ForceNew(key)
        if err != nil {
            return
        }
    }
    return
},

License

Hi! May I ask the license of this provider?

Failed to install provider if it's used in a sub-module

Hello,

I'm using Terraform 0.13.5 and am unable to use this provider inside a submodule. If I place this code at the root level (i.e. in /main.tf), it works:


data "shell_script" "administrator_role" {
    lifecycle_commands {
        read = <<-EOF
            marina --account-alias=xxx  iam-find-roles --pattern yyy
        EOF
    }

    # depends_on = [ aws_ssoadmin_account_assignment.administrators_group ]
}

If I put it anywhere in a submodule, for example inside module.team.module.sso, then I get this:

✗  terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Using previously-installed cyrilgdn/postgresql v1.11.2
- Using previously-installed hashicorp/tls v3.0.0
- Using previously-installed hashicorp/aws v3.28.0
- Using previously-installed hashicorp/local v2.0.0
- Using previously-installed scottwinkler/shell v1.7.7
- Using previously-installed hashicorp/null v3.0.0
- Using previously-installed hashicorp/random v2.3.1
- Using previously-installed hashicorp/kubernetes v1.13.3
- Using previously-installed hashicorp/github v4.3.1
- Finding latest version of hashicorp/shell...

Error: Failed to install provider

Error while installing hashicorp/shell: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/shell


✗  terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/github]
├── provider[registry.terraform.io/scottwinkler/shell] 1.7.7
├── provider[registry.terraform.io/hashicorp/aws] ~> 3.28.0
├── provider[registry.terraform.io/hashicorp/null] ~> 3.0.0
├── provider[registry.terraform.io/hashicorp/kubernetes] ~> 1.13
├── provider[registry.terraform.io/hashicorp/random] ~> 2.1
├── provider[registry.terraform.io/cyrilgdn/postgresql] 1.11.2
├── module.team
│   ├── provider[registry.terraform.io/hashicorp/github]
│   ├── provider[registry.terraform.io/hashicorp/random]
│   ├── provider[registry.terraform.io/hashicorp/aws]
│   ├── module.github_repository_tools
│   ├── module.sso
│       ├── provider[registry.terraform.io/hashicorp/aws]
│       └── provider[registry.terraform.io/hashicorp/shell]  < REFERENCED as "hashicorp/shell" instead of "scottwinkler/shell"
│   ├── module.github_repository_boilerplate_django
│   └── module.github_repository_boilerplate_laravel
└── module.config

Providers required by state:

    provider[registry.terraform.io/hashicorp/aws]

    provider[registry.terraform.io/hashicorp/github]

    provider[registry.terraform.io/hashicorp/local]

    provider[registry.terraform.io/hashicorp/null]

    provider[registry.terraform.io/hashicorp/random]

    provider[registry.terraform.io/hashicorp/tls]


#############################################################
# TERRAFORM
#############################################################
terraform {
  required_version = ">= 0.13.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.28.0"
    }
    null       = "~> 3.0.0"
    kubernetes = "~> 1.13"
    random     = "~> 2.1"

    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "1.11.2"
    }

    shell = {
      source = "scottwinkler/shell"
      version = "1.7.7"
    }
  }
}

#############################################################
# Shell Provider
#############################################################

provider "shell" {}
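A likely explanation, given the `Finding latest version of hashicorp/shell...` line above: from Terraform 0.13 on, provider source addresses are resolved per module, so a child module with no required_providers entry for shell falls back to the default hashicorp/shell address. A minimal sketch of the fix, assuming the submodule lives at modules/sso (the file name is illustrative):

```hcl
# Hypothetical modules/sso/versions.tf — each module that uses the shell
# provider needs its own required_providers entry; otherwise Terraform
# 0.13+ assumes the hashicorp/ namespace.
terraform {
  required_providers {
    shell = {
      source  = "scottwinkler/shell"
      version = "1.7.7"
    }
  }
}
```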


Documentation clarification

First and foremost, thank you so much for this provider!
The behavior of the provider of sending the existing state JSON as stdin to the spawned shell is neither explicitly nor implicitly mentioned in the provider's documentation. It took looking at examples (both in this repo and another) to realize what is happening. I'm not sure if this is an artifact of the larger Terraform provider architecture or specific to the shell provider (I don't really know the Go language), but it's an especially useful bit of knowledge for the shell provider in particular, so I think it should be called out explicitly in the provider's documentation.
Thanks,
--Brett

Improve web documentation

Now that the provider is on the provider registry, the docs need to be updated to be cleaner and more complete.

Provider crashes when script returns null as value in JSON state object

Thank you for this great provider!

When I run some command-line tools that return JSON objects in which some fields are sometimes null, the null causes the shell provider to panic.

Here's a minimal Terraform config that will cause a panic:

resource "shell_script" "demo" {
  lifecycle_commands {
    create = "echo '{\"baz\": null}' >&3"
    delete = "true"
  }
}

Here's an extract from crash.log; I can submit more if need be:

2020-02-18T10:23:58.730+1100 [DEBUG] plugin.terraform-provider-shell_v0.1.3: panic: interface conversion: interface {} is nil, not string
2020-02-18T10:23:58.730+1100 [DEBUG] plugin.terraform-provider-shell_v0.1.3: 
2020-02-18T10:23:58.730+1100 [DEBUG] plugin.terraform-provider-shell_v0.1.3: goroutine 131 [running]:
2020-02-18T10:23:58.730+1100 [DEBUG] plugin.terraform-provider-shell_v0.1.3: github.com/scottwinkler/terraform-provider-shell/shell.parseJSON(0xc000290000, 0x2000, 0x2000, 0x1, 0x1, 0x2000)
2020-02-18T10:23:58.730+1100 [DEBUG] plugin.terraform-provider-shell_v0.1.3: 	/Users/swinkler/go/src/github.com/scottwinkler/terraform-provider-shell/shell/utility.go:53 +0x2b7

Shell scripts are only supposed to return maps of strings to strings in the state, so returning null is naughty, but I would like to propose changing the behaviour to turn those nulls into empty strings, instead of panicking.

For now I'm raising an issue but my colleague is working on a pull request as well.
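Until that lands, one workaround sketch is to normalize the nulls in the script itself before the state is emitted (assumes jq is on the PATH; the inline JSON stands in for a real tool's output):

```shell
# Coerce null values to empty strings so the provider's map[string]string
# expectation holds; in a lifecycle command the result would go to fd 3.
echo '{"baz": null, "ok": "yes"}' | jq -c 'map_values(if . == null then "" else . end)'
```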

Documentation should stipulate terraform 0.13 or above

You specify 0.12 or above in the README, but your provider declaration uses the new Terraform 0.13 syntax, so it would be good if the requirements matched, or if it were made clear what the Terraform 0.12 syntax is.

As it stands, I followed the README exactly, command for command, and couldn't get the provider recognised with Terraform 0.12; with Terraform 0.13 it was fine.

Clarification on update lifecycle

hi! thanks for making this provider. Just have a bit of question.

I've specified the update lifecycle command, but when I change a value in environment, sensitive_environment, or triggers, Terraform wants to destroy and create a new resource, as such:

$ terraform apply
shell_script.github_repository: Refreshing state... [id=bu7d4dr3gcl4s8t7ks70]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # shell_script.github_repository must be replaced
-/+ resource "shell_script" "github_repository" {
        dirty                 = false
      ~ environment           = { # forces replacement
          ~ "DESCRIPTION" = "description" -> "description1"
            "NAME"        = "hello-world"
        }
      ~ id                    = "bu7d4dr3gcl4s8t7ks70" -> (known after apply)
      ~ output                = {
          - "description" = "description"
          - "name"        = "hello-world"
        } -> (known after apply)
        sensitive_environment = (sensitive value)
        triggers              = {
            "when_value_changed" = "test"
        }
        working_directory     = "."

        lifecycle_commands {
            create = "echo '{\"name\":\"'$NAME'\", \"description\": \"'$DESCRIPTION'\"}' > tmp"
            delete = "rm tmp"
            read   = "cat tmp"
            update = <<~EOT
                cat > previous
                echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'"}' > tmp
            EOT
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Changes are indeed triggered, but only if the actual state is changing or if there is any change in the script itself, as such:

$ terraform apply   # after manually modifying "tmp" files
shell_script.github_repository: Refreshing state... [id=bu7d4dr3gcl4s8t7ks70]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # shell_script.github_repository will be updated in-place
  ~ resource "shell_script" "github_repository" {
      ~ dirty                 = true -> false
        environment           = {
            "DESCRIPTION" = "description"
            "NAME"        = "hello-world"
        }
        id                    = "bu7d4dr3gcl4s8t7ks70"
        output                = {
            "description" = "description"
            "name"        = "hello-world"
        }
        sensitive_environment = (sensitive value)
        triggers              = {
            "when_value_changed" = "test"
        }
        working_directory     = "."

        lifecycle_commands {
            create = "echo '{\"name\":\"'$NAME'\", \"description\": \"'$DESCRIPTION'\"}' > tmp"
            delete = "rm tmp"
            read   = "cat tmp"
            update = <<~EOT
                cat > previous
                echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'"}' > tmp
            EOT
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
$ terraform apply
shell_script.github_repository: Refreshing state... [id=bu7d4dr3gcl4s8t7ks70]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # shell_script.github_repository will be updated in-place
  ~ resource "shell_script" "github_repository" {
        dirty                 = false
        environment           = {
            "DESCRIPTION" = "description"
            "NAME"        = "hello-world"
        }
        id                    = "bu7d4dr3gcl4s8t7ks70"
        output                = {
            "description" = "description"
            "name"        = "hello-world"
        }
        sensitive_environment = (sensitive value)
        triggers              = {
            "when_value_changed" = "test"
        }
        working_directory     = "."

      ~ lifecycle_commands {
            create = "echo '{\"name\":\"'$NAME'\", \"description\": \"'$DESCRIPTION'\"}' > tmp"
          ~ delete = <<~EOT
                rm tmp
              + echo
            EOT
            read   = "cat tmp"
            update = <<~EOT
                cat > previous
                echo '{"name":"'$NAME'", "description": "'$DESCRIPTION'"}' > tmp
            EOT
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Is this expected? If so, what is the recommended way to pass or update metadata for the script (such as connection information) without triggering the destroy lifecycle (which might actually destroy unrecoverable data)?

JSON string detection picks nested structures, so state is missing data

Hi,

First up, awesome work on the provider; it's super useful!

I've got a scenario where the read command returns a complex object like the one below. It looks like the JSON string detection picks the last valid JSON string, so you end up with just {"spark.speculation": "true"} in the state file.

Not sure if this is me doing something wrong, will carry on playing.

{
  "last_state_loss_time": 0,
  "spark_version": "5.3.x-scala2.11",
  "azure_attributes": {},
  "state": "PENDING",
  "enable_elastic_disk": true,
  "init_scripts_safe_mode": false,
  "num_workers": 1,
  "driver_node_type_id": "Standard_D3_v2",
  "default_tags": {
    "Creator": "[email protected]",
    "ClusterName": "my-cluster",
    "ClusterId": "0327-174802-howdy690",
    "Vendor": "Databricks"
  },
  "creator_user_name": "[email protected]",
  "cluster_id": "0327-174802-howdy690",
  "cluster_name": "my-cluster",
  "node_type_id": "Standard_D3_v2",
  "state_message": "Finding instances for new nodes, acquiring more instances if necessary",
  "enable_local_disk_encryption": false,
  "autotermination_minutes": 0,
  "cluster_source": "API",
  "start_time": 1585331282783,
  "spark_conf": {
    "spark.speculation": "true"
  }
}

Full logs

DEBUG] shell script going to execute: /bin/sh -c
2020-03-27T17:48:04.198Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:04    pwsh ./scripts/read.ps1
2020-03-27T17:48:04.198Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:04 -------------------------
2020-03-27T17:48:04.198Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:04 [DEBUG] Starting execution...
2020-03-27T17:48:04.198Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:04 -------------------------
2020-03-27T17:48:05.331Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:05   {   "last_state_loss_time": 0,   "spark_version": "5.3.x-scala2.11",   "azure_attributes": {},   "state": "PENDING",   "enable_elastic_disk": true,   "init_scripts_safe_mode": false,   "num_workers": 1,   "driver_node_type_id": "Standard_D3_v2",   "default_tags": {     "Creator": "[email protected]",     "ClusterName": "my-cluster",     "ClusterId": "0327-174802-howdy690",     "Vendor": "Databricks"   },   "creator_user_name": "[email protected]",   "cluster_id": "0327-174802-howdy690",   "cluster_name": "my-cluster",   "node_type_id": "Standard_D3_v2",   "state_message": "Finding instances for new nodes, acquiring more instances if necessary",   "enable_local_disk_encryption": false,   "autotermination_minutes": 0,   "cluster_source": "API",   "start_time": 1585331282783,   "spark_conf": {     "spark.speculation": "true"   } }
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:05 -------------------------
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:05 [DEBUG] Command execution completed:
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:05 -------------------------
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:05 [DEBUG] JSON strings found: 
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell: [{   "last_state_loss_time": 0,   "spark_version": "5.3.x-scala2.11",   "azure_attributes": {} {     "Creator": "[email protected]",     "ClusterName": "my-cluster",     "ClusterId": "0327-174802-howdy690",     "Vendor": "Databricks"   } {     "spark.speculation": "true"   }]
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell: 2020/03/27 17:48:05 [DEBUG] Valid map[string]string:
2020-03-27T17:48:05.337Z [DEBUG] plugin.terraform-provider-shell:  map[spark.speculation:true]

State file

"instances": [
        {
          "schema_version": 0,
          "attributes": {
            "dirty": false,
            "environment": {
              "DATABRICKS_HOST": "https://REMOVED.azuredatabricks.net",
              "DATABRICKS_TOKEN": "REMOVED",
              "machine_sku": "Standard_D3_v2",
              "worker_nodes": "8"
            },
            "id": "bpv43aok1sip412ndec0",
            "lifecycle_commands": [
              {
                "create": "pwsh ./scripts/create.ps1",
                "delete": "pwsh ./scripts/delete.ps1",
                "read": "pwsh ./scripts/read.ps1",
                "update": "pwsh ./scripts/update.ps1"
              }
            ],
            "output": {
              "spark.speculation": "true"
            },
            "triggers": null,
            "working_directory": "."
          },
          "private": "bnVsbA=="
        }
      ]

Error: "changes to `lifecycle_commands` and/or `interpreter` should not be follwed by changes to other arguments"

When I tried to run terraform plan, I received this error:

changes to `lifecycle_commands` and/or `interpreter` should not be follwed by changes to other arguments

I'm not sure what it means. I'm using a shell resource like so:

# modules/foo/main.tf
resource "shell_script" "this" {
  lifecycle_commands {
    create = file("${path.module}/create.sh")
    delete = ""
    read   = ""
    update = file("${path.module}/update.sh")
  }

  environment = {
    NAME = "${var.name}"
  }
}

# modules/foo/variables.tf
variable "name" {
  type = string
}

# a_bunch_of_foos.tf
module "foo-1" {
  source = "./modules/foo"
  name   = "bar"
}

module "foo-2" {
  source = "./modules/foo"
  name   = "baz"
}

module "foo-3" {
  source = "./modules/foo"
  name   = "quux"
}

Inconsistent plan when introducing an object depending on the output attribute while updating it

Consider this initial resource:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # shell_script.script will be created
  + resource "shell_script" "script" {
      + dirty             = false
      + environment       = {
          + "CONTENT"      = jsonencode(
                {
                  + test = true
                }
            )
          + "CONTENT_PATH" = "./test-shell-provider"
        }
      + id                = (known after apply)
      + interpreter       = [
          + "/bin/bash",
          + "-c",
        ]
      + output            = (known after apply)
      + working_directory = "."

      + lifecycle_commands {
          + create = "echo -n $CONTENT > $CONTENT_PATH"
          + delete = "rm $CONTENT_PATH"
          + read   = "cat $CONTENT_PATH"
          + update = "echo -n $CONTENT > $CONTENT_PATH"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

shell_script.script: Creating...
shell_script.script: Creation complete after 0s [id=bun0t2i3k1k6j83s7tqg]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

If we introduce an object that depends on the output value while at the same time updating the shell_script resource in a way that only affects the output value after apply, it produces an inconsistent plan:

$ terraform apply
shell_script.script: Refreshing state... [id=bun0t2i3k1k6j83s7tqg]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

  # null_resource.shell_output will be created
  + resource "null_resource" "shell_output" {
      + id       = (known after apply)
      + triggers = {
          + "output" = "true"
        }
    }

  # shell_script.script will be updated in-place
  ~ resource "shell_script" "script" {
        dirty             = false
      ~ environment       = {
          ~ "CONTENT"      = jsonencode(
              ~ {
                  ~ test = true -> false
                }
            )
            "CONTENT_PATH" = "./test-shell-provider"
        }
        id                = "bun0t2i3k1k6j83s7tqg"
        interpreter       = [
            "/bin/bash",
            "-c",
        ]
        output            = {
            "test" = "true"
        }
        working_directory = "."

        lifecycle_commands {
            create = "echo -n $CONTENT > $CONTENT_PATH"
            delete = "rm $CONTENT_PATH"
            read   = "cat $CONTENT_PATH"
            update = "echo -n $CONTENT > $CONTENT_PATH"
        }
    }

Plan: 1 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

shell_script.script: Modifying... [id=bun0t2i3k1k6j83s7tqg]
shell_script.script: Modifications complete after 0s [id=bun0t2i3k1k6j83s7tqg]

Error: Provider produced inconsistent final plan

When expanding the plan for null_resource.shell_output to include new values
learned so far during apply, provider "registry.terraform.io/hashicorp/null"
produced an invalid new value for .triggers["output"]: was
cty.StringVal("true"), but now cty.StringVal("false").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Though this error is retriable, and the next terraform apply produces no issue:

$ terraform apply
shell_script.script: Refreshing state... [id=bun0t2i3k1k6j83s7tqg]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.shell_output will be created
  + resource "null_resource" "shell_output" {
      + id       = (known after apply)
      + triggers = {
          + "output" = "false"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.shell_output: Creating...
null_resource.shell_output: Creation complete after 0s [id=5647154692069527402]

This is fixable if we invoke SetNewComputed whenever the input or the read commands change (and we also need to run the read commands instead of executing no commands when the resource is updated). But this will always trigger recreation/update of dependent objects even though, after apply, the output was not modified.

What do you think? Should this be considered a bug?

BUG: `dirty` field doesn't default to `false` on `shell_script` resource create

Terraform and Provider Version

Terraform: 0.13.4
scottwinkler/shell: 1.7.3

Affected Resource

shell_script

Expected Behavior

The dirty property should default to false when creating the resource, as stated in the documentation.

Actual Behavior

After running terraform apply && terraform plan, if the dirty property is not explicitly set in the Terraform configuration, the terraform plan is not empty.

(screenshot of the non-empty plan omitted)

Update resource instead of destroy/create if environment changed

The shell provider updates the object if the lifecycle_commands section is changed, but forcibly replaces the object if the environment or sensitive_environment section is changed.
Thus the target object is destroyed and recreated if, for example, credentials passed via sensitive_environment are changed.
That is unacceptable in most cases.

Could you fix it or at least add an option to the provider configuration?

Is release 1.7.6 tested under Terraform 0.13.x?

I'm getting an error on running update.sh that there is no StdIn when I am running with terraform 0.13.5:

TestPluginsProviderShell 2020-11-17T23:00:49Z logger.go:66: StdIn: 
TestPluginsProviderShell 2020-11-17T23:00:49Z logger.go:66: '{}'
TestPluginsProviderShell 2020-11-17T23:00:49Z logger.go:66: 
TestPluginsProviderShell 2020-11-17T23:00:49Z logger.go:66: 

create/read/delete seem to work fine, but update is getting no StdIn.

I have not tested whether update works under 0.12.x.

Output not showing as a field of shell_script

So I have a resource:

resource "shell_script" "fa_log" {
  lifecycle_commands {
    create = file("${path.module}/scripts/create_custom_log.sh")
    update = file("${path.module}/scripts/update_custom_log.sh")
    delete = file("${path.module}/scripts/delete_custom_log.sh")
  }

  environment = {
    LOG_GROUP_ID       = fa_log_group.id
    LOG_RETENTION_DAYS = var.log_retention_days
  }
}

And when I later try to collect the output of fa_log above, it tells me output is not an option.

I tried another shell_script, and the last line doesn't let me access shell_script.fa_log.output:

resource "shell_script" "agent_configuration" {
  lifecycle_commands {
    create = file("${path.module}/scripts/create_agent_config.sh")
    delete = file("${path.module}/scripts/delete_agent_config.sh")
  }

  environment = {
    LOG_ID = jsondecode(shell_script.fa_log.output.data).resources[0].identifier
  }
}

Any ideas why?

Should not use cached lifecycle commands for read or otherwise

Greetings!

I believe I stumbled upon a bug. I changed the lifecycle commands on a shell_script resource (I was moving the script around) and couldn't even plan after that, because the read would fail - I had to terraform state rm the resource to get going again.

If this is intended behavior (I can't see a reason - which doesn't mean there isn't one), then maybe it should behave as a delete+create (recreate) - but that feels like a tough thing to do when read fails.

Long string returned by the create command causes the provider to hang forever

A ~150 KB string returned by the script via >&3 will force the provider to hang forever.

Reproduce:
test.sh

#!/bin/sh

TEST="Very very very long string, longer than about 65 KB"

jq -n  --arg test "$TEST" '{"test":$test}' >&3

provider:

resource "shell_script" "test" {
  lifecycle_commands {
    create = file("${path.root}/test.sh")
    delete = "echo {}"
  }
}
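The TEST value above is only a placeholder. A concrete repro (my own sketch, not from the report) that actually exceeds the typical 64 KiB OS pipe buffer — the likely cause of the hang if the provider does not drain the pipe while waiting on the process — could be:

```shell
#!/bin/sh
# For a standalone run, wire fd 3 to a file; under the provider, fd 3
# is already open.
exec 3>/tmp/shell_provider_out.json
# Generate a ~150 KB value: long enough to overflow the usual 64 KiB
# pipe buffer, so the write blocks unless the reader keeps draining.
TEST=$(head -c 150000 /dev/zero | tr '\0' 'x')
printf '{"test":"%s"}' "$TEST" >&3
```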

Provider configuration

Thanks for your time and effort in creating this provider. It works great. I was wondering if it would make sense to have the provider take a map and, in turn, set those values as environment variables when the scripts execute. This would allow certain values to be transient and not saved in state, for example an API token which you don't want stored in state and which needs to be refreshed on every execution.

Right now, if an API token is passed in via the sensitive_environment section, it gets stored in state; but the token can expire, and that causes errors when the resource needs to be deleted, because the token is read from state and not refreshed.
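Until something like that exists, one workaround is to fetch the short-lived token inside each lifecycle script rather than passing it through state. A minimal sketch — fetch_token is a stand-in for a real auth call, not part of the provider:

```shell
#!/bin/sh
# Hypothetical delete script: obtain a fresh token at execution time so
# nothing expiring is ever read back from Terraform state.
fetch_token() {
  # stand-in for a real call, e.g. an OAuth exchange or vault login
  echo "fresh-token-$(date +%s)"
}
API_TOKEN=$(fetch_token)
# ... use $API_TOKEN to delete the remote resource ...
echo "{}"   # empty state output
```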

Option to show stdout or stderr in Terraform output

As part of my lifecycle scripts I'd like to have the ability to print messages that will actually be shown to the user running Terraform.

For example, when running kubectl apply on a set of manifests I'd like to be able to show the individual resources getting created / being unchanged / etc.

Currently, I can inspect the output of the lifecycle commands using TF_LOG=debug, but that also prints a bunch of other verbose messages from Terraform and from the plugin that I don't want to see.

I can understand that some people want the output of their commands to be hidden when not in debug mode, but I'd appreciate a way to present curated output. Maybe the provider could be modified to always print the contents of either stdout or stderr (while hiding the other)? Or, to keep backwards compatibility, maybe you could open a new file descriptor for writing to the Terraform output (4?).
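From a script's point of view, that fd 4 proposal might look like the sketch below. This is purely illustrative — the provider does not implement it, and the exec line exists only to make the example runnable standalone:

```shell
#!/bin/sh
# Hypothetical convention: fd 3 carries state JSON (as today), fd 4 carries
# curated output that the provider would echo to the Terraform UI.
exec 3>/tmp/state.json 4>/tmp/user_facing.log   # stand-ins for provider pipes
echo "deployment.apps/my-app unchanged" >&4     # shown to the user
echo '{"status":"applied"}' >&3                 # consumed by the provider
```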


Somewhat separately, but somewhat related (let me know if you'd prefer to discuss this in a different ticket): when a lifecycle command fails, I only see the nonzero exit status printed by Terraform, but it would be nice to also see further information for troubleshooting purposes, like the stdout/stderr of the program, and possibly also the state it received as input.

Windows, Double Quotes, Interpreter and Curl

Hello,

Thanks for the provider. I'd like to see if there is a way around the escaping of double quotes. When I run a curl command under Windows (cmd interpreter), the interpreter escapes double quotes during script evaluation and confuses curl. I suspect this isn't the provider's doing, but I'm looking for guidance in case anyone has experienced the same and got around it.

Example .bat file contains:

curl --header "Authorization:Bearer %TOKEN%"

However, the following wants to execute after evaluation:

curl --header "Authorization:Bearer %TOKEN% " ...

which causes curl to become confused on how to view the command (at least on windows).

I have a workaround where I put the full header (double quotes and all) in the environment/secured-environment block and then use it in my script. Example:

environment = {
  TOKEN = "\"Authorization: Bearer ...\""
}

Then in script:

curl --header %TOKEN%

which expands the entire header argument properly, without escaping double quotes, when it's finally evaluated by the command interpreter.

Anyway... Looking for ideas.

Thanks!

Alejandro

Ignore diff on lifecycle_commands

Even though ForceNew has been removed, it still doesn't help, because it will use the old read command. Put something in to ignore diffs for this attribute.

Add resource address to error message

When having multiple shell_script resources sharing the same name (i.e. in a module instantiated several times), it's hard to know which one is yielding an error, because Terraform displays the error at the end without context.

Current

Error: Error occurred during execution.
 Command: './redacted read' 
 Error: 'exit status 127' 
 StdOut: 
  
 StdErr: 
 /bin/sh: 1: ./redacted: not found

One can guess it is the last shell_script resource to be read in that case.

I imagine that with parallelism turned on, such guessing becomes even harder.

Suggested

Example:

Error: Error occurred during execution.
Resource: module.mymodule.shell_script.mything
 Command: './redacted read' 
 Error: 'exit status 127' 
 StdOut: 
  
 StdErr: 
 /bin/sh: 1: ./redacted: not found

Issue: Output is null

I'm using v1.0.0 of the provider with terraform 0.12.24 and I keep getting an error. This happens both on a linux executor on CircleCI and locally on my Mac using macOS 10.14.

I've created a script to generate a Kafka keystore/truststore, but I keep getting this error when I try to reference the output of the script:

Error: Attempt to index null value

  on infrastructure/gcp/modules/common/kafka-config/config.tf line 10, in resource "kubernetes_secret" "credentials":
  10:     keystore                 = replace(shell_script.create_kafka_stores.output["base64_encoded_keystore"], "\n", "")
    |----------------
    | shell_script.create_kafka_stores.output is null

This value is null, so it does not have any indices.

Here is my resource

resource "shell_script" "create_kafka_stores" {
  lifecycle_commands {
    create = file("${path.module}/create_stores.sh")
    delete = "echo {}"
  }

  environment = {
    ID                        = random_integer.id.result
    KAFKA_PRIVATE_KEY         = ovo_kafka_user.service_user.access_key
    KAFKA_CLIENT_CERTIFICATE  = ovo_kafka_user.service_user.access_cert
    KEYSTORE_PASSWORD         = random_string.ssl_password.result
    TRUSTSTORE_PASSWORD       = random_string.ssl_password.result
    KAFKA_SERVICE_CERTIFICATE = var.kafka_ca_certificate
  }
}

and here is the script I'm using

#!/bin/sh

# Exit if any of the intermediate steps fail
set -e

echo "$KAFKA_SERVICE_CERTIFICATE" > /tmp/ca.txt

echo "$KAFKA_PRIVATE_KEY" > /tmp/compositefile-$ID.txt
echo "$KAFKA_CLIENT_CERTIFICATE" >> /tmp/compositefile-$ID.txt
openssl pkcs12 -export -in /tmp/compositefile-$ID.txt -out /tmp/keyStore-$ID.p12 -password pass:$KEYSTORE_PASSWORD

#`python -m base64 -d` base64 decoding to be able to run on Mac and Linux the same way
echo "$KAFKA_SERVICE_CERTIFICATE" | python -m base64 -d | keytool -keystore /tmp/trustStore-$ID.jks -alias CARoot -import -storepass $TRUSTSTORE_PASSWORD -noprompt

KEYSTORE_BINARY=$(base64 /tmp/keyStore-$ID.p12)
TRUSTSTORE_BINARY=$(base64 /tmp/trustStore-$ID.jks)

jq -n --arg keyStoreBinary "$KEYSTORE_BINARY" --arg trustStoreBinary "$TRUSTSTORE_BINARY" '{"base64_encoded_keystore":$keyStoreBinary, "base64_encoded_truststore":$trustStoreBinary}' >&1

Is there something different I should be doing to get the outputs from the resource? The kubernetes_secret in which I'm referencing the output also has a depends_on block depending on the shell resource:

depends_on = [shell_script.create_kafka_stores]
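One thing worth checking (a hypothetical sanity check, independent of this report): the provider parses JSON out of the command's output, so any tool that prints after the final jq line can break parsing. Running the script standalone and inspecting the last stdout line helps rule that out; simulate_create below stands in for the real create script:

```shell
#!/bin/sh
# Simulate a create script whose tools print banners, then verify the
# final stdout line is the JSON object the provider should pick up.
simulate_create() {
  echo "Certificate was added to keystore"     # tool noise
  printf '{"base64_encoded_keystore":"QUJD"}\n'
}
last=$(simulate_create | tail -n 1)
case "$last" in
  "{"*"}") echo "last line looks like JSON: $last" ;;
  *)       echo "no JSON on last line" >&2; exit 1 ;;
esac
```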

Update command fails, state updated anyway

Another interesting issue I've run into that I can't seem to root-cause. I have a failure in my update command, and this is caught by Terraform during terraform apply. The apply fails and raises the error, but re-running terraform apply shows no changes pending. Inspecting terraform.tfstate reveals that the changes were persisted to state, even though the update command failed.

I think what should happen on failure is that the changes should not be persisted to state, so that the next terraform apply will try again to run the update command.

Similarly, for the create command, if there is a failure in the script, the resource should be marked as tainted.

Notice the plan (below) references a change to the environment variable:

~ "DESCRIPTION"    = "changing things up" -> "foo bar"

After this apply is complete, the terraform state file captures this change and reflects a final value of "foo bar" for this environment variable.

Any suggestions?

Here are some log dumps from a terraform apply attempting to update a resource:

# excerpt from `terraform plan`

  # module.acl.shell_script.web_acl will be updated in-place
  ~ resource "shell_script" "web_acl" {
        dirty             = false
      ~ environment       = {
            "DEFAULT_ACTION" = "Allow"
          ~ "DESCRIPTION"    = "changing things up" -> "foo bar"
            "NAME"           = "test-webacl"
            "RULES"          = jsonencode([])
            "SCOPE"          = "REGIONAL"
            "TAGS"           = jsonencode(
                [
                    {
                        Key   = "baz"
                        Value = "etc"
                    },
                    {
                        Key   = "foo"
                        Value = "bar"
                    },
                ]
            )
        }
        id                = "bqausnv0eb17akv9tm1g"
        output            = {
            "arn"        = "arn:aws:wafv2:us-east-1:999999999999:regional/webacl/test-webacl/bee6287e-0cc5-403f-846b-726dca7e95ad"
            "id"         = "bee6287e-0cc5-403f-846b-726dca7e95ad"
            "lock_token" = "8bc4608b-e337-4d84-b2be-0e529346ec8c"
        }
        triggers          = {
            "name"  = "test-webacl"
            "scope" = "REGIONAL"
        }
        working_directory = "."

        lifecycle_commands {
            create = <omitted>
            delete = <omitted>
            read   = <omitted>
            update = <<~EOT
                #!/bin/bash

                # TODO: Check tags

                set -e
                set -o pipefail

                eval $(jq -r '@sh "ID=\(.id) ARN=\(.arn)"')

                LOCK_TOKEN=$(
                  aws wafv2 get-web-acl \
                    --name "$NAME" \
                    --scope "$SCOPE" \
                    --id "$ID" | jq -r '.LockToken'
                )

                aws wafv2 update-web-acl \
                  --scope "$SCOPE" \
                  --name "$NAME" \
                  --id "$ID" \
                  --lock-token "$LOCK_TOKEN" \
                  --default-action "$DEFAULT_ACTION={}" \
                  --visibility-config "SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=$NAME" \
                  --rules "$RULES" \
                  --tags "$TAGS" \
                  ${DESCRIPTION:+ --description "$DESCRIPTION"} | jq \
                    --arg id "$ID" \
                    --arg arn "$ARN" \
                    '{
                    arn: .Summary.ARN,
                    id: .Summary.Id,
                    lock_token: .Summary.LockToken
                  }'
            EOT
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
# excerpt from Terraform DEBUG log

2020-04-14T11:10:18.722-0600 [INFO]  plugin.terraform-provider-shell_v1.2.0: configuring server automatic mTLS: timestamp=2020-04-14T11:10:18.722-0600
2020-04-14T11:10:18.750-0600 [DEBUG] plugin: using plugin: version=5
2020-04-14T11:10:18.750-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: plugin address: address=/var/folders/hj/6wpp8x5s0lbc1jvtzp9np2rw0000gn/T/plugin469611253 network=unix timestamp=2020-04-14T11:10:18.750-0600
2020-04-14T11:10:18.752-0600 [INFO]  plugin: configuring client automatic mTLS
module.acl.shell_script.web_acl: Modifying... [id=bqausnv0eb17akv9tm1g]
2020/04/14 11:10:18 [DEBUG] module.acl.shell_script.web_acl: applying the planned Update change
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] Updating shell script resource...
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 -------------------------
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] Current stack:
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] -- update
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 -------------------------
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] Locking "shellScriptMutexKey"
2020-04-14T11:10:18.870-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] Locked "shellScriptMutexKey"
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] shell script command old state: "&{[DEFAULT_ACTION=Allow DESCRIPTION=changing things up NAME=test-webacl SCOPE=REGIONAL TAGS=[{"Key":"baz","Value":"etc"},{"Key":"foo","Value":"bar"}] RULES=[]] map[lock_token:8bc4608b-e337-4d84-b2be-0e529346ec8c arn:arn:aws:wafv2:us-east-1:999999999999:regional/webacl/test-webacl/bee6287e-0cc5-403f-846b-726dca7e95ad id:bee6287e-0cc5-403f-846b-726dca7e95ad]}"
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] shell script going to execute: /bin/sh -c
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    #!/bin/bash
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    # TODO: Check tags
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    set -e
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    set -o pipefail
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    eval $(jq -r '@sh "ID=\(.id) ARN=\(.arn)"')
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    LOCK_TOKEN=$(
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      aws wafv2 get-web-acl \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        --name "$NAME" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        --scope "$SCOPE" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        --id "$ID" | jq -r '.LockToken'
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    )
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18    aws wafv2 update-web-acl \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --scope "$SCOPE" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --name "$NAME" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --id "$ID" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --lock-token "$LOCK_TOKEN" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --default-action "$DEFAULT_ACTION={}" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --visibility-config "SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=$NAME" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --rules "$RULES" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      --tags "$TAGS" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      ${DESCRIPTION:+ --description "$DESCRIPTION"} | jq \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        --arg id "$ID" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        --arg arn "$ARN" \
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        '{
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        arn: .Summary.ARN,
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        id: .Summary.Id,
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18        lock_token: .Summary.LockToken
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18      }'
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 -------------------------
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 [DEBUG] Starting execution...
2020-04-14T11:10:18.871-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:18 -------------------------
2020-04-14T11:10:20.183-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20   usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
2020-04-14T11:10:20.183-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20   To see help text, you can run:
2020-04-14T11:10:20.183-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20
2020-04-14T11:10:20.183-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20     aws help
2020-04-14T11:10:20.184-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20     aws <command> help
2020-04-14T11:10:20.184-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20     aws <command> <subcommand> help
2020-04-14T11:10:20.184-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20
2020-04-14T11:10:20.184-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20   Unknown options: --tags, [{"Key":"baz","Value":"etc"},{"Key":"foo","Value":"bar"}]
2020-04-14T11:10:20.265-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20 [DEBUG] Unlocking "shellScriptMutexKey"
2020-04-14T11:10:20.265-0600 [DEBUG] plugin.terraform-provider-shell_v1.2.0: 2020/04/14 11:10:20 [DEBUG] Unlocked "shellScriptMutexKey"
2020/04/14 11:10:20 [DEBUG] module.acl.shell_script.web_acl: apply errored, but we're indicating that via the Error pointer rather than returning it: Error occured in Command: '#!/bin/bash

# TODO: Check tags

set -e
set -o pipefail

eval $(jq -r '@sh "ID=\(.id) ARN=\(.arn)"')

LOCK_TOKEN=$(
  aws wafv2 get-web-acl \
    --name "$NAME" \
    --scope "$SCOPE" \
    --id "$ID" | jq -r '.LockToken'
)

aws wafv2 update-web-acl \
  --scope "$SCOPE" \
  --name "$NAME" \
  --id "$ID" \
  --lock-token "$LOCK_TOKEN" \
  --default-action "$DEFAULT_ACTION={}" \
  --visibility-config "SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=$NAME" \
  --rules "$RULES" \
  --tags "$TAGS" \
  ${DESCRIPTION:+ --description "$DESCRIPTION"} | jq \
    --arg id "$ID" \
    --arg arn "$ARN" \
    '{
    arn: .Summary.ARN,
    id: .Summary.Id,
    lock_token: .Summary.LockToken
  }'
' Error: 'exit status 252'
 StdOut:

 StdErr:
 usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]To see help text, you can run:  aws help  aws <command> help  aws <command> <subcommand> helpUnknown options: --tags, [{"Key":"baz","Value":"etc"},{"Key":"foo","Value":"bar"}]
2020-04-14T11:10:20.574-0600 [DEBUG] plugin: plugin process exited: path=/Users/jcarlson/.terraform.d/plugins/darwin_amd64/terraform-provider-aws_v2.50.0_x4 pid=30463
2020-04-14T11:10:20.574-0600 [DEBUG] plugin: plugin exited

Error: Error occured in Command: '#!/bin/bash

# TODO: Check tags

set -e
set -o pipefail

eval $(jq -r '@sh "ID=\(.id) ARN=\(.arn)"')

LOCK_TOKEN=$(
  aws wafv2 get-web-acl \
    --name "$NAME" \
    --scope "$SCOPE" \
    --id "$ID" | jq -r '.LockToken'
)

aws wafv2 update-web-acl \
  --scope "$SCOPE" \
  --name "$NAME" \
  --id "$ID" \
  --lock-token "$LOCK_TOKEN" \
  --default-action "$DEFAULT_ACTION={}" \
  --visibility-config "SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=$NAME" \
  --rules "$RULES" \
  --tags "$TAGS" \
  ${DESCRIPTION:+ --description "$DESCRIPTION"} | jq \
    --arg id "$ID" \
    --arg arn "$ARN" \
    '{
    arn: .Summary.ARN,
    id: .Summary.Id,
    lock_token: .Summary.LockToken
  }'
' Error: 'exit status 252'
 StdOut:

 StdErr:
 usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]To see help text, you can run:  aws help  aws <command> help  aws <command> <subcommand> helpUnknown options: --tags, [{"Key":"baz","Value":"etc"},{"Key":"foo","Value":"bar"}]

2020-04-14T11:10:20.604-0600 [DEBUG] plugin: plugin process exited: path=/Users/jcarlson/.terraform.d/plugins/terraform-provider-shell_v1.2.0 pid=30462
2020-04-14T11:10:20.604-0600 [DEBUG] plugin: plugin exited
  on ../../modules/waf-acl/main.tf line 1, in resource "shell_script" "web_acl":
   1: resource "shell_script" "web_acl" {

Feature request: Timeouts / Retry

Greetings!

I'm facing an annoying race condition when creating my Google Compute Engine project. Even though I activate the compute APIs, and the dependency from the shell resource is clearly stated, the activation requires some extra time to propagate. I'm going to add a sleep as a workaround, but that's unsatisfactory for two reasons: first, it introduces extra delay (the API could be ready before the sleep ends); second, it might just not be enough (it could still be pending by the time the sleep ends).

Resource > Operation timeouts indicates that some resources support custom timeouts. I'm not sure what the defaults are - but it's certainly something I'm interested in tweaking when adding sleep to my script.

SDK Docs > Resources > Retry indicates that some better support is available to providers for eventually consistent resources. I would love to be able to indicate that exit code N of my script means retry after M seconds, like so:

resource "shell_script" "eventually_consistent_thing" {
  lifecycle_commands {
    # ...
  }

  retry {
    on_exit_code = 30
    retry_delay = 5 # in seconds
  }
}

Such retry logic would allow me to speed things up at the expense of extra API calls. It would also make my config tolerant of edge cases where the propagation delay might just be greater than my hardcoded sleep workaround. I suppose that other eventually-consistent resources would benefit in just the same way.
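Until such native support exists, a polling loop inside the create script can approximate the behavior. A minimal POSIX sh sketch (the `retry` helper and its arguments are my own invention, not provider API):

```shell
#!/bin/sh
# Hypothetical workaround: retry a command up to MAX times with DELAY
# seconds between attempts, instead of a single fixed sleep.
retry() {
  max="$1"; delay="$2"; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      return 1   # still failing after max attempts
    fi
    sleep "$delay"
  done
}

# Usage sketch, e.g.: retry 12 5 gcloud compute networks list --project "$PROJECT"
```

This keeps the total wait bounded while returning as soon as the API is actually ready, at the cost of a few extra calls.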

Thanks for the great plugin, it's a life saver and a better alternative to local-exec when I need state, and a much better alternative to the external data source because it... keeps state. Good work!

Raw Output instead of Output

Instead of forcing people to output JSON, it would be nice to have the option of outputting a string. For example, echo "hello" would return just the string "hello", with no need to wrap it in a JSON data structure. If someone is using this approach, only the last line of output will be considered.

JSON is still supported, but instead of flattening it on your behalf, just let people use jsondecode(), which is what people were doing anyway for deeply nested JSON outputs.

In the short term, both will be supported, with output marked as deprecated; output can then be removed in a 2.0 release.
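To illustrate the proposed raw-output convention, a sketch of a create script under the assumed last-line rule (the function name is illustrative):

```shell
#!/bin/sh
# Under the proposed raw-output mode, only the last line of stdout
# would become the resource's output string.
produce_output() {
  echo "doing some work..."   # intermediate logging, ignored
  echo "hello"                # last line -> output would be "hello"
}
produce_output
```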

Errors from `delete` command are swallowed

I'm using this provider to wrap AWS API commands for resources that Terraform does not yet support natively in the AWS provider.

On destroy, my delete command threw an error, but the shell script provider swallowed it and marked the resource as destroyed.

It would appear that, unlike the create, read and update operations, delete does not check the exit code of the script.

IMHO, the delete operation should check the exit code and fail destruction of the resource if the script did not exit cleanly.
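For illustration, a delete command written so that a failure is actually visible to the caller might look like the following sketch (`run_delete` and the aws call in the comment are hypothetical, not provider API):

```shell
#!/bin/sh
# Sketch only: wrap the real delete call and propagate its exit status
# explicitly, so a failure surfaces as a non-zero exit code that the
# provider could check before marking the resource destroyed.
run_delete() (
  "$@" || { echo "delete failed: $*" >&2; exit 1; }   # e.g. aws wafv2 delete-web-acl ...
  echo '{}'   # state output, emitted only on success
)
```

Today, even a script structured like this has its non-zero exit code ignored on destroy, which is the behavior this issue asks to change.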

Output .values are converted to string. This is not desired.

Hi!

Example, following output:

{
  "application": {
    "api": {
      "arg1": null,
      "arg2": [],
      "arg3": [
        {
          "id": "305ed81c-35c0-47dc-900a-3abe368127d3",
          "sub3": true
        }
      ],
      "preAuthorizedApplications": []
    }
  }
}

gets converted in the state to this:

{
  "output": {
    "application": "{\"api\":{\"arg1\":null,\"arg2\":[],\"arg3\":[{\"id\":\"84cfe009-7f2b-4e13-9f99-62b60aabf7ac\",\"sub3\":true}],\"preAuthorizedApplications\":[]}"
  }
}

Is there a way to disable this conversion?

Random characters getting marked as sensitive (and sensitive inputs not being obscured)

I'm using v1.3.1 of the provider and tf 0.12.24.

In my uat environment it looks like the letter m is being treated as a secret and masked as ***** when put to stdout. Because the output of the resource comes from stdout, this then means that I'm getting incorrect output. None of the sensitive_environment values are just the letter m.

In addition, the actual private key/client cert themselves aren't being obscured. If I print them out to stdout the only thing obscured in them is the letter m.

Here is the resource

resource "shell_script" "create_kafka_stores" {
  lifecycle_commands {
    create = file("${path.module}/create_stores.sh")
    delete = "echo {}"
  }

  environment = {
    ID = random_integer.id.result
  }

  sensitive_environment = {
    KAFKA_PRIVATE_KEY         = var.access_key
    KAFKA_CLIENT_CERTIFICATE  = var.access_cert
    KEYSTORE_PASSWORD         = random_string.ssl_password.result
    TRUSTSTORE_PASSWORD       = random_string.ssl_password.result
    KAFKA_SERVICE_CERTIFICATE = var.kafka_ca_certificate
  }
}

and here is my script

#!/bin/sh

#Exit if there are any errors.
abort()
{
    echo >&2 '
***************
*** ABORTED ***
***************
'
    echo "An error occurred. Exiting..." >&2
    exit 1
}

trap 'abort' 0
set -e

echo "$KAFKA_SERVICE_CERTIFICATE" > /tmp/ca.txt

echo "$KAFKA_PRIVATE_KEY" > /tmp/compositefile-$ID.txt
echo "$KAFKA_CLIENT_CERTIFICATE" >> /tmp/compositefile-$ID.txt
openssl pkcs12 -export -in /tmp/compositefile-$ID.txt -out /tmp/keyStore-$ID.p12 -password pass:$KEYSTORE_PASSWORD

#`python -m base64 -d` base64 decoding to be able to run on Mac and Linux the same way
echo "$KAFKA_SERVICE_CERTIFICATE" | python -m base64 -d | keytool -keystore /tmp/trustStore-$ID.jks -alias CARoot12384912 -import -storepass $TRUSTSTORE_PASSWORD -noprompt

KEYSTORE_BINARY=$(base64 /tmp/keyStore-$ID.p12)
TRUSTSTORE_BINARY=$(base64 /tmp/trustStore-$ID.jks)

trap : 0

jq -n --arg keyStoreBinary "$KEYSTORE_BINARY" --arg trustStoreBinary "$TRUSTSTORE_BINARY" '{"base64_encoded_keystore":$keyStoreBinary, "base64_encoded_truststore":$trustStoreBinary}' >&1

I can send you the contents of the sensitive variables to your email (I'd rather not post them publicly to github even though it's only uat and I'm going to rotate them once this is resolved). I'm not quite sure where the sensitivity around the letter m is coming from.

Update support

Your project seems to solve some of the problems i'm having with using local-exec to shell out to get stuff done that terraform doesn't have providers etc for. So thanks!

One use case we have is for a change to a script resource to not trigger a destroy and create but an update, as destruction and creation is expensive, but updates are cheap and simple (shelling out to a process that already supports idempotent updates). It would be great to be able to apply updates and avoid the destroy/create cycle.

Happy to work on a PR, got any tips?

Terraform refresh stage dumping {} on unique lines for each shell resource

Not sure if this is intended, but not seen any docs related to it.

Using 1.7.2 from the registry, each shell_script resource is being refreshed, and below it "{}" is printed in non-bold (the other refresh output is emboldened).

e.g.:

module.bs_gr.shell_script.securityhub_invite_accepter[2]: Refreshing state... [id=bqc46gujtk81e0bh2ur0]
module.bs_gr.shell_script.securityhub_invite_accepter[1]: Refreshing state... [id=bqc46gujtk81e0bh2urg]
module.bs_gr.shell_script.securityhub_invite_accepter[0]: Refreshing state... [id=bqc46h6jtk81e0bh2us0]
{}
{}
{}

Feature request: Sensitive environment block

There is a problem if you want to pass a sensitive value to the shell provider, usually via an environment variable (e.g. from templatefile()): it is visible in plan/apply output.

It would be nice to have something like a sensitive_environment {} block for hiding environment variables in the terraform plan/apply overview.

Feature request: add support for complex JSON structures

I think it would be highly useful if the JSON structures could be more complex, ie contain nested maps and lists.

A real world example:
az provider list --subscription ######################### | jq '.[]|select(.namespace == "Microsoft.RedHatOpenShift")'
Output looks like this:

{
  "authorizations": [
    {
      "applicationId": "#########################",
      "managedByAuthorization": {
        "allowManagedByInheritance": true
      },
      "managedByRoleDefinitionId": "########################",
      "roleDefinitionId": "###########################"
    }
  ],
  "id": "/subscriptions/####################/providers/Microsoft.RedHatOpenShift",
  "namespace": "Microsoft.RedHatOpenShift",
  "registrationPolicy": "RegistrationRequired",
  "registrationState": "Registered",
  "resourceTypes": [
    {
      "aliases": null,
      "apiVersions": [
        "2019-12-31-preview"
      ],
      "capabilities": "None",
      "locations": [],
      "properties": null,
      "resourceType": "operations"
    }
  ]
}

It would probably be a huge amount of work to implement parsing of that structure... but I don't think you need to, as Hashicorp has already done that:
https://www.terraform.io/docs/configuration/functions/jsondecode.html

Running that function on the output above ( output "json_namespace" { value = jsondecode(file("./scripts/namespace.json")) } ) results in this variable

json_namespace = {
  "authorizations" = [
    {
      "applicationId" = "##########################"
      "managedByAuthorization" = {
        "allowManagedByInheritance" = true
      }
      "managedByRoleDefinitionId" = "##############################"
      "roleDefinitionId" = "################################"
    },
  ]
  "id" = "/###################################/providers/Microsoft.RedHatOpenShift"
  "namespace" = "Microsoft.RedHatOpenShift"
  "registrationPolicy" = "RegistrationRequired"
  "registrationState" = "Registered"
  "resourceTypes" = [
    {
      "apiVersions" = [
        "2019-12-31-preview",
      ]
      "capabilities" = "None"
      "locations" = []
      "resourceType" = "operations"
    },
  ]
}

which can then be used like any other map structure in the rest of the code.

Taint functionality missing with update command enabled

Hello,
thank you for your awesome work! This is basically what Terraform should have been from the beginning (pluggable shell scripts with state).

The reason for this feature is to make the shell resource "experience" more "null_resource-like". Currently, the only way I know of to taint a shell resource is to include a trigger value in the lifecycle_commands section.
example: delete = "bash -eu scripts/delete.sh; #${var.tainted}"

How I would envision resource tainting is by:

  • "triggers" variable (null_resource style), this also enables use case where resource can be tainted based on external input, without exposing values to shell environment
  • separate tainted_environment block (basically the same as environment but change in one of those values forces resource recreation even with update command present)

Sidequestion:
any plans on hitting official plugin status?

Applause for excellent work!

Track exit code returned by the script

Hi,

Would you be happy for me to put together a PR to track the exit codes coming back from the scripts?

The use case here is to prevent additional calls to scripts when a failure occurs. Currently, I think what happens (based on what I've seen) is that if read exits with a non-zero code and does not output any state (say a network issue has prevented a curl call), the provider does a diff with the current state, observes a change, and calls update to rectify it. My expectation would be for the error to cause the terraform apply to terminate instead, letting the user review the error and re-run as needed.

This change would mean that scripts should always return a 0 exit code if they want their state output to be persisted or a non-zero code if they want to terminate and display the error.

An added bonus here is that we'd be able to take the cmd output from the errored script and return it to Terraform so it'd be displayed to the user without setting TF_LOG=debug (this would only happen when the script errored, but it's still useful).
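As a sketch of the proposed convention (the function name and JSON shape are illustrative): a read script would emit its JSON state and exit 0 on success, or write diagnostics to stderr and return non-zero so the apply halts instead of diffing stale state.

```shell
#!/bin/sh
# Emit JSON state on success (exit 0); on failure, log to stderr and
# return non-zero so the provider could halt instead of calling update.
read_state() {
  if data=$(cat "$1" 2>/dev/null); then
    printf '{"length": %d}\n' "${#data}"
  else
    echo "read failed: $1" >&2
    return 1
  fi
}
```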

I think I need to change this section:

log.Printf("-------------------------")
err = cmd.Start()
if err == nil {
	err = cmd.Wait()
}
// Close the write-end of the pipe so that the goroutine mirroring output
// ends properly.
pwStdout.Close()

To use a pattern like this:

https://stackoverflow.com/a/10385867/3437018

Sound like a useful change?
