
terraform-provider-docker's Introduction


Terraform Provider for Docker


Documentation

The documentation for the provider is available on the Terraform Registry.

Do you want to migrate from v2.x to v3.x? Please read the migration guide.

Example usage

Take a look at the examples in the Registry documentation, or use the following example:

# Set the required provider and versions
terraform {
  required_providers {
    # We recommend pinning to the specific version of the Docker Provider you're using
    # since new versions are released frequently
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.2"
    }
  }
}

# Configure the docker provider
provider "docker" {
}

# Create a docker image resource
# -> docker pull nginx:latest
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = true
}

# Create a docker container resource
# -> same as 'docker run --name nginx -p 8080:80 -d nginx:latest'
resource "docker_container" "nginx" {
  name    = "nginx"
  image   = docker_image.nginx.image_id

  ports {
    external = 8080
    internal = 80
  }
}

# Or create a service resource
# -> same as 'docker service create -d -p 8081:80 --name nginx-service --replicas 2 nginx:latest'
resource "docker_service" "nginx_service" {
  name = "nginx-service"
  task_spec {
    container_spec {
      image = docker_image.nginx.repo_digest
    }
  }

  mode {
    replicated {
      replicas = 2
    }
  }

  endpoint_spec {
    ports {
      published_port = 8081
      target_port    = 80
    }
  }
}

Building The Provider

Go 1.18.x (to build the provider plugin)

$ git clone git@github.com:kreuzwerker/terraform-provider-docker
$ make build

Contributing

The Terraform Docker Provider is the work of many contributors. We appreciate your help!

To contribute, please read the contribution guidelines: Contributing to Terraform - Docker Provider

License

The Terraform Docker Provider is available to everyone under the terms of the Mozilla Public License Version 2.0. Take a look at the LICENSE file.

Stargazers over time


terraform-provider-docker's People

Contributors

appilon, baboune, bhuisgen, captn3m0, colinhebert, dmportella, dubo-dubon-duponey, edgarpoce, grubernaut, hmcgonig, innovate-invent, jefferai, jen20, junkern, katbyte, kyhavlov, lvjp, mavidser, mavogel, mitchellh, mkuzmin, paulbellamy, phinze, radeksimko, renovate-bot, renovate[bot], ryane, stack72, suzuki-shunsuke, xanderflood


terraform-provider-docker's Issues

Non-root containers replaced every time

Hi there,

Looks like the "env" and "user" vars are forcing a container replacement each time, even though they don't change.

Terraform Version

Terraform v0.14.2

  • provider registry.terraform.io/kreuzwerker/docker v2.8.0

In brief:

docker_container.ork-haproxy must be replaced
-/+ resource "docker_container" "ork-haproxy" {
      + bridge            = (known after apply)
      ~ command           = [
          - "haproxy",
          - "-f",
          - "/usr/local/etc/haproxy/haproxy.cfg",
        ] -> (known after apply)
      + container_logs    = (known after apply)
      - cpu_shares        = 0 -> null
      - dns               = [] -> null
      - dns_opts          = [] -> null
      - dns_search        = [] -> null
      ~ entrypoint        = [
          - "/docker-entrypoint.sh",
        ] -> (known after apply)
      + env               = (known after apply) # forces replacement
      + exit_code         = (known after apply)
      ~ gateway           = "172.19.0.1" -> (known after apply)
      - group_add         = [] -> null
      ~ hostname          = "b20122e5bc19" -> (known after apply)
      ~ id                = "b20122e5bc1925196208191ecf4d117749ae50dd95b41135112b5e367358f06b" -> (known after apply)
      ~ ip_address        = "172.19.0.5" -> (known after apply)
      ~ ip_prefix_length  = 16 -> (known after apply)
      ~ ipc_mode          = "shareable" -> (known after apply)
      - links             = [] -> null
      - log_opts          = {} -> null
      - max_retry_count   = 0 -> null
      - memory            = 0 -> null
      - memory_swap       = 0 -> null
        name              = "haproxy"
      ~ network_data      = [
          - {
              - gateway                   = "172.19.0.1"
              - global_ipv6_address       = ""
              - global_ipv6_prefix_length = 0
              - ip_address                = "172.19.0.5"
              - ip_prefix_length          = 16
              - ipv6_gateway              = ""
              - network_name              = "mail_internal"
            },
        ] -> (known after apply)
      - network_mode      = "default" -> null
      - privileged        = false -> null
      - publish_all_ports = false -> null
      + remove_volumes    = true
      ~ shm_size          = 64 -> (known after apply)
      - sysctls           = {} -> null
      - tmpfs             = {} -> null
      - user              = "haproxy" -> null # forces replacement
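
Until the diffing bug is fixed on the provider side, one possible mitigation is the standard Terraform ignore_changes escape hatch; a minimal sketch, assuming the container's other arguments stay as they are (the image name below is an assumption, since the original config is not shown):

resource "docker_container" "ork-haproxy" {
  name  = "haproxy"        # from the plan output above
  image = "haproxy:latest" # assumption: the actual image is not shown above

  # Hedged workaround: ignore the attributes that spuriously force
  # replacement. Note this also hides legitimate changes to env and user.
  lifecycle {
    ignore_changes = [env, user]
  }
}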

Alias fails when passed to child module.

This issue was originally opened by @hongkongkiwi as hashicorp/terraform-provider-docker#290. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



Terraform does not correctly handle aliases for this provider when they are passed to a child module.

Terraform v0.13.0
+ provider registry.terraform.io/terraform-providers/docker v2.7.2

For example, this will fail with an error during terraform validate:
Error: missing provider provider["registry.terraform.io/hashicorp/docker"].foo

provider "docker" {
  alias = "foo"
  host = "tcp://127.0.0.1:2376/"
}

module "mycustommodule" {
  source = "./test2"
  providers = {
    docker = docker.foo
  }
}
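
The error message points at registry.terraform.io/hashicorp/docker, which suggests the child module resolves the unqualified name docker to the default namespace. A hedged sketch of a possible fix, assuming ./test2 can declare its own provider requirement so both modules agree on the provider source address:

# inside ./test2 (the child module)
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}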

Image argument after v2.6.0 always replaces resources (mismatch between image name vs sha image id)

This issue was originally opened by @johnlane as hashicorp/terraform-provider-docker#294. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



There is a difference between the handling of the image argument of v2.6.0 and v2.7.2.

The image specification shown below works with the 2.6.0 Docker provider but not with version 2.7.2.

resource "docker_container" "portainer" {
  image   = "portainer/portainer:1.23.0"
  ...

The documentation now shows having a reference to an image resource:

   image = "${docker_image.ubuntu.latest}"

and configuring that image resource to specify the image name.

With 2.7.2 there is a perpetual mismatch between the image name specified in the config and the image id that terraform plan and apply identify.

I can't find any documentation covering this breaking change between versions, and I don't know whether this break is intentional or something that will be fixed. I see many examples illustrating the succinct way of referencing an image that works in version 2.6.0. It would be good if specifying the image directly in image continued to work rather than requiring an additional image resource block containing the image name.

If the form that worked in 2.6.0 is no longer supported, then an error during plan or apply to prevent its use would make this clear. Currently it's accepted as valid but causes resource replacement on every apply.
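
For reference, a minimal sketch of the pattern the documentation now recommends (on the 2.x provider the attribute is .latest; on 3.x it is image_id):

resource "docker_image" "portainer" {
  name = "portainer/portainer:1.23.0"
}

resource "docker_container" "portainer" {
  name  = "portainer"
  # referencing the image resource keeps the ID in state consistent
  image = docker_image.portainer.latest
}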

Terraform Version

Terraform 0.13.3 with Docker provider 2.6.0 and 2.7.2

Affected Resource(s)

docker_container

Expected Behavior

The resource should be matched with previous state so unnecessary changes are not made.

Actual Behavior

The resource is detected as a change because the given image value is matched against SHA and therefore is always detected as a change, requiring replacement of the resource.

Steps to Reproduce

  1. terraform apply

References

hashicorp/terraform#26382

Also hashicorp/terraform-provider-docker#291 is similar.

Example with ECR and docker_registry_image

Hi everyone!

Can someone show a working example that builds and pushes an image to ECR using the docker_registry_image resource?

Terraform Version

Terraform v0.13.5

  • provider registry.terraform.io/hashicorp/aws v3.17.0
  • provider registry.terraform.io/kreuzwerker/docker v2.8.0

Affected Resource(s)


  • docker_registry_image

Terraform Configuration Files

provider "docker" {
  registry_auth {
    address  = "152235879155.dkr.ecr.us-east-1.amazonaws.com"
    username = data.aws_ecr_authorization_token.repo.user_name
    password = data.aws_ecr_authorization_token.repo.password

    config_file_content = jsonencode({
      "auths" = {
        "152235879155.dkr.ecr.us-east-1.amazonaws.com" = {}
      }
      "credHelpers" = {
        "152235879155.dkr.ecr.us-east-1.amazonaws.com" = "ecr-login"
      }
    })
  }
}

/*
# Also tried this config
      config_file_content = jsonencode({
       "auths" = {
        "152235879155.dkr.ecr.us-east-1.amazonaws.com" = {
          "auth": data.aws_ecr_authorization_token.repo.authorization_token,
          "email": ""
        }
      }
      "credsStore" = "ecr-login"
*/

data "aws_ecr_authorization_token" "repo" {}

resource "docker_registry_image" "helloworld" {
  name = "something-amazing/helloworld:2.0"

  build {
    context = "context"
  }
}

Expected Behavior

The image seems to build correctly, but it can't be pushed to ECR.

Actual Behavior

Error: Error pushing docker image: Error response from daemon: Bad parameters and missing X-Registry-Auth: EOF
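
A minimal sketch of a configuration that is commonly reported to work, assuming the ECR token data source alone supplies the credentials (i.e. no config_file_content alongside username/password):

data "aws_ecr_authorization_token" "repo" {}

provider "docker" {
  registry_auth {
    address  = "152235879155.dkr.ecr.us-east-1.amazonaws.com"
    username = data.aws_ecr_authorization_token.repo.user_name
    password = data.aws_ecr_authorization_token.repo.password
  }
}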

Proposal: Add options for SSH identity file and passphrase to Docker provider

Terraform Version

≥ 0.12.25

Use-cases

The provider has a feature to deploy a docker container using the SSH protocol:

provider "docker" {
  host = "ssh://user@remote-host:22"
}

However, there seems to be no explicit way to provide the path to an SSH identity file (private key).
The only implicit workaround seems to be creating an SSH config file on the machine running Terraform to specify the identity file.

Proposal

Add two options:

identity_file
passphrase

Use this information when establishing the SSH connection. A hypothetical sketch of the proposed usage is shown below.
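
A hypothetical sketch (identity_file and passphrase do not exist in the provider; the names are the ones proposed above and purely illustrative):

provider "docker" {
  host          = "ssh://user@remote-host:22"
  identity_file = "~/.ssh/id_rsa_docker" # hypothetical option
  passphrase    = var.ssh_passphrase     # hypothetical option
}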

References

Issue in archived repo: hashicorp/terraform-provider-docker#268
Related: #18

AuxAddress is not read from the network and triggers a re-apply every time

Example:

-/+ resource "docker_network" "dubo-vlan" {
        attachable      = false
        check_duplicate = true
        driver          = "ipvlan"
      ~ id              = "2e5dda2e5a90ce693456bad29cc3c5afd6c8f9a58568a72325e6569d375cea78" -> (known after apply)
      - ingress         = false -> null
        internal        = false
        ipam_driver     = "default"
        ipv6            = false
        name            = "dubo-lan-vlan"
        options         = {
            "ipvlan_mode" = "l2"
            "parent"      = "wlan0"
        }
      ~ scope           = "local" -> (known after apply)

      + ipam_config { # forces replacement
          + aux_address = {
              + "dns"  = "10.0.4.42"
              + "link" = "10.0.4.43"
            }
          + gateway     = "10.0.4.1"
          + ip_range    = "10.0.4.42/31"
          + subnet      = "10.0.4.1/24"
        }
      - ipam_config { # forces replacement
          - aux_address = {} -> null
          - gateway     = "10.0.4.1" -> null
          - ip_range    = "10.0.4.42/31" -> null
          - subnet      = "10.0.4.1/24" -> null
        }
    }

Here is the existing docker network that was created earlier:

pi@nightingale:~ $ docker inspect dubo-lan-vlan
[
    {
        "Name": "dubo-lan-vlan",
        "Id": "2e5dda2e5a90ce693456bad29cc3c5afd6c8f9a58568a72325e6569d375cea78",
        "Created": "2020-12-02T03:01:25.262796547Z",
        "Scope": "local",
        "Driver": "ipvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.4.1/24",
                    "IPRange": "10.0.4.42/31",
                    "Gateway": "10.0.4.1",
                    "AuxiliaryAddresses": {
                        "dns": "10.0.4.42",
                        "link": "10.0.4.43"
                    }
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "ipvlan_mode": "l2",
            "parent": "wlan0"
        },
        "Labels": {}
    }
]

Image value in docker_service state doesn't stick

This issue was originally opened by @ragurakesh as hashicorp/terraform-provider-docker#291. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

terraform:0.12.29
provider-docker version: 2.5.0

Affected Resource(s)

  • docker_service

Terraform Configuration Files

provider "docker" {
  version = "~> 2.5.0"
  alias   = "default"
}

resource "docker_service" "foo" {
  name = "foo-service"

  task_spec {
    container_spec {
      image = "nginx:latest"
    }
  }

  endpoint_spec {
    ports {
      target_port = "8080"
    }
  }
}

Apply Output

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # docker_service.foo will be created
  + resource "docker_service" "foo" {
      + id     = (known after apply)
      + labels = (known after apply)
      + name   = "foo-service"

      + endpoint_spec {
          + mode = (known after apply)

          + ports {
              + protocol     = "tcp"
              + publish_mode = "ingress"
              + target_port  = 8080
            }
        }

      + mode {
          + global = (known after apply)

          + replicated {
              + replicas = (known after apply)
            }
        }

      + task_spec {
          + force_update   = (known after apply)
          + restart_policy = (known after apply)
          + runtime        = (known after apply)

          + container_spec {
              + image             = "nginx:latest"
              + isolation         = "default"
              + stop_grace_period = (known after apply)

              + dns_config {
                  + nameservers = (known after apply)
                  + options     = (known after apply)
                  + search      = (known after apply)
                }

              + healthcheck {
                  + interval     = (known after apply)
                  + retries      = (known after apply)
                  + start_period = (known after apply)
                  + test         = (known after apply)
                  + timeout      = (known after apply)
                }
            }

          + placement {
              + constraints = (known after apply)
              + prefs       = (known after apply)

              + platforms {
                  + architecture = (known after apply)
                  + os           = (known after apply)
                }
            }

          + resources {
              + limits {
                  + memory_bytes = (known after apply)
                  + nano_cpus    = (known after apply)

                  + generic_resources {
                      + discrete_resources_spec = (known after apply)
                      + named_resources_spec    = (known after apply)
                    }
                }

              + reservation {
                  + memory_bytes = (known after apply)
                  + nano_cpus    = (known after apply)

                  + generic_resources {
                      + discrete_resources_spec = (known after apply)
                      + named_resources_spec    = (known after apply)
                    }
                }
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

docker_service.foo: Creating...
docker_service.foo: Creation complete after 5s [id=dgyjey68z60ke8wploo9jpfme]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$

Expected Behavior

Once the above configuration is applied, the docker service shall run the nginx container, and applying the same terraform configuration again shall not cause the running container to be recycled.

Actual Behavior

Plan shows change in the image configured in the docker_service. If applied again, it causes the running container to get recycled.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

docker_service.foo: Refreshing state... [id=dgyjey68z60ke8wploo9jpfme]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # docker_service.foo will be updated in-place
  ~ resource "docker_service" "foo" {
        id     = "dgyjey68z60ke8wploo9jpfme"
        labels = {}
        name   = "foo-service"

        endpoint_spec {
            mode = "vip"

            ports {
                protocol       = "tcp"
                publish_mode   = "ingress"
                published_port = 0
                target_port    = 8080
            }
        }

        mode {
            global = false

            replicated {
                replicas = 1
            }
        }

      ~ task_spec {
            force_update   = 0
            networks       = []
            restart_policy = {
                "condition"    = "any"
                "max_attempts" = "0"
            }
            runtime        = "container"

          ~ container_spec {
                args              = []
                command           = []
                env               = {}
                groups            = []
               ~ image             = "nginx:latest@sha256:b0ad43f7ee5edbc0effbc14645ae7055e21bc1973aee5150745632a24a752661" -> "nginx:latest"
                isolation         = "default"
                labels            = {}
                read_only         = false
                stop_grace_period = "0s"

                dns_config {}

                healthcheck {
                    interval     = "0s"
                    retries      = 0
                    start_period = "0s"
                    test         = []
                    timeout      = "0s"
                }
            }

            placement {
                constraints = []
                prefs       = []

                platforms {
                    architecture = "amd64"
                    os           = "linux"
                }
            }

            resources {
            }
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
$

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                        PORTS               NAMES
07522dc88570        nginx:latest        "/docker-entrypoint.…"   23 seconds ago      Up 19 seconds                 80/tcp              foo-service.1.v7t3mqux9w3h4xdwft8hxpx9p
8e2cc03cf8e1        nginx:latest        "/docker-entrypoint.…"   9 minutes ago       Exited (137) 22 seconds ago                       foo-service.1.4humubt05iongzdxf5xvjsm79
$ 

Steps to Reproduce

  1. terraform apply the above configuration
  2. terraform apply again the same configuration
  3. docker ps -a
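
The diff shows the daemon resolving nginx:latest to a digest-pinned reference while the configuration keeps the bare tag. A hedged way to avoid the perpetual diff, following the same pattern as the README example above, is to pin the service to a digest via a docker_image resource (attribute names vary by provider version; repo_digest is the v3 name):

data "docker_registry_image" "nginx" {
  name = "nginx:latest"
}

resource "docker_image" "nginx" {
  name          = data.docker_registry_image.nginx.name
  pull_triggers = [data.docker_registry_image.nginx.sha256_digest]
}

resource "docker_service" "foo" {
  name = "foo-service"

  task_spec {
    container_spec {
      # a digest-pinned reference matches what the daemon stores
      image = docker_image.nginx.repo_digest
    }
  }

  endpoint_spec {
    ports {
      target_port = "8080"
    }
  }
}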

Swarm Secret Data Source

This issue was originally opened by @taiidani as hashicorp/terraform-provider-docker#302. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

0.12.29

Affected Resource(s)

New resource request, for a docker_secret data source

Expected Behavior

My team uses an external automation to load Secrets into our Swarm. However, we're also trying to use the docker_service resource. That resource requires the secret_id which is an output of the docker_secret resource...but that resource does not support importing.

Expected behavior of the provider would be a docker_secret data source to match the docker_secret resource. The data source would be able to take the name of the secret and output its id.
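
A hypothetical sketch of how the proposed data source might be used (a docker_secret data source does not exist at the time of this report; the shape below mirrors the existing docker_secret resource):

# hypothetical data source mirroring the docker_secret resource
data "docker_secret" "db_password" {
  name = "db_password"
}

resource "docker_service" "app" {
  name = "app"

  task_spec {
    container_spec {
      image = "nginx:latest"

      secrets {
        secret_id   = data.docker_secret.db_password.id
        secret_name = data.docker_secret.db_password.name
        file_name   = "/run/secrets/db_password"
      }
    }
  }
}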

Actual Behavior

We have to either not use Secrets in our Swarm when using this provider, or manually look up and then hardcode their ids into our docker_service definitions.

Steps to Reproduce

Attempt to define a docker_service resource that uses a secret that was not created by a docker_secret resource.

Important Factoids

I'd be happy to give a PR a shot if there is an interest from the maintainers!

Unable to remove Docker image: image is referenced in multiple repositories

This issue was originally opened by @stoically as hashicorp/terraform-provider-docker#301. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

Terraform v0.13.3

Affected Resource(s)

  • docker_image

Terraform Configuration Files

data "docker_registry_image" "traefik" {
  name = "traefik:latest"
}

resource "docker_image" "traefik" {
  name          = data.docker_registry_image.traefik.name
  pull_triggers = [data.docker_registry_image.traefik.sha256_digest]
}

Expected Behavior

Should always silently upgrade the image / container

Actual Behavior

Error: Unable to remove Docker image: Error response from daemon: conflict: unable to delete 1a3f0281f41e (must be forced) - image is referenced in multiple repositories

Steps to Reproduce

Unfortunately not sure how to reproduce

Notes

Would it be safe to always force-remove images?
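
For what it's worth, newer versions of the provider expose a force_remove argument on docker_image (worth verifying against the registry docs for your version); whether forcing is a safe default is exactly the open question above. A sketch:

resource "docker_image" "traefik" {
  name          = data.docker_registry_image.traefik.name
  pull_triggers = [data.docker_registry_image.traefik.sha256_digest]

  # force-removes the image on destroy/replace, even if it is
  # referenced in multiple repositories
  force_remove = true
}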

docker_image keep_locally not working as expected

Hi there,


Terraform Version


โฏ terraform version 
Terraform v0.14.2
+ provider registry.terraform.io/kreuzwerker/docker v2.8.0

Affected Resource(s)


  • docker_image


Terraform Configuration Files

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.8.0"
    }
  }
}

variable "image" {
  default = "ghcr.io/andyshinn/wbld:latest"
}

variable "github_token" {
  description = "GitHub token for GHCR auth"
}

variable "discord_token" {
  description = "Discord auth token"
}

provider "docker" {
  host = "tcp://localhost:2375"

  registry_auth {
    address  = "ghcr.io"
    username = "andyshinn"
    password = var.github_token
  }
}

resource "docker_container" "wbld" {
  name    = "wbld"
  image   = docker_image.wbld.latest
  command = ["python3", "-m", "wbld.bot"]
  env     = ["DISCORD_TOKEN=${var.discord_token}"]

  volumes {
    volume_name    = "wbld_platformio"
    container_path = "/root/.platformio"
  }

  volumes {
    volume_name    = "wbld_buildcache"
    container_path = "/root/.buildcache"
  }
}

data "docker_registry_image" "wbld" {
  name = var.image
}

resource "docker_image" "wbld" {
  name          = data.docker_registry_image.wbld.name
  pull_triggers = [data.docker_registry_image.wbld.sha256_digest]
  keep_locally  = true
}

Debug Output


https://gist.github.com/andyshinn/2638d4ad091910b55d055b01301bc69a


Expected Behavior

The Docker image should not be removed on destroy. Since the image is in use by the container, it fails trying to remove the image.

Actual Behavior

The Docker image is being removed even though keep_locally is specified.

Steps to Reproduce


  1. terraform apply

Important Factoids


Running through GitHub Actions, but tested locally and the same behavior occurs.

support network_advanced block under docker_service resource

Hi there, I am writing with a feature request to support network aliases in the docker_service resource. Currently docker_service only supports a list of network ids, while docker_container already supports the network_advanced block, so I think it is a simple code change to support aliases in docker_service (see the sketch below).
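
A hypothetical sketch of the requested syntax, modeled on the block docker_container already has (this block does not exist on docker_service at the time of writing):

resource "docker_network" "internal" {
  name = "internal"
}

resource "docker_service" "web" {
  name = "web"

  task_spec {
    container_spec {
      image = "nginx:latest"
    }

    # hypothetical: proposed block, mirroring docker_container
    networks_advanced {
      name    = docker_network.internal.id
      aliases = ["web", "frontend"]
    }
  }
}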

Affected Resource(s)

  • docker_service

feat: consider new best practices

Replacement triggered for docker network with ipam_config

This issue was originally opened by @project0 as hashicorp/terraform-provider-docker#279. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



I experience strange behavior: the provider tries to replace my docker network after rebooting the server.
It looks like the ipam_config stanza cannot be properly merged, which then triggers the replacement.

Terraform v0.12.28
+ provider.docker v2.7.1
resource "docker_network" "server" {
  name            = local.network_name
  check_duplicate = true

  driver = "bridge"
  ipv6   = true

  ipam_config {
    subnet = var.subnets.ipv4_cidr
  }

  ipam_config {
    subnet = var.subnets.ipv6_cidr
  }
}
  # module.base.docker_network.server must be replaced
-/+ resource "docker_network" "server" {
      - attachable      = false -> null
        check_duplicate = true
        driver          = "bridge"
      ~ id              = "f565f7f1a0db0c184f69c24e8f113ce7d88c12e43ae4bf862b0f51c15940a633" -> (known after apply)
      - ingress         = false -> null
      ~ internal        = false -> (known after apply)
        ipam_driver     = "default"
        ipv6            = true
        name            = "server-vagrant"
      ~ options         = {} -> (known after apply)
      ~ scope           = "local" -> (known after apply)

      - ipam_config { # forces replacement
          - aux_address = {} -> null
          - gateway     = "172.16.10.1" -> null
          - subnet      = "172.16.10.0/24" -> null
        }
      - ipam_config { # forces replacement
          - aux_address = {} -> null
          - gateway     = "2001:1:6d:99f:400::1" -> null
          - subnet      = "2001:1:006d:099f:0100:0000:0000:0000/80" -> null
        }
      + ipam_config { # forces replacement
          + subnet = "172.16.10.0/24"
        }
      + ipam_config { # forces replacement
          + subnet = "2001:1:006d:099f:0100:0000:0000:0000/80"
        }
    }

docker inspect after creation:

    {
        "Name": "server-vagrant",
        "Id": "7347683ef27d04299465172f017fc89b352e7a1546ba50ec2bd7b3223a9721fa",
        "Created": "2020-07-09T16:53:23.763556354Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.16.10.0/24"
                },
                {
                    "Subnet": "2001:1:006d:099f:0100:0000:0000:0000/80"
                }
            ]
        }
  }

docker inspect after reboot:

    {
        "Name": "server-vagrant",
        "Id": "f565f7f1a0db0c184f69c24e8f113ce7d88c12e43ae4bf862b0f51c15940a633",
        "Created": "2020-07-06T18:43:47.35749164Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.16.10.0/24",
                    "Gateway": "172.16.10.1"
                },
                {
                    "Subnet": "2001:1:006d:099f:0100:0000:0000:0000/80",
                    "Gateway": "2001:1:6d:99f:100::1"
                }
            ]
        }
}
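
The inspect output after reboot shows the daemon back-filling Gateway values that the configuration never declared, which is what the plan then tries to "remove". A hedged mitigation, assuming the daemon-assigned gateways are stable across reboots, is to declare them explicitly so state and config match:

resource "docker_network" "server" {
  name            = local.network_name
  check_duplicate = true

  driver = "bridge"
  ipv6   = true

  ipam_config {
    subnet  = var.subnets.ipv4_cidr
    gateway = "172.16.10.1" # assumption: the gateway shown in the plan diff
  }

  ipam_config {
    subnet  = var.subnets.ipv6_cidr
    gateway = "2001:1:6d:99f:400::1" # assumption: as shown in the plan diff
  }
}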

Docker service import crash - unexpected eof

This issue was originally opened by @brandonbumgarner as hashicorp/terraform-provider-docker#258. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform version

0.11.14

Affected Resource(s)

  • docker_service

Terraform Configuration Files

terraform {
  backend "s3" {
    bucket                      = "bucket"
    region                      = "region"
    profile                     = "profile"
    shared_credentials_file     = "credentials"
  }
}

provider "docker" {
  version = "~> 2.7.0"
  host                          = "tcp://${var.environment}.${var.domain}:9999/"
}

variable "domain" {
  default           = "domain"
}

variable "environment" {}

resource "docker_service" "service" {}

Panic Output

https://gist.github.com/brandonbumgarner/163a3145f7d1613a4d55dadc5dd0d0f2

Expected Behavior

The service should be imported into a state file stored in S3.

Actual Behavior

Terraform crashes. It seems like the state is captured, but when the state is refreshed, it crashes. The crash.log shows the docker inspect command output (stripped from the gist, but present in the actual log). So it seems that it is communicating with the docker node and getting the data.

Please keep in mind I have stripped the service id and any "sensitive" information from the logs and output. In the actual logs and output it has the specific environment, domain, service id, etc. For example: service-id is actually the id of a service that exists in that environment.

terraform import -var "environment=env" docker_service.service service-id
docker_service.service: Importing from ID "service-id"...
docker_service.service: Import complete!
  Imported docker_service (ID: service-id)
docker_service.service: Refreshing state... (ID: service-id)

Error: docker_service.service (import id: service-id): 1 error occurred:
	* import docker_service.service result: service-id: docker_service.service: unexpected EOF

Steps to Reproduce

terraform init -var "environment=env" -backend-config="key=folder/service.state"
terraform import -var "environment=env" docker_service.service service-id

Important Factoids

Right now I have the Docker provider pinned to 2.7.0, but have also tried 2.6.0. This had been working for most of our other services, but stopped working last week without any changes to the tf file.

Pull an image on a remote host using credentials helper

This issue was originally opened by @michcio1234 as hashicorp/terraform-provider-docker#273. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


The problem

I want to make a certain Docker image from my ECR repository present on an EC2 instance, using Terraform.

Terraform version:

Terraform v0.12.24
+ provider.aws v2.64.0
+ provider.docker v2.7.0
+ provider.null v2.1.2

What I did

Permissions and credentials helper

I have configured permissions (so that instance profile role can log in and pull images) and installed credentials helper on the instance. I put {"credsStore": "ecr-login"} in /home/ec2-user/.docker/config.json. I can SSH into the instance and do docker pull image:tag - this works, no need to do docker login.

I can also do the same using my local docker client by doing docker -H ssh://ec2-user@<instance>:22 pull image:tag - the image gets pulled onto the instance.

Terraform configuration

I was trying to do it in Terraform using Docker provider. Here's what I have:

provider "docker" {
  version = "~> 2.7"
  host = "ssh://ec2-user@${aws_instance.main.public_dns}:22"
}

data "docker_registry_image" "backend" {
  name = "image:tag"
}

resource "docker_image" "remote_backend" {
  name = data.docker_registry_image.backend.name
  pull_triggers = [data.docker_registry_image.backend.sha256_digest]
}

IIUC, this should pull the image onto the remote machine. However, Terraform exits with this error:

Error: Got error when attempting to fetch image version from registry: Bad credentials: 401 Unauthorized

  on swarm.tf line 1, in data "docker_registry_image" "backend":
   1: data "docker_registry_image" "backend" {

I verified that Terraform actually connects to the remote Docker host (I could create docker service) - it just won't authenticate to the registry.

I also tried defining the config like this (not sure if this is a good approach, since I want to use the credentials helper on the remote machine, not my local one):

provider "docker" {
  version = "~> 2.7"
  host = "ssh://ec2-user@${aws_instance.main.public_dns}:22"
  registry_auth {
    address = local.docker_registry_url
    config_file_content = "{\"credsStore\": \"ecr-login\"}"
  }
}

But this in turn results in the following error:

Error: Error loading registry auth config: Error parsing docker registry config json: json: cannot unmarshal string into Go value of type docker.dockerConfig
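
The unmarshal error suggests the provider expects a config.json-shaped document with a top-level auths key (see the analysis in #275 below). A sketch that at least parses, assuming the registry address is keyed under auths even when empty; note that, per #275, credHelpers may still be ignored by the provider at this version, so this only gets past the parse error:

provider "docker" {
  host = "ssh://ec2-user@${aws_instance.main.public_dns}:22"

  registry_auth {
    address = local.docker_registry_url
    config_file_content = jsonencode({
      auths = {
        (local.docker_registry_url) = {}
      }
      credHelpers = {
        (local.docker_registry_url) = "ecr-login"
      }
    })
  }
}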

The question

How can I make this work, so that an image is pulled on the remote machine, using credentials provided by ecr-login helper which runs on that machine?
Or maybe it's a bug of Docker Terraform provider?

feat: evaluate documentation generation

Docker authentication configurations without explicit registry addresses and credentials aren't parsed properly

This issue was originally opened by @rolandcrosby as hashicorp/terraform-provider-docker#275. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

Terraform v0.12.26
+ provider.docker v2.7.1

Affected Resource(s)

Docker provider

Terraform Configuration Files

locals {
    docker_registry_url = "example.com"
}

provider "docker" {
    registry_auth {
        address = local.docker_registry_url
        config_file_content = jsonencode({
            "credsStore" = "ecr-login"
        })
    }
}

resource "docker_image" "hello_world" {
    name = "${local.docker_registry_url}/hello_world:latest"
}

Debug Output

Gist here

Panic Output

n/a

Expected Behavior

The Docker provider should parse the Docker auth config file and fetch the appropriate credentials the same way the native Docker CLI does. Specifically:

  • If a global credsStore helper is set in the config file contents, the provider should use that creds store to fetch authentication data for the registry address specified in the configuration, regardless of whether there is anything in the auths key in the config file contents.
  • If specific per-registry credHelpers are specified as described in the Docker documentation here, the appropriate helper should be detected and used to fetch credentials for the specified registry, again without regard to whether the registry is present in auths.

Actual Behavior

If a configuration like the above one is passed (where credsStore is present but auths is empty or not present), the provider does not do anything with the credsStore property (see lines 295-303 of provider.go). Instead, if the provider sees no auths, it will attempt to parse the configuration as if it were in the following legacy format:

{
    "some-registry-url": {"auth": "some credential", "email": "some credential here"}
}

When this fails, the user is presented with the following confusing and misleading error: json: cannot unmarshal string into Go value of type docker.dockerConfig (see #273).

(The credHelpers section of Docker's config.json doesn't work at all with this provider and also leads to the same misleading error; this fact is not documented anywhere.)

Steps to Reproduce

In a directory with the above Terraform file present, run terraform init to download the Docker provider, then terraform apply.

Important Factoids

  • This issue surfaced for me when attempting to use the AWS ECR credential helper, but based on the source code the issue doesn't appear to be specific to that helper.

References

  • #273 is another instance of this issue.
  • The registry_auth section of the Docker provider documentation doesn't say anything one way or the other about which fields and settings in the config.json file are supported or unsupported.

Unsupported block type build on docker_image

Hi there,


Terraform Version

Terraform v0.12.24

  • provider.docker v2.7.2

Affected Resource(s)


  • docker_image


Terraform Configuration Files

resource "docker_image" "app" {
  name = "appondocker"
  build {
    path = "."
  }
}

Expected Behavior

Build a local Dockerfile.

Actual Behavior

Error: Unsupported block type

  on myfirstdocker.tf line 17, in resource "docker_image" "app":
  17:   build {

Steps to Reproduce


  1. Have any Dockerfile
  2. Create a .tf file and try to build it using the Docker Provider.
  3. terraform plan

Provider can't connect to Docker daemon in WSL 2

This issue was originally opened by @mattwelke as hashicorp/terraform-provider-docker#303. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

Terraform v0.12.29 (using old version intentionally because I'm following a tutorial that references particular modules that don't yet support 0.13)

Affected Resource(s)

n/a

Terraform Configuration Files

versions.tf:

terraform {
  required_version = "~> 0.12"
  required_providers {
    google  = "~> 2.16"
    random  = "~> 2.2"
    docker  = "~> 2.3"
  }
}

providers.tf:

provider "google" {
  credentials = file("account.json")
  project     = var.gcp.project_id
  region      = var.gcp.region
}

provider "docker" {
  host = "tcp://127.0.0.1:2375/"
}

variables.tf:

variable "gcp" {
  type = object({
    project_id = string
    region     = string
  })
}

terraform.tfvars:

gcp = {
  project_id = "REDACTED"
  region     = "us-east1"
}

outputs.tf:

output "addresses" {
  value = {
    gcp1         = module.gcp1.network_address
    gcp2         = module.gcp2.network_address
    loadbalancer = module.loadbalancer.network_address
  }
}

main.tf:

module "gcp1" {
  source     = "scottwinkler/vm/cloud//modules/gcp"
  project_id = var.gcp.project_id
  environment = {
    name             = "GCP 1"
    background_color = "red"
  }
}

module "gcp2" {
  source     = "scottwinkler/vm/cloud//modules/gcp"
  project_id = var.gcp.project_id
  environment = {
    name             = "GCP 2"
    background_color = "blue"
  }
}

module "loadbalancer" {
  source = "scottwinkler/vm/cloud//modules/loadbalancer"
  addresses = [
    module.gcp1.network_address,
    module.gcp2.network_address,
  ]
}

Debug Output

The complete debug output: https://gist.github.com/mattwelke/ce34d58c1281d49930f81caaa257800e

Panic Output

n/a

Expected Behavior

A docker container being created.

Actual Behavior

An error applying Terraform config when it tried to use the Docker provider.

Steps to Reproduce

  1. Start Docker Desktop in Windows, wait until it's ready
  2. Ensure the Docker daemon is reachable from within WSL 2 (ex. run docker ps)
  3. Add Docker provider to config
  4. Run terraform apply

Important Factoids

I ensured I had Docker set up to be usable from within WSL 2 first. I was able to run commands like docker ps:

> docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

But then, when running terraform apply, it displayed that error, saying it couldn't reach the daemon. I tried using port 2376 instead of 2375 in the Terraform config, but that didn't work. I also tried enabling the Docker Desktop option on Windows that exposes the daemon on tcp://localhost:2375 without TLS, but this also made no difference (even when using port 2375 in the Terraform config).

I'm using Ubuntu 20.10 in WSL 2.
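
Since docker ps works inside WSL 2, the Docker CLI there is most likely talking to the daemon over the unix socket rather than TCP. A hedged alternative, assuming terraform runs inside the same WSL 2 distribution, is to point the provider at that socket (which is also the provider's default when host is unset):

provider "docker" {
  host = "unix:///var/run/docker.sock"
}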

References

When troubleshooting, I tried the steps in the issue hashicorp/terraform-provider-docker#210, but it didn't help.

terraform destroy does not respect stop_grace_period for docker swarm services running on DOCKER_HOST

Hi,

When a docker swarm service is created and its container happens to run on the DOCKER_HOST (the host parameter of the provider), the container started as part of the service receives an immediate SIGKILL on terraform destroy, rather than stop_signal (optionally followed by SIGKILL after stop_grace_period if the container has not shut down by then). For containers created on worker nodes the code works as expected (and likely also for containers created on manager nodes which are not the DOCKER_HOST).

Note: the docker CLI (i.e. docker service rm) does work as expected.

This is a showstopper for all database applications and containers that embed databases (such as Nexus3), and can lead to catastrophic data loss, as we have learned the hard way.

I already posted this here, but did not receive any answers at all (maybe the wrong place to post issues):
hashicorp/terraform-provider-docker#313

Terraform Version

Terraform v0.13.5

docker = {
      source = "terraform-providers/docker"
      version = "2.7.2"
}

Note: kreuzwerker/terraform-provider-docker v2.8.0 also has this problem.

Affected Resource(s)

docker_service

Terraform Configuration Files (test.tf)

variable "docker_host" {
  type    = string
}

variable "node" {
  type = string
}

terraform {
  required_providers {
    docker = {
      source = "terraform-providers/docker"
      version = "2.7.2"
    }
  }
}

provider "docker" {
  host = var.docker_host
}

resource "docker_service" "test" {
  name = "test"
  
  task_spec {
    container_spec {
      image = "ubuntu"
      
      stop_signal       = "SIGTERM"
      stop_grace_period = "30s"
      
      command = [ "/bin/bash" ]
      args    = [ "-c", "echo test; trap 'echo SIGTERM && exit 0' SIGTERM; while true; do sleep 1; done" ]
    }
    
    placement {
        constraints = [
            "node.role==${var.node}"       
            ]
    }
  }
}

Debug Output

none

Panic Output

none

Expected Behavior

Container receives upon shutdown first SIGTERM:

xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0    | test          (Note: output after start)
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0    | SIGTERM       (Note: output after receiving SIGTERM signal)

Actual Behavior

When container runs on DOCKER_HOST it receives immediately SIGKILL:

xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-manager-node-0    | test  (Note: output after start. Output after receiving SIGTERM is missing)

Steps to Reproduce

Create a docker swarm cluster with just 1 manager and 1 worker (this just makes it easy to get the test case working). Then:

  1. terraform init
  2. terraform apply -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
  3. in a 2nd console enter: DOCKER_HOST=tcp://<place here the DOCKER_HOST ip>:2375 docker service logs -f test
  4. back to original console enter:
    terraform destroy -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
  5. watch output on 2nd console

To run the working test case:
Do the same as before but replace -var 'node=manager' with -var 'node=worker'

Important Factoids

Looking at the source code (I am not a go expert!) I found the following:
https://github.com/terraform-providers/terraform-provider-docker/blob/master/docker/resource_docker_service_funcs.go
Line 270ff
func deleteService(serviceID string, d *schema.ResourceData, client *client.Client)

Line 297 actually removes the service (and I believe it is the only line needed; everything further down is there for historic reasons - docker now does everything needed by itself):
if err := client.ServiceRemove(context.Background(), serviceID); err != nil {

Note: you can see in the docker daemon debug log of the DOCKER_HOST a line such as:
time="2020-11-18T18:05:10.871048408Z" level=debug msg="Calling DELETE /v1.40/services/so1zejuqgruzyrsz9c5bz1isn"
... which is the only rest api call when doing a docker service rm so1zejuqgruzyrsz9c5bz1isn using docker CLI

Line 309 is supposed to wait until the container has been removed - but actually always returns immediately:
exitCode, _ := client.ContainerWait(ctx, containerID, container.WaitConditionRemoved)

2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Found container ['running'] for destroying: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Deleting service: 'i1b30zzgweff3he63r35ky3i3'
2020-11-19T16:46:45.994+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Waiting for container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736' to exit: max 30s
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Container exited with code [0xc000094660]: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Removing container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'

Note: between waiting and removing container passes just 33ms.
I don't know why it always returned immediately.

Line 318 is supposed to remove the container (why? docker does this by itself!).
if err := client.ContainerRemove(context.Background(), containerID, removeOpts); err != nil {

There are 2 problems here:

a) this line has the same effect as docker container rm --force <containerID>, which, as a result of --force, immediately sends a SIGKILL to the container. Due to the first problem (no waiting happens), this occurs before docker has had a chance to remove the service and send SIGTERM to all the containers. Hence the effect of killing the containers on the manager node.

time="2020-11-18T17:56:36.644210735Z" level=debug msg="Calling DELETE /v1.40/services/dll5o060pagmxr0sg9lasjit3"
time="2020-11-18T17:56:36.689439754Z" level=debug msg="Calling POST /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a/wait?condition=remo
ved"
time="2020-11-18T17:56:36.769538131Z" level=debug msg="Calling DELETE /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a?force=1&v=1"
time="2020-11-18T17:56:36.769598227Z" level=debug msg="Sending kill signal 9 to container b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a"

Note: have a look at the timestamps!

b) The code does not seem to anticipate that when we issue container commands (rather than service commands), they have to be sent to the exact node which runs the container. This is the reason why this only happens on the DOCKER_HOST: the workers (or any other node - this I haven't checked) will simply never receive the ContainerRemove command, and the DOCKER_HOST says that this container is unknown to it.

Disclaimer: these are just my findings from looking hard at the source code; I wanted to share them in the hope that they are useful and maybe save some time.

References

none

Proposal: Container file upload and docker host behind bastion

Hi team.

I have the following feature proposal:

Resource Docker Container

  • Currently I can only upload a single file per upload directive, so uploading multiple files renders the tf file unreadable. Allowing upload of a directory would be a major upgrade (see the sketch below).
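
For illustration, a minimal sketch of today's one-block-per-file form (the image and paths below are placeholders):

resource "docker_container" "app" {
  name  = "app"
  image = "nginx:latest" # placeholder image

  # one upload block per file; a directory option would collapse these
  upload {
    file    = "/etc/app/one.conf"
    content = file("${path.module}/files/one.conf")
  }

  upload {
    file    = "/etc/app/two.conf"
    content = file("${path.module}/files/two.conf")
  }
}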

Provider

  • The host directive allows an SSH connection to the host. My host is behind a bastion, and I can reach it if I have an SSH config file and reference that connection in the host directive. The dependency on this config file could be removed if the provider had a forward/connection option like, for example, the file provisioner has.

Thanks for your time!

Docker pull command pulls from cache - not getting latest image

This issue was originally opened by @wereinse as hashicorp/terraform-provider-docker#267. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



Hi there,


Terraform Version

Terraform v0.12.21

Affected Resource(s)

Docker image version


Terraform Configuration Files


See changes below:


Changes made are below:
This is from: https://github.com/retaildevcrews/helium-iac/blob/master/src/modules/db/main.tf

resource "azurerm_cosmosdb_sql_container" "cosmosdb-movies" {
  for_each            = var.INSTANCES
  name                = "movies"
  resource_group_name = var.COSMOS_RG_NAME
  account_name        = azurerm_cosmosdb_account.cosmosdb.name
  database_name       = azurerm_cosmosdb_sql_database.cosmosdb-imdb[each.key].name
  partition_key_path  = "/partitionKey"
}

data "docker_registry_image" "imdb-import" {
  name = "retaildevcrew/imdb-import"
}

resource "docker_image" "imdb-import" {
  name          = data.docker_registry_image.imdb-import.name
  pull_triggers = ["${data.docker_registry_image.imdb-import.sha256_digest}"]


Expected Behavior

resource "docker_image" "imdb-import" should have pulled the latest image.

Actual Behavior

The above command pulled a cached image instead of going back for the original.

Steps to Reproduce


  1. terraform apply with the resource "docker_image" "imdb-import" command in place.
  2. Run once
  3. Change the image
  4. Run again and you should get the older image


Replacement of container on every run

Terraform Version

  • terraform 0.14.1
  • kreuzwerker/docker 2.8.0

Affected Resource(s)

  • docker_container

Terraform Configuration Files

locals {
  image_drone_runner = "drone/drone-runner-docker:1.6.1" # renovate: depName=drone-runner-docker
  image_docuum       = "stephanmisc/docuum:0.16.0"       # renovate: depName=docuum
}

resource "docker_container" "drone_runner" {
  name    = "drone_runner"
  image   = "harbor.visualon.de/docker-hub/${local.image_drone_runner}"
  restart = "unless-stopped"

  env = [
    "DRONE_RPC_PROTO=https",
    "DRONE_RPC_HOST=drone.visualon.de",
    "DRONE_RPC_SECRET=${var.passwords.drone.rpc_sk}",
    "DRONE_RUNNER_CAPACITY=${var.drone.capacity}",
    "DRONE_RUNNER_NAME=${var.hostname}"
  ]

  volumes {
    container_path = "/var/run/docker.sock"
    host_path      = "/var/run/docker.sock"
  }
}

The variables don't change between runs.

Debug Output


# module.docker_d02.docker_container.drone_runner must be replaced
-/+ resource "docker_container" "drone_runner" {
      + bridge            = (known after apply)
      ~ command           = [] -> (known after apply)
      + container_logs    = (known after apply)
      - cpu_shares        = 0 -> null
      - dns               = [] -> null
      - dns_opts          = [] -> null
      - dns_search        = [] -> null
      ~ entrypoint        = [
          - "/bin/drone-runner-docker",
        ] -> (known after apply)
      + exit_code         = (known after apply)
      ~ gateway           = "172.17.0.1" -> (known after apply)
      - group_add         = [] -> null
      ~ hostname          = "67ac24aa10c1" -> (known after apply)
      ~ id                = "67ac24aa10c1e28702f7c30e1688e87b25bb8c2d5722604efdfbacfe1c17f3e5" -> (known after apply)
      ~ image             = "sha256:d70629b463daa6ae84d62fa5f7b42e5e04d58be75792f984feeab2fa3413a103" -> "harbor.visualon.de/docker-hub/drone/drone-runner-docker:1.6.1" # forces replacement
      ~ ip_address        = "172.17.0.4" -> (known after apply)
      ~ ip_prefix_length  = 16 -> (known after apply)
      ~ ipc_mode          = "private" -> (known after apply)
      - links             = [] -> null
      - log_opts          = {} -> null
      - max_retry_count   = 0 -> null
      - memory            = 0 -> null
      - memory_swap       = 0 -> null
        name              = "drone_runner"
      ~ network_data      = [
          - {
              - gateway                   = "172.17.0.1"
              - global_ipv6_address       = ""
              - global_ipv6_prefix_length = 0
              - ip_address                = "172.17.0.4"
              - ip_prefix_length          = 16
              - ipv6_gateway              = ""
              - network_name              = "bridge"
            },
        ] -> (known after apply)
      - network_mode      = "default" -> null
      - privileged        = false -> null
      - publish_all_ports = false -> null
      ~ shm_size          = 64 -> (known after apply)
      - sysctls           = {} -> null
      - tmpfs             = {} -> null
        # (10 unchanged attributes hidden)

      + labels {
          + label = (known after apply)
          + value = (known after apply)
        }

      - volumes {
          - container_path = "/var/run/docker.sock" -> null
          - host_path      = "/var/run/docker.sock" -> null
          - read_only      = false -> null
        }
      + volumes {
          + container_path = "/var/run/docker.sock"
          + host_path      = "/var/run/docker.sock"
        }
    }


Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

Simply do not recreate the Docker container on every terraform apply.

Actual Behavior

Docker containers are recreated on every terraform apply

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Configure the Docker provider via SSH (not sure if this matters)
  2. Configure a docker_container resource
  3. terraform apply

Important Factoids

Not sure. Default Docker installation on Ubuntu focal (btrfs and xfs backing stores tested).
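
A workaround that often avoids this class of perpetual diff (a sketch, untested against this exact setup): pull the image through a docker_image resource and reference the resolved image ID, so the value stored in state matches what the provider reads back from the daemon:

resource "docker_image" "drone_runner" {
  name = "harbor.visualon.de/docker-hub/${local.image_drone_runner}"
}

resource "docker_container" "drone_runner" {
  name = "drone_runner"
  # "latest" holds the image ID in provider v2.x; v3.x renames it to image_id
  image = docker_image.drone_runner.latest
  # env and volumes as in the configuration above
}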

Terraform destroy does not work on docker container

This issue was originally opened by @hashibot[bot] as hashicorp/terraform-provider-docker#280. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



This issue was originally opened by @greenmns as hashicorp/terraform#25552. It was migrated here as a result of the provider split. The original body of the issue is below.


I have a Terraform file named main.tf whose only resource is a docker_container.
My Terraform version is Terraform v0.12.28.

The main.tf file is:

provider "docker" {
}

#data "docker_registry_image" "baresip" {
#   name          =     "registry.gitlab.com/greenmns/test"
#}


#resource "docker_image" "baresip" {
# name           =     "registry.gitlab.com/greenmns/test"
# pull_triggers  =     ["${data.docker_registry_image.baresip.sha256_digest}"]
#}

resource "docker_container" "baresipcallee" {
  name           =     "baresipcallee"
  image          =     "registry.gitlab.com/greenmns/test"
  command        =     ["1"]
  rm             =     true
}

The image registry.gitlab.com/greenmns/test is available locally on my computer.

When I run terraform apply to create the container, I can see with docker ps that my container is up. But when I run terraform destroy, the test container keeps running and is not destroyed.
I think this is a bug of Terraform.
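
If the intent is for Terraform to own the container's lifecycle, one thing worth trying (an assumption on my part, not a confirmed fix) is dropping auto-removal, since rm = true asks the Docker daemon itself to delete the container when its process exits:

resource "docker_container" "baresipcallee" {
  name    = "baresipcallee"
  image   = "registry.gitlab.com/greenmns/test"
  command = ["1"]
  # rm defaults to false; with auto-removal off, terraform destroy
  # stops and removes the container itself
  rm = false
}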

Terraform crash when using docker_network with aux_address = {foo: ip}

Terraform Version

0.13.5

Affected Resource(s)

docker_network

Terraform Configuration Files

resource "docker_network" "vlan" {
  ipam_config {
    aux_address = {
      foo: "1.2.3.4"
   }
  }
}

Panic Output

I can't do that right now.
The config is massive and leaks a lot of important info that I would have to scrub out first.

The bug is easy to reproduce though.

Expected Behavior

Not crash.

Actual Behavior

Crash.

Steps to Reproduce

  1. terraform apply

Notes

I'm out of my depth here.

It feels to me, though, that this line is problematic:

auxAddressRaw := ipamConfigRaw["aux_address"].(map[string]interface{})

A single-value type assertion like this panics when the value is not of the asserted type, which would explain the crash.

Override entrypoint in docker_service resource

This issue was originally opened by @b4nst as hashicorp/terraform-provider-docker#261. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

Terraform v0.12.24

  • provider.docker v2.7.0

Affected Resource(s)

  • docker_service

Expected Behavior

The resource should accept an entrypoint option, as specified in the docker service create documentation.

Actual Behavior

The resource does not accept an entrypoint option.

Multiple networks in networks_advanced creates invalid plan

When expanding the plan for module.dns-lan-backup.docker_container.container
to include new values learned so far during apply, provider
"registry.terraform.io/terraform-providers/docker" produced an invalid new
value for .networks_advanced: block set length changed from 1 to 2.

^ This error happens while the networks are being created.
Re-applying (with the networks already created by the first run) will then work fine.

Inside a docker_container, I'm using:

  dynamic "networks_advanced" {
    for_each = local.container_networks
    content {
      name = networks_advanced.key
      ipv4_address = networks_advanced.value
    }
  }
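
For reference, local.container_networks is assumed here to map network names to IPv4 addresses, along these lines (hypothetical values):

locals {
  container_networks = {
    "macvlan_lan" = "192.168.1.10"
    "bridge_net"  = "172.20.0.10"
  }
}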

The bottom line is that this happens when I pass in multiple networks (macvlan + bridge).

@mavogel usually I figure my problems out and submit a patch but this one is out of my league.
Any idea where I should start with this?

terraform destroy does not respect stop_grace_period for docker swarm services running on DOCKER_HOST

This issue was originally opened by @SebastianFaulborn as hashicorp/terraform-provider-docker#313. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



Hi,

When creating a Docker swarm service whose container happens to run on the DOCKER_HOST (the host parameter of the provider), the container started as part of the service immediately receives a SIGKILL after terraform destroy, rather than stop_signal (optionally followed by SIGKILL after stop_grace_period if the container has not shut down by then). For containers created on worker nodes the code works as expected (and likely also for containers created on manager nodes that are not the DOCKER_HOST).

This is a show stopper for all database applications and for containers that embed a database (such as Nexus3); it can lead to catastrophic data loss, as we have learned the hard way.

Terraform Version

Terraform v0.13.5

Affected Resource(s)

  • docker_service

Terraform Configuration Files (test.tf)

variable "docker_host" {
  type    = string
}

variable "node" {
  type = string
}

terraform {
  required_providers {
    docker = {
      source = "terraform-providers/docker"
      version = "2.7.2"
    }
  }
}

provider "docker" {
  host = var.docker_host
}

resource "docker_service" "test" {
  name = "test"
  
  task_spec {
    container_spec {
      image = "ubuntu"
      
      stop_signal       = "SIGTERM"
      stop_grace_period = "30s"
      
      command = [ "/bin/bash" ]
      args    = [ "-c", "echo test; trap 'echo SIGTERM && exit 0' SIGTERM; while true; do sleep 1; done" ]
    }
    
    placement {
      constraints = [
        "node.role==${var.node}"
      ]
    }
  }
}

Debug Output

none

Panic Output

none

Expected Behavior

Upon shutdown, the container first receives SIGTERM:

xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0    | test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-worker-node-0    | SIGTERM

Actual Behavior

When the container runs on the DOCKER_HOST, it immediately receives SIGKILL:

xyz$ docker service logs -f test
test.1.z243xk5gqgk0@ci-jct-swarm-cluster-manager-node-0    | test

Steps to Reproduce

Create a docker swarm cluster with just 1 manager and 1 worker (this just makes it easy to get the test case working). Then:

  1. terraform init
  2. terraform apply -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
  3. in a 2nd console enter: DOCKER_HOST=tcp://<place here the DOCKER_HOST ip>:2375 docker service logs -f test
  4. back to original console enter:
    terraform destroy -var 'node=manager' -var 'docker_host=tcp://<place here the DOCKER_HOST ip>:2375'
  5. watch output on 2nd console

To run the working test case:
Do the same as before but replace -var 'node=manager' with -var 'node=worker'

Important Factoids

Looking at the source code (I am not a Go expert!) I found the following:
https://github.com/terraform-providers/terraform-provider-docker/blob/master/docker/resource_docker_service_funcs.go
Line 270ff
func deleteService(serviceID string, d *schema.ResourceData, client *client.Client)

Line 297 actually removes the service (and, I believe, it is the only line needed; everything further down is there for historic reasons, since Docker now does everything needed by itself):
if err := client.ServiceRemove(context.Background(), serviceID); err != nil {

Note: you can see in the docker daemon debug log of the DOCKER_HOST a line such as:
time="2020-11-18T18:05:10.871048408Z" level=debug msg="Calling DELETE /v1.40/services/so1zejuqgruzyrsz9c5bz1isn"
... which is the only REST API call made when running docker service rm so1zejuqgruzyrsz9c5bz1isn with the docker CLI.

Line 309 is supposed to wait until the container has been removed, but it actually always returns immediately:
exitCode, _ := client.ContainerWait(ctx, containerID, container.WaitConditionRemoved)

2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Found container ['running'] for destroying: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:45.960+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Deleting service: 'i1b30zzgweff3he63r35ky3i3'
2020-11-19T16:46:45.994+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:45 [INFO] Waiting for container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736' to exit: max 30s
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Container exited with code [0xc000094660]: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'
2020-11-19T16:46:46.027+0100 [DEBUG] plugin.terraform-provider-docker_v2.7.2_x4: 2020/11/19 16:46:46 [INFO] Removing container: '7b00d2424f16fa19c4cd6e39dcd148f1337f9c383c0a3373091aa7c6b2f11736'

Note: only 33 ms pass between waiting for the container and removing it.

Line 318 is supposed to remove the container (why? Docker does this by itself!).
if err := client.ContainerRemove(context.Background(), containerID, removeOpts); err != nil {

There are 2 problems here:

a) This line has the same effect as docker container rm --force <containerID>, which, as a result of --force, immediately sends a SIGKILL to the container. Due to the first problem (no waiting is happening), this occurs before Docker has had a chance to remove the service and send SIGTERM to all the containers; hence the containers on the manager node get killed.

time="2020-11-18T17:56:36.644210735Z" level=debug msg="Calling DELETE /v1.40/services/dll5o060pagmxr0sg9lasjit3"
time="2020-11-18T17:56:36.689439754Z" level=debug msg="Calling POST /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a/wait?condition=remo
ved"
time="2020-11-18T17:56:36.769538131Z" level=debug msg="Calling DELETE /v1.40/containers/b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a?force=1&v=1"
time="2020-11-18T17:56:36.769598227Z" level=debug msg="Sending kill signal 9 to container b00490988356851e84099901a2011264c22024f3bed977b1059fbe0591aa6b5a"

Note: have a look at the timestamps!

b) The code does not seem to anticipate that container commands (rather than service commands) have to be sent to the exact node that runs the container. This is why the problem only occurs on the DOCKER_HOST: the workers (or any other node; I haven't checked this) simply never receive the ContainerRemove command, and the DOCKER_HOST reports that the container is unknown to it.

Disclaimer: these are just my findings from looking hard at the source code; I wanted to share them with you in the hope that they are useful and maybe save some time.

References

none

Proposal: Introduce golangci-lint

Many lint errors are detected by golangci-lint, so I propose fixing them and introducing golangci-lint in CI.

$ golangci-lint run
docker/resource_docker_network.go:252:6: `suppressIfIPAMConfigWithIpv6Changes` is unused (deadcode)
func suppressIfIPAMConfigWithIpv6Changes() schema.SchemaDiffSuppressFunc {
     ^
docker/resource_docker_service_funcs.go:1411:2: `longestState` is unused (deadcode)
	longestState int
	^
docker/resource_docker_network_test.go:320:6: `testAccNetworkLabel` is unused (deadcode)
func testAccNetworkLabel(network *types.NetworkResource, name string, value string) resource.TestCheckFunc {
     ^
docker/resource_docker_volume_test.go:95:6: `testAccVolumeLabel` is unused (deadcode)
func testAccVolumeLabel(volume *types.Volume, name string, value string) resource.TestCheckFunc {
     ^
docker/data_source_docker_network.go:99:7: Error return value of `d.Set` is not checked (errcheck)
	d.Set("name", network.Name)
	     ^
docker/data_source_docker_network.go:100:7: Error return value of `d.Set` is not checked (errcheck)
	d.Set("scope", network.Scope)
	     ^
docker/data_source_docker_network.go:101:7: Error return value of `d.Set` is not checked (errcheck)
	d.Set("driver", network.Driver)
	     ^
docker/resource_docker_container_funcs.go:562:33: Error return value of `resourceDockerContainerDelete` is not checked (errcheck)
			resourceDockerContainerDelete(d, meta)
			                             ^
docker/resource_docker_container_funcs.go:571:32: Error return value of `resourceDockerContainerDelete` is not checked (errcheck)
		resourceDockerContainerDelete(d, meta)
		                             ^
docker/resource_docker_container_test.go:670:16: Error return value of `fbuf.ReadFrom` is not checked (errcheck)
		fbuf.ReadFrom(tr)
		             ^
docker/resource_docker_container_test.go:732:16: Error return value of `fbuf.ReadFrom` is not checked (errcheck)
		fbuf.ReadFrom(tr)
		             ^
docker/resource_docker_container_test.go:834:17: Error return value of `fbuf.ReadFrom` is not checked (errcheck)
			fbuf.ReadFrom(tr)
			             ^
docker/resource_docker_image_funcs.go:42:12: Error return value of `m.Display` is not checked (errcheck)
		m.Display(buf, false)
		         ^
docker/resource_docker_image_funcs.go:221:14: Error return value of `buf.ReadFrom` is not checked (errcheck)
	buf.ReadFrom(out)
	            ^
docker/resource_docker_image_test.go:209:18: Error return value of `ioutil.WriteFile` is not checked (errcheck)
	ioutil.WriteFile(dfPath, []byte(testDockerFileExample), 0o644)
	                ^
docker/resource_docker_registry_image_funcs.go:265:14: Error return value of `hasher.Write` is not checked (errcheck)
	hasher.Write(s)
	            ^
docker/resource_docker_registry_image_funcs.go:298:17: Error return value of `json.Unmarshal` is not checked (errcheck)
		json.Unmarshal(streamBytes, &errorMessage)
		              ^
docker/resource_docker_service_test.go:32:16: Error return value of `checkAttribute` is not checked (errcheck)
	checkAttribute(t, "Username", foundAuthConfig.Username, "myuser")
	              ^
docker/resource_docker_service_test.go:33:16: Error return value of `checkAttribute` is not checked (errcheck)
	checkAttribute(t, "Password", foundAuthConfig.Password, "mypass")
	              ^
docker/resource_docker_service_test.go:34:16: Error return value of `checkAttribute` is not checked (errcheck)
	checkAttribute(t, "Email", foundAuthConfig.Email, "")
	              ^
docker/data_source_docker_network.go:113:2: ineffectual assignment to `err` (ineffassign)
	err = d.Set("ipam_config", ipam)
	^
docker/resource_docker_image_funcs.go:129:3: ineffectual assignment to `imageName` (ineffassign)
		imageName = imageName + ":latest"
		^
docker/resource_docker_registry_image_funcs.go:180:22: ineffectual assignment to `err` (ineffassign)
	dockerBuildContext, err := os.Open(dockerContextTarPath)
	                    ^
docker/resource_docker_registry_image_funcs.go:403:15: ineffectual assignment to `err` (ineffassign)
			oauthResp, err := client.Do(req)
			           ^
docker/resource_docker_service_funcs.go:1332:2: ineffectual assignment to `auth` (ineffassign)
	auth := types.AuthConfig{}
	^
docker/resource_docker_container_migrate.go:100:46: S1019: should use make(map[string]interface{}) instead (gosimple)
		outputPort := make(map[string]interface{}, 0)
		                                           ^
docker/resource_docker_network_funcs.go:209:29: S1019: should use make([]interface{}, len(in)) instead (gosimple)
	out := make([]interface{}, len(in), len(in))
	                           ^
docker/structures_service.go:59:29: S1019: should use make([]interface{}, 0) instead (gosimple)
	out := make([]interface{}, 0, 0)
	                           ^
docker/structures_service.go:72:29: S1019: should use make([]interface{}, 0) instead (gosimple)
	out := make([]interface{}, 0, 0)
	                           ^
docker/structures_service.go:89:29: S1019: should use make([]interface{}, 0) instead (gosimple)
	out := make([]interface{}, 0, 0)
	                           ^
docker/structures_service.go:181:29: S1019: should use make([]interface{}, 1) instead (gosimple)
	out := make([]interface{}, 1, 1)
	                           ^
docker/structures_service.go:185:35: S1019: should use make([]interface{}, 1) instead (gosimple)
		credSpec := make([]interface{}, 1, 1)
		                                ^
docker/structures_service.go:193:41: S1019: should use make([]interface{}, 1) instead (gosimple)
		seLinuxContext := make([]interface{}, 1, 1)
		                                      ^
docker/structures_service.go:208:29: S1019: should use make([]interface{}, len(in)) instead (gosimple)
	out := make([]interface{}, len(in), len(in))
	                           ^
docker/structures_service.go:217:52: S1019: should use make(map[string]interface{}) instead (gosimple)
			bindOptionsItem := make(map[string]interface{}, 0)
			                                                ^
docker/structures_service.go:229:54: S1019: should use make(map[string]interface{}) instead (gosimple)
			volumeOptionsItem := make(map[string]interface{}, 0)
			                                                  ^
docker/structures_service.go:283:29: S1019: should use make([]interface{}, len(in)) instead (gosimple)
	out := make([]interface{}, len(in), len(in))
	                           ^
docker/config.go:41:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
	if caPEMCert == nil || len(caPEMCert) == 0 {
	   ^
docker/structures_service.go:397:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
	if in != nil && len(in) > 0 {
	   ^
docker/structures_service.go:454:5: S1009: should omit nil check; len() for nil slices is defined as zero (gosimple)
	if in == nil || len(in) == 0 {
	   ^
docker/structures_service.go:558:5: S1009: should omit nil check; len() for nil maps is defined as zero (gosimple)
	if in == nil || len(in) == 0 {
	   ^
docker/validators.go:52:10: S1034: assigning the result of this type assertion to a variable (switch v := v.(type)) could eliminate type assertions in switch cases (gosimple)
		switch v.(type) {
		       ^
docker/resource_docker_service_funcs.go:1138:4: SA4004: the surrounding loop is unconditionally terminated (staticcheck)
			return &logDriver, nil
			^
docker/structures_service.go:28:5: SA4003: every value of type uint64 is >= 0 (staticcheck)
	if in.ForceUpdate >= 0 {
	   ^
docker/resource_docker_registry_image_funcs.go:213:2: SA4006: this value of `err` is never used (staticcheck)
	err = filepath.Walk(buildContext, func(file string, info os.FileInfo, err error) error {
	^
docker/resource_docker_registry_image_funcs.go:181:2: SA5001: should check returned error before deferring dockerBuildContext.Close() (staticcheck)
	defer dockerBuildContext.Close()
	^
docker/resource_docker_network_test.go:262:6: func `testAccNetworkIPv6` is unused (unused)

Enabled rules

At first, we can start with the default rules.

Not defining "working_dir" triggers container replacement on every run

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Terraform v0.13.5
+ provider registry.terraform.io/digitalocean/digitalocean v2.2.0
+ provider registry.terraform.io/kreuzwerker/docker v2.8.0

Affected Resource(s)

docker_container

Terraform Configuration Files

data "docker_registry_image" "container" {
  name = "${var.registry_address}/xxxxxxx/container"
}

resource "docker_image" "container" {
  name          = "${data.docker_registry_image.container.name}@${data.docker_registry_image.container.sha256_digest}"
  keep_locally  = true
  pull_triggers = [data.docker_registry_image.container.sha256_digest]
}

resource "docker_container" "container" {
  name        = "container"
  image       = docker_image.container.latest
}

Debug Output

docker_container.container must be replaced
-/+ resource "docker_container" "container" {
        attach            = false
      + bridge            = (known after apply)
      ~ command           = [
          - "npm",
          - "run",
          - "--prefix",
          - "./packages/container/backend",
          - "start",
        ] -> (known after apply)
      + container_logs    = (known after apply)
      - cpu_shares        = 0 -> null
      - dns               = [] -> null
      - dns_opts          = [] -> null
      - dns_search        = [] -> null
      ~ entrypoint        = [
          - "docker-entrypoint.sh",
        ] -> (known after apply)
      ~ env               = [] -> (known after apply)
      + exit_code         = (known after apply)
      ~ gateway           = "172.18.0.1" -> (known after apply)
      - group_add         = [] -> null
      ~ hostname          = "81d8ace62633" -> (known after apply)
      ~ id                = "81d8ace62633e78d620321b7168d46059af6011864b34958eab2614c3d89e0c5" -> (known after apply)
        image             = "sha256:b7653b3e02763582647ea37bc6220db767478e8b484a47702114d8d8c256b560"
      ~ ip_address        = "172.18.0.3" -> (known after apply)
      ~ ip_prefix_length  = 16 -> (known after apply)
      ~ ipc_mode          = "private" -> (known after apply)
      - links             = [] -> null
        log_driver        = "json-file"
        log_opts          = {
            "max-file" = "10"
            "max-size" = "100m"
        }
        logs              = false
      - max_retry_count   = 0 -> null
      - memory            = 0 -> null
      - memory_swap       = 0 -> null
        must_run          = true
        name              = "container"
      ~ network_data      = [
          - {
              - gateway                   = "172.18.0.1"
              - global_ipv6_address       = ""
              - global_ipv6_prefix_length = 0
              - ip_address                = "172.18.0.3"
              - ip_prefix_length          = 16
              - ipv6_gateway              = ""
              - network_name              = "network"
            },
        ] -> (known after apply)
      - network_mode      = "default" -> null
      - privileged        = false -> null
      - publish_all_ports = false -> null
        read_only         = false
        remove_volumes    = true
        restart           = "no"
        rm                = false
      ~ shm_size          = 64 -> (known after apply)
        start             = true
      - sysctls           = {} -> null
      - tmpfs             = {} -> null
      - working_dir       = "/workdir" -> null # forces replacement
   }

Expected Behavior

The fact that I am not defining "working_dir" in "docker_container" should not always trigger a forced replacement.

Actual Behavior

Every time I run terraform apply, the container gets replaced because I did not define "working_dir" in "docker_container".
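
A possible stopgap (a sketch, not a fix for the underlying diff logic): declare working_dir explicitly with the value baked into the image, so the provider no longer sees a change from "/workdir" to null:

resource "docker_container" "container" {
  name  = "container"
  image = docker_image.container.latest

  # mirrors the image's built-in working directory, so there is
  # nothing to diff against null
  working_dir = "/workdir"
}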

Support NVIDIA GPUs

This issue was originally opened by @captn3m0 as hashicorp/terraform-provider-docker#257. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

v0.12.23

Affected Resource(s)

Please list the resources as a list, for example:

  • docker_container

Terraform Configuration Files

resource "docker_container" "a" {
  name = ""
  gpus = "all"
}

Debug Output

docker_container.a: : invalid or unknown key: gpus

Expected Behavior

The --gpus flag should be supported.

Actual Behavior

The --gpus flag is not supported.

Steps to Reproduce

Try to run a container with NVIDIA Docker support, which is now natively available in Docker 19.03.

Important Factoids

  • NVIDIA has a FAQ on the feature here
  • The docker Engine API implementation sends a DeviceRequest to the update/create container endpoints. See this PR for details.
    • POST /containers/create now accepts DeviceRequests as part of HostConfig.
      Can be used to set Nvidia GPUs

Windows: docker_container host_path must be an absolute path

This issue was originally opened by @Morodar as hashicorp/terraform-provider-docker#277. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

v0.12.28 on Windows 10 Pro Build 18363

Affected Resource(s)

  • docker_container

Terraform Configuration Files

provider "docker" {
  host    = "tcp://127.0.0.1:2375/"
  version = "2.7.1"
}

resource "docker_image" "nginx_alpine" {
  name = "nginx:1.19.0-alpine"
}

resource "docker_container" "reverse_proxy" {
  image = docker_image.nginx_alpine.latest
  name  = "reverse_proxy"
  ports {
    internal = 80
    external = var.port
  }
  volumes {
    container_path = "/etc/nginx/conf.d"
    host_path      = abspath(path.root)
    read_only      = true
  }
}

Expected Behavior

What should have happened?

  # module.docker_reverse_proxy_setup.docker_container.reverse_proxy will be created
  + resource "docker_container" "reverse_proxy" {

     [...](omitted)

      + volumes {
          + container_path = "/etc/nginx/conf.d"
          + host_path      = "C:/absolute/root/path/"
          + read_only      = true
        }
    }

Actual Behavior

Error: "volumes.0.host_path" must be an absolute path

  on modules\docker-nginx\resources.tf line 7, in resource "docker_container" "reverse_proxy":
   7: resource "docker_container" "reverse_proxy" {

Steps to Reproduce

  1. terraform apply

Important Factoids

On Windows:

host_path = "G:\\ABC"          # --> absolute path
host_path = "/G/ABC"           # --> absolute path
host_path = abspath(path.root) # --> not an absolute path
host_path = path.root          # --> not an absolute path
host_path = "G:/ABC"           # --> not an absolute path

abspath(path.root) returns C:/path/to/root, which is an absolute path for Windows and for Terraform, but not for the docker_container resource.

Am I missing something, or is this a bug?
I'm trying to write Terraform code that runs on both Windows and Linux, but I fail to handle absolute paths on Windows correctly.
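
Until the validator understands drive-letter paths, one workaround sketch (assuming the regex form of Terraform's replace() function, and relying on the observation above that the /G/ABC form passes validation) is to rewrite the drive letter:

locals {
  root_abs  = abspath(path.root)                               # e.g. "C:/absolute/root/path"
  host_path = replace(local.root_abs, "/^([A-Za-z]):/", "/$1") # -> "/C/absolute/root/path"
}

The volumes block would then use host_path = local.host_path.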

References

Crash Import service docker module with hosts include

This issue was originally opened by @martinezhenry as hashicorp/terraform-provider-docker#304. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.



Hi, when I execute a terraform import command for a Docker service using a docker_service resource, the provider throws a crash.

module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]: Importing from ID "o0i2rg24dflqequp6pz3pivm9"...
module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]: Import prepared!
  Prepared docker_service for import
module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]: Refreshing state... [id=o0i2rg24dflqequp6pz3pivm9]

Error: rpc error: code = Unavailable desc = transport is closing

Terraform Version

(The Terraform version was provided as an image in the original issue.)

Affected Resource(s)

  • docker_service

Terraform Configuration Files

data "docker_registry_image" "reg_image_replicated" {
  for_each = {for image in local.images:  replace(image, ":", "") => image}
  name = each.value


}


resource "docker_image" "image_replicated" {
  for_each = {for data in data.docker_registry_image.reg_image_replicated:  replace(data.name, ":", "") => data}
  name = each.value.name
  pull_triggers = [
    each.value.sha256_digest]

}


resource "docker_service" "create_replicated_service" {

  for_each = {for stack in var.replicated_services:  stack.name => stack}
  name = each.value.name

  task_spec {
    container_spec {
      image = join(":", [
        each.value.image,
        each.value.version])
      hostname = each.value.hostname


      dynamic "mounts" {
        for_each = [for mount in each.value.mounts: {
          source = mount.source
          target = mount.target
          type = mount.type
          external = mount.external
        }]

        content {
          source = mounts.value.external ? mounts.value.source : join(".", [each.key, mounts.value.source])
          target = mounts.value.target
          type = mounts.value.type

        }
      }


      dynamic "hosts" {
        for_each = [for host in each.value.hosts: {
          host = host.host
          ip = host.ip

        }]

        content {
          host = hosts.value.host
          ip = hosts.value.ip
        }
      }

      env = {for environment in each.value.environments:  environment.name => environment.value }

    }

    resources {
      limits {
        #nano_cpus = 1000000
        memory_bytes = each.value.memory

      }
    }


    restart_policy = var.restart_policy

  }

  mode {
    replicated {
      replicas = each.value.replicas_nr
    }
  }


  endpoint_spec {

    dynamic "ports" {
      for_each = [for port in each.value.ports: {
        published = port.published
        target = port.target
      }]

      content {
        target_port = ports.value.target
        published_port = ports.value.published

      }
    }

  }

}

resource "docker_service" "create_global_service" {

  for_each = {for stack in var.global_services:  stack.name => stack}
  name = each.value.name

  task_spec {
    container_spec {
      image = join(":", [
        each.value.image,
        each.value.version])
      hostname = each.value.hostname


      dynamic "mounts" {
        for_each = [for mount in each.value.mounts: {
          source = mount.source
          target = mount.target
          type = mount.type
          external = mount.external
        }]

        content {
          source = mounts.value.external ? mounts.value.source : join(".", [each.key, mounts.value.source])
          target = mounts.value.target
          type = mounts.value.type

        }
      }

      dynamic "hosts" {
        for_each = [for host in each.value.hosts: {
          host = host.host
          ip = host.ip

        }]

        content {
          host = hosts.value.host
          ip = hosts.value.ip
        }
      }

      env = {for environment in each.value.environments:  environment.name => environment.value }

    }

    networks = [for network in each.value.networks:  network.name]

    resources {
      limits {
        #nano_cpus = 1000000
        memory_bytes = each.value.memory

      }
    }


  }

  mode {
    global = each.value.global
  }

  endpoint_spec {

    dynamic "ports" {
      for_each = [for port in each.value.ports: {
        published = port.published
        target = port.target
      }]

      content {
        target_port = ports.value.target
        published_port = ports.value.published

      }
    }

  }

}

Debug Output

https://gist.github.com/martinezhenry/2584b772cf25abcd4388c83e3ece43b4

Panic Output

https://gist.github.com/martinezhenry/2584b772cf25abcd4388c83e3ece43b4

Expected Behavior

Import the Docker service that is currently running.

Actual Behavior

Terraform crashes and the import fails.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform import 'module.services.module.novotrans.docker_service.create_replicated_service["jenkins"]' $(docker service inspect --format="{{.ID}}" jenkins)

Important Factoids

N/A

References

N/A

Unable to find or pull image with digest

This issue was originally opened by @viceice as hashicorp/terraform-provider-docker#282. It was migrated here as a result of the community provider takeover from @kreuzwerker. The original body of the issue is below.


Terraform Version

Terraform v0.12.29
+ provider.docker v2.7.1
+ provider.helm v1.2.4
+ provider.kubernetes v1.11.4

Affected Resource(s)

Please list the resources as a list, for example:

  • docker_image
  • docker_container

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

terraform {
  required_providers {
    docker = "2.7.1"
  }
}

provider "docker" {
  host = "ssh://root@docker-worker2"
}

resource "docker_image" "nginx" {
  name     = "nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4"
}

resource "docker_container" "nginx" {
  name     = "nginx-example"
  image    = "nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4"
  restart  = "unless-stopped"
  start    = true
}

Debug Output

https://gist.github.com/viceice/b1a3b6a462988cf53c49a247b4d090da

Expected Behavior

Pull the Docker image and create the container.

Actual Behavior

Fails with the following errors:

  • Unable to read Docker image into resource: Unable to find or pull image nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4
  • Unable to create container with image nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4: Unable to find or pull image nginx:1.18.0-alpine@sha256:29dc24ed982665eb88598e0129e4ec88c2049fafc63125a4a640dd67529dc6d4
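
A pattern that may sidestep pulling by digest directly (a sketch, assuming tag-based pulls work over the same SSH connection): resolve the digest with the docker_registry_image data source, pull by tag, and use the digest only as a pull trigger:

data "docker_registry_image" "nginx" {
  name = "nginx:1.18.0-alpine"
}

resource "docker_image" "nginx" {
  name          = data.docker_registry_image.nginx.name
  pull_triggers = [data.docker_registry_image.nginx.sha256_digest]
}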

Steps to Reproduce

  1. terraform apply -auto-approve -parallelism=1

Important Factoids

Simple Docker install on Ubuntu 18.04 with btrfs:

Server:
 Containers: 13
  Running: 0
  Paused: 0
  Stopped: 13
 Images: 82
 Server Version: 19.03.12
 Storage Driver: btrfs
  Build Version: Btrfs v4.15.1
  Library Version: 102
 Logging Driver: journald
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.3.0-1032-azure
 Operating System: Ubuntu 18.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.793GiB
 Name: XXXX
 ID: XXXXX
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
