
terraform-provider-vsphere's Introduction


Terraform Provider for VMware vSphere


The Terraform Provider for VMware vSphere is a plugin for Terraform that allows you to interact with VMware vSphere, notably vCenter Server and ESXi. This provider can be used to manage a VMware vSphere environment, including virtual machines, host and cluster management, inventory, networking, storage, datastores, content libraries, and more.


Requirements

Using the Provider

The Terraform Provider for VMware vSphere is an official provider. Official providers are maintained by the Terraform team at HashiCorp and are listed on the Terraform Registry.

To use a released version of the Terraform provider in your environment, run terraform init and Terraform will automatically install the provider from the Terraform Registry.
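For example, a minimal configuration that declares the provider looks like the following; running terraform init in the same directory downloads the matching release from the Registry. The version constraint and credentials below are illustrative placeholders:

terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = ">= 2.0.0"
    }
  }
}

provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  # Only needed for environments with self-signed certificates.
  allow_unverified_ssl = true
}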

Unless you are contributing to the provider or require a pre-release bugfix or feature, use an officially released version of the provider.

See Installing the Terraform Provider for VMware vSphere for additional instructions on automated and manual installation methods and how to control the provider version.

For either installation method, documentation about the provider configuration, resources, and data sources can be found on the Terraform Registry.

Upgrading the Provider

The provider does not upgrade automatically. After each new release, you can run the following command to upgrade the provider:

terraform init -upgrade

Contributing

The Terraform Provider for VMware vSphere is the work of many contributors and the project team appreciates your help!

If you discover a bug or would like to suggest an enhancement, submit an issue. Once submitted, your issue will follow the lifecycle process.

If you would like to submit a pull request, please read the contribution guidelines to get started. For enhancement or feature contributions, please open an issue first to discuss the change.

Learn more in the Frequently Asked Questions.

License

The Terraform Provider for VMware vSphere is available under the Mozilla Public License, version 2.0.

terraform-provider-vsphere's People

Contributors

aareet, aheeren, appilon, bill-rich, chrislovecnm, dependabot[bot], dkalleg, furukawataka02, girishramnani-crest, grubernaut, hashicorp-copywrite[bot], ibrandyjackson, idmsubs, jen20, jorgenunez, koikonom, mjrider, pablo-ruth, phinze, radeksimko, rmbrad, rowanjacobs, stack72, sumitagrawal007, tenthirtyam, thetuxkeeper, vancluever, vasilsatanasov, waquidvp, zxinyu08

terraform-provider-vsphere's Issues

vSphere provider: sparse VMDK is not completely cleaned up with vsphere_file

This issue was originally opened by @sputnik13 as hashicorp/terraform#8058. It was migrated here as part of the provider split. The original body of the issue is below.


When a sparse VMDK is uploaded with vsphere_file and used with a vsphere_virtual_machine, additional files get generated because the VMDK is sparse. If I set keep_on_remove = "true" on the VM so that terraform destroy can delete the vsphere_file, the additional files generated for the sparse VMDK are not cleaned up properly. This results in subsequent runs of terraform apply failing.

Sample terraform template below...

resource "vsphere_file" "k8s-master-bootdisk" {
    datacenter = "${var.datacenter}"
    datastore = "${var.datastore}"
    source_file = "${var.linux_vmdk_source}"
    destination_file = "${var.prefix}/master-bootdisk.vmdk"
}

resource "vsphere_virtual_machine" "k8s-master" {
    name = "${var.prefix}-master"
    #folder = "${vsphere_folder.k8s_folder.path}"
    datacenter = "${var.datacenter}"
    cluster = "${var.cluster}"

    connection {
        user = "ubuntu"
        key_file = "${var.key_file}"
    }

    network_interface {
        label = "${var.network}"
    }

    cdrom {
        datastore = "${var.datastore}"
        path = "${vsphere_file.cidata.destination_file}"
    }

    disk {
        controller_type = "ide"
        datastore = "${var.datastore}"
        vmdk = "${vsphere_file.k8s-master-bootdisk.destination_file}"
        bootable = "true"
        keep_on_remove = "true"
    }
}

provider/vsphere: network interface order changes

This issue was originally opened by @thetuxkeeper as hashicorp/terraform#6520. It was migrated here as part of the provider split. The original body of the issue is below.


Config:

  network_interface {
    label = "cld_tst1_access"
    ipv4_address = "10.30.8.240"
    ipv4_prefix_length = 23
    ipv4_gateway = "10.30.8.1"
  }

  network_interface {
    label = "cld_tst1_storage"
  }

Results in (removed ipv6 parts):

    network_interface.0.ip_address:         "" => "<computed>"
    network_interface.0.ipv4_address:       "10.30.0.36" => "10.30.8.240"
    network_interface.0.ipv4_gateway:       "" => "10.30.8.1" (forces new resource)
    network_interface.0.ipv4_prefix_length: "23" => "23"
    network_interface.0.label:              "cld_tst1_storage" => "cld_tst1_access" (forces new resource)
    network_interface.0.subnet_mask:        "" => "<computed>"
    network_interface.1.ip_address:         "" => "<computed>"
    network_interface.1.ipv4_address:       "10.30.8.240" => "<computed>"
    network_interface.1.ipv4_gateway:       "10.30.8.1" => ""
    network_interface.1.ipv4_prefix_length: "23" => "<computed>"
    network_interface.1.label:              "cld_tst1_access" => "cld_tst1_storage" (forces new resource)
    network_interface.1.subnet_mask:        "" => "<computed>"

As you can see, interface 0 in the config becomes interface 1 in VMware (confirmed via the web interface of the vSphere API: `<vsphere-server>/mob/?moid=vm-&doPath=guest.net`).

Perhaps a different approach to indexing the interfaces would be better (deviceConfigId, perhaps)?

vSphere network label inconsistency with VDS in vCenter folder

This issue was originally opened by @qvallance-ctc as hashicorp/terraform#8584. It was migrated here as part of the provider split. The original body of the issue is below.


Running into an issue with networking in the vsphere provider:

Terraform cannot locate the network interface label when the virtual distributed switch (VDS) is located in a folder, unless the folder is added to the label path (e.g., 'folder/label'). Once the VM is deployed, if another Terraform plan/apply is run against the resource, it identifies the network interface label as not containing the folder, which forces the resource to be recreated or the configuration to be changed.

Terraform Version

Terraform v0.7.2

vSphere Version

vSphere 6.0

Affected Resource(s)

  • vsphere_virtual_machine

Error Example

When the config includes just the label it cannot find the label:

  network_interface {
      label = "label"
      ipv4_address = "10.0.0.10"
      ipv4_prefix_length = "24"
      ipv4_gateway = "10.0.0.1"
  }
terraform apply
[...]
Error applying plan:

1 error(s) occurred:

* vsphere_virtual_machine.rhl7: network '*label' not found

When the label includes folder, it deploys fine but it does not match the label returned when planning/applying:

  network_interface {
      label = "folder/label"
      ipv4_address = "10.0.0.10"
      ipv4_prefix_length = "24"
      ipv4_gateway = "10.0.0.1"
  }
terraform plan
[...]
    network_interface.0.label:              "label" => "folder/label" (forces new resource)
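For reference, current versions of the provider avoid the folder-qualified label problem by looking the portgroup up through data sources and disambiguating by the switch UUID. A rough sketch under that assumption (all names below are placeholders):

data "vsphere_datacenter" "dc" {
  name = "dc-01"
}

data "vsphere_distributed_virtual_switch" "dvs" {
  name          = "folder/dvs-01"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "net" {
  name          = "label"
  datacenter_id = data.vsphere_datacenter.dc.id
  # Disambiguates portgroups with the same name across switches and folders.
  distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.dvs.id
}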

vsphere: Multiple cores per vcpu

This issue was originally opened by @datroup as hashicorp/terraform#8446. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

I cannot find this in any of the documentation, but when creating VMs in vSphere with Terraform we're unable to set the number of cores per vCPU. Is this possible, or can this be a feature request? I know from a technical point of view it doesn't matter, but we have a requirement for this to be set.

D
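For reference, later versions of the vsphere_virtual_machine resource do expose this. A fragment showing only the relevant arguments (values are placeholders, other required arguments omitted):

resource "vsphere_virtual_machine" "example" {
  # ...
  num_cpus             = 4
  num_cores_per_socket = 2  # 4 vCPUs presented to the guest as 2 sockets with 2 cores each
  # ...
}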

provider/vSphere: VM resource vs. virtual_disk resource - Thick Eager Zeroed disk naming convention

This issue was originally opened by @dkalleg as hashicorp/terraform#8141. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

The latest vSphere vm and virtual_disk resources reference the disk type of 'Thick eager zeroed' with different naming conventions. The vm resource uses "eager_zeroed" while the disk resource uses "eagerZeroedThick". This can cause a headache for systems using both resources and I believe it should be standardized.

vsphere additional disk datastore selection

This issue was originally opened by @leonkyneur as hashicorp/terraform#3870. It was migrated here as part of the provider split. The original body of the issue is below.


I'm trying to create a machine in vSphere. So far everything works, but I then went to add additional disks (VMDKs) to the machine and set a specific datastore to target for the new disk.

The result was that the new disk was created on the first assigned datastore (where the template was deployed).

e.g.

resource "vsphere_virtual_machine" "oss-mailstore02" {
   name = "mailstore02"
   datacenter = "DSS"
   cluster = "ALT/Apps"
   disk {
      datastore = "datastore-01"
      template = "CentOS 7.1-x86_64 - Min"
   }
   disk {
       datastore = "datastore-02"
       size = 20
   }
   vcpu = 2
   time_zone = "Australia/Sydney"
   memory = 8196
   dns_servers = ["8.8.8.8","8.8.4.4"]
   dns_suffixes = ["mydomain.com"]
   domain = "mydomain.com"
   network_interface {
     label = "serverfarm"
     ip_address = "10.99.1.2"
     subnet_mask = "255.255.224.0"  
   }
}

The result is that the disk I targeted at datastore-02 is created on datastore-01; nothing in plan/apply hints that this will happen.

vsphere_virtual_machine.oss-mailstore02
cluster:                         "" => "ALT/Apps"
datacenter:                      "" => "DSS"
disk.#:                          "" => "3"
disk.0.datastore:                "" => "datastore-01"
disk.0.template:                 "" => "Templates/CentOS 7.1-x86_64 - Min"
disk.1.datastore:                "" => "datastore-02"
disk.1.size:                     "" => "20"
dns_servers.#:                   "" => "2"
dns_servers.0:                   "" => "8.8.8.8"
dns_servers.1:                   "" => "8.8.4.4"
dns_suffixes.#:                  "" => "1"
dns_suffixes.0:                  "" => "mydomain.com"
domain:                          "" => "mydomain.com"
gateway:                         "" => "203.24.100.1"
memory:                          "" => "8196"
name:                            "" => "mailstore02"
network_interface.#:             "" => "1"
network_interface.0.ip_address:  "" => "10.99.1.2"
network_interface.0.label:       "" => "serverfarm"
network_interface.0.subnet_mask: "" => "255.255.224.0"
time_zone:                       "" => "Australia/Sydney"

using vsphere provider from 0.6.6:
$ terraform version
Terraform v0.6.6

Also noted: when I just wanted to add a new disk, it destroyed the entire machine and re-created it with the disk; the desired behavior would be to simply add the disk.

provider/vSphere: Stuck in a retry loop when destroying machines

This issue was originally opened by @kristinn as hashicorp/terraform#7615. It was migrated here as part of the provider split. The original body of the issue is below.


I built a system at work that utilizes Terraform a lot. Therefore I found a version that worked for me and patched it with my patches (some of them have been merged into upstream, others are still waiting as pull requests).

However, now my system needs Azure support so I was going to use the Azure provider. For that I needed to update the Terraform version I'm using.

However, after doing that, Terraform gets stuck in a retry loop when trying to destroy virtual machines that are running in vmware (using the vsphere provider), machines that are running our custom made appliance (without vmware tools).

This is a typical debug output (TF_LOG=trace) when this happens:

vsphere_virtual_machine.ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067: Refreshing state... (ID: ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067)
2016/07/12 17:49:17 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/12 17:49:17 [DEBUG] virtual machine resource data: &schema.ResourceData{schema:map[string]*schema.Schema{"disk":(*schema.Schema)(0xc8203ec380), "cdrom":(*schema.Schema)(0xc8203ec460), "cluster":(*schema.Schema)(0xc8203e96c0), "linked_clone":(*schema.Schema)(0xc8203e9880), "dns_suffixes":(*schema.Schema)(0xc8203e9c00), "uuid":(*schema.Schema)(0xc8203ec000), "network_interface":(*schema.Schema)(0xc8203ec2a0), "name":(*schema.Schema)(0xc8203e9180), "datacenter":(*schema.Schema)(0xc8203e95e0), "resource_pool":(*schema.Schema)(0xc8203e97a0), "custom_configuration_parameters":(*schema.Schema)(0xc8203ec0e0), "skip_customization":(*schema.Schema)(0xc8203e9dc0), "windows_opt_config":(*schema.Schema)(0xc8203ec1c0), "folder":(*schema.Schema)(0xc8203e9260), "vcpu":(*schema.Schema)(0xc8203e9340), "memory_reservation":(*schema.Schema)(0xc8203e9500), "dns_servers":(*schema.Schema)(0xc8203e9ce0), "enable_disk_uuid":(*schema.Schema)(0xc8203e9ea0), "memory":(*schema.Schema)(0xc8203e9420), "gateway":(*schema.Schema)(0xc8203e9960), "domain":(*schema.Schema)(0xc8203e9a40), "time_zone":(*schema.Schema)(0xc8203e9b20)}, config:(*terraform.ResourceConfig)(nil), state:(*terraform.InstanceState)(0xc82045ed00), diff:(*terraform.InstanceDiff)(nil), meta:map[string]string(nil), multiReader:(*schema.MultiLevelFieldReader)(nil), setWriter:(*schema.MapFieldWriter)(nil), newState:(*terraform.InstanceState)(nil), partial:false, partialMap:map[string]struct {}(nil), once:sync.Once{m:sync.Mutex{state:0, sema:0x0}, done:0x0}, isNew:false}
2016/07/12 17:49:22 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067
2016/07/12 17:49:27 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067
2016/07/12 17:49:32 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067
2016/07/12 17:49:37 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067
2016/07/12 17:49:42 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067

This is an issue with both the 0.6.X (not sure when it started) and 0.7 Terraform branches.

This is the Terraform config file in question. The files are always much more complex, but I simplified it a lot while trying to track down the cause of this bug.

{
    "provider": {
        "vsphere": {
            "user": "user@vsphere",
            "password": "password",
            "vsphere_server": "172.17.90.6",
            "allow_unverified_ssl": true
        }
    },
    "resource": {
        "vsphere_virtual_machine": {
            "ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067": {
                "name": "ctrl-5d9a70e8-48b2-4733-580e-aa5fc648c067",
                "vcpu": 3,
                "memory": 10240,
                "datacenter": "CZ-DevOps",
                "gateway": "172.17.250.254",
                "dns_servers": [
                    "172.17.93.10"
                ],
                "domain": "devops",
                "network_interface": [
                    {
                        "label": "DevOps 250",
                        "ipv4_address": "172.17.250.1",
                        "ipv4_prefix_length": 24,
                        "ipv4_gateway": "172.17.250.254"
                    }
                ],
                "disk": [
                    {
                        "clone": {
                            "source": "envy-template",
                            "linked": true
                        },
                        "datastore": "DX90:SAS:R5:V01:D0-23"
                    },
                    {
                        "name": "osdisk",
                        "datastore": "DX90:SAS:R5:V01:D0-23",
                        "size": 30,
                        "type": "thin"
                    }
                ]
            }
        }
    }
}

I use json since it's programmatically generated.

The clone part of the disk is a patch from me that is still waiting to be merged - so you will need to cherry-pick the commit in this pull request and build Terraform to be able to use this config file hashicorp/terraform#7575.

I finally figured out why Terraform gets stuck in that loop while trying to destroy the virtual machine. The reason is that our appliance doesn't have VMware Tools installed. It gets stuck on the refresh state step, probably because vSphere isn't able to pull all the required info.

The vSphere version I'm using is 5.5 (not the newest one). It has worked very well so far.

This is the custom made version of Terraform that I have been using successfully since February this year (an old version with the newest commit from 3rd of February 2016 plus few pull requests on top of it) https://github.com/kristinn/terraform/commits/custom-envy-build2

Destroying virtual machines using the vSphere provider works fine with this version.

If I spawn up a vm using Terraform that has vmware tools installed, and then I stop the vmware tools service and instruct Terraform to destroy the vm, Terraform will get stuck in that retry loop. However, when I start the vmware tools service again (while Terraform destroy is still stuck in the loop), Terraform kicks in again and finishes the task of destroying the vm.

Vsphere datastore cluster support

This issue was originally opened by @hugoboos as hashicorp/terraform#3721. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

It looks like the vSphere provider does not have support for datastore clusters.
When I provide a datastore cluster name as datastore under the disk section, Terraform reports back that it can't find the datastore.

With govc from https://github.com/vmware/govmomi I can list the datastores with the command govc ls.

Thanks!
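For reference, current versions of the provider support datastore clusters through a dedicated data source and the datastore_cluster_id argument on the virtual machine resource. A minimal sketch (names are placeholders, other required arguments omitted):

data "vsphere_datastore_cluster" "sdrs" {
  name          = "datastore-cluster-01"
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_virtual_machine" "example" {
  # ...
  datastore_cluster_id = data.vsphere_datastore_cluster.sdrs.id
}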

terraform 0.6.16 - Vmware Vsphere Provider - domain saved as "vsphere.local" in some network configurations

This issue was originally opened by @totojack as hashicorp/terraform#6658. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

terraform 0.6.16

Affected Resource(s)

  • vsphere_virtual_machine

Terraform Configuration Files

variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_server" {}
variable "vsphere_cluster" {}
variable "vsphere_datacenter" {}
variable "vsphere_domain" {}
variable "vsphere_dns" {}
variable "vsphere_timezone" {}
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
allow_unverified_ssl = true
}
resource "vsphere_virtual_machine" "web" {
name = "terraform-web"
folder = "${vsphere_folder.testfolder.path}"
vcpu = 2
memory = 2048
datacenter = "${var.vsphere_datacenter}"
cluster = "${var.vsphere_cluster}"
domain = "my.custom.domain.com"
dns_servers = ["${var.vsphere_dns}"]
#resource_pool = ""
time_zone = "${var.vsphere_timezone}"

network_interface {
label = "virtualwire_Template"
ipv4_address = "172.16.1.19"
ipv4_prefix_length = "28"
ipv4_gateway = "172.16.1.17"
}

disk {
template = "TEMPLATE/Centos7_tmpl_20160422_01"
datastore = "VPLEX-SAS"
}
disk {
size = "20"
datastore = "VPLEX-SAS"
}
disk {
size = "10"
datastore = "VPLEX-SAS"
}
}

Debug Output

https://gist.github.com/totojack/4db892bf6d9b03b3b982401fedd8c2f6

Expected Behavior

domain value saved in files:

  • /etc/resolv.conf , with line "search my.custom.domain.com"
  • /etc/sysconfig/network-scripts/ifcfg-eth0 , with line "domain my.custom.domain.com"

Actual Behavior

domain value saved as default value 'vsphere.local' in files:

  • /etc/resolv.conf, with line "search vsphere.local"
  • /etc/sysconfig/network-scripts/ifcfg-eth0 , with line "domain vsphere.local"

Steps to Reproduce

add gateway to configuration
terraform apply

Important Factoids

I'm testing with a Centos 7 VM on vsphere 6.

vSphere Provider Roadmap Outline

This issue was originally opened by @chrislovecnm as hashicorp/terraform#6565. It was migrated here as part of the provider split. The original body of the issue is below.


vSphere components started

Will update what is completed:

  1. Virtual machines - mostly done
    • Update functionality is underway
  2. Disks - Need to double check what is done
    • vmx attachable disks
    • cdrom iso images
  3. Misc
    • folder - update is missing
    • cdrom
  4. Finders
    • some basic finders are implemented to support object creation

Overview of vSphere components not started

  1. Base servers crud P3
    • single vsphere server (not going to be addressed)
    • cluster
      • add, update, create, no delete
  2. Datacenter P2
    • cluster and non-clustered
    • CRD are in govc ... do we have update?
  3. Datastores - P1 - some work done
    • non-ha datastores
    • ha cluster based datastores (SDRS)
    • basic file commands cp,mv,rm
    • CRD are in govc ... do we have update?
    • basic file commands
  4. Networks - P1
    • portgroups
      • CRD - no update
    • distributed virtual switch
    • firewalls
    • vnics
    • finders for "Networks" but no CUD available R only - TODO verify
  5. Resource pools - priority??
    • Full CRUD
  6. ESX Hosts - P4
    • portgroups
    • vswitch
    • storage partition
    • maintenance mode
    • host options
    • vnic
  7. Misc - P4
    • Events
    • Extensions
    • Host Info
    • Licenses
    • Logs

Error setting up client: dial tcp: no suitable address found when connecting to vsphere

This issue was originally opened by @byronrau as hashicorp/terraform#8592. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

I am having an issue with Terraform versions 0.7.1 and 0.7.2 when connecting to the vSphere server. It worked in 0.7.0. After upgrading to 0.7.1 or 0.7.2, whenever I run any of the Terraform commands (plan, refresh, apply) I get an error:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

Error refreshing state: 1 error(s) occurred:

I am using a local variables.tf in the directory I run Terraform from, and including a global.tfvars one directory up.

global.tfvars:

# vSphere Global Variables

# export TF_VAR_vsphere_password=

# export TF_VAR_vsphere_user=

vsphere_server = "server.domain.local"
vsphere_folder = "Folder"

# Networking Vlans

lab_vlans = "vlan"

# Templates variables

template_centos = "CentOS7Template"
template_winrm2k8 = "Win2008R2"
template_winrm2k12 = "Win2012Template"
template_datastore = "datastore"

local variables.tf:

# vSphere Global Variables

# export TF_VAR_vsphere_password=

# export TF_VAR_vsphere_user=

variable "vsphere_user" {}
variable "vsphere_password" {}

variable "vsphere_server" {}
variable "vsphere_folder" {}

# Networking Vlans

variable "lab_vlans" {}

# Templates variables

variable "template_centos" {}
variable "template_winrm2k8" {}
variable "template_winrm2k12" {}
variable "template_datastore" {}

# Windows domain credentials for vm

variable "domain_user" {}
variable "domain_user_password" {}

Now, when I change vsphere_server in global.tfvars to an IP address it works, but the error occurs when using the domain name.

Debug Output

https://gist.github.com/byronrau/8323ee06c8e9c0ab2ad1659e5d07afc6

Panic Output

NA

Expected Behavior

Terraform commands should work.

Actual Behavior

Error setting up client

Steps to Reproduce

Run any of the terraform commands connecting to a vsphere server.

Important Factoids

NA

References

NA

vsphere provider fails to create multiple nested folders

This issue was originally opened by @jasonk as hashicorp/terraform#8106. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.7.0

Affected Resource(s)

  • vsphere_folder

Terraform Configuration Files

resource "vsphere_folder" "ci-dev" {
  path = "DevOps/CI/Development"
  datacenter = "Datacenter"
} 

resource "vsphere_folder" "ci-prod" {
  path = "DevOps/CI/Production"
  datacenter = "Datacenter"
} 

resource "vsphere_folder" "ci-test" {
  path = "DevOps/CI/Test" 
  datacenter = "Datacenter"
} 

Expected Behavior

Prior to applying this configuration, the DevOps folder already existed, and I expected that terraform would create the CI folder and then the three subfolders.

Actual Behavior

The CI folder was created, and the Test folder was created underneath it, but errors were thrown for the other two, because apparently terraform tried to create the parent folder three times:

2 error(s) occurred:

* vsphere_folder.ci-prod: Failed to create folder at DevOps/CI; ServerFaultCode: The name 'CI' already exists.
* vsphere_folder.ci-dev: Failed to create folder at DevOps/CI; ServerFaultCode: The name 'CI' already exists.

Steps to Reproduce

  1. terraform apply

Important Factoids

Provisioning against vSphere 5.5, but otherwise a pretty standard setup.
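One possible workaround, sketched below for the Terraform 0.7-era syntax and with hypothetical resource names, is to manage the shared DevOps/CI parent explicitly and make each subfolder depend on it, so the parent is only created once:

resource "vsphere_folder" "ci" {
  path       = "DevOps/CI"
  datacenter = "Datacenter"
}

resource "vsphere_folder" "ci-dev" {
  path       = "DevOps/CI/Development"
  datacenter = "Datacenter"
  depends_on = ["vsphere_folder.ci"]
}

resource "vsphere_folder" "ci-prod" {
  path       = "DevOps/CI/Production"
  datacenter = "Datacenter"
  depends_on = ["vsphere_folder.ci"]
}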

Terraform 0.7.0-dev built in vsphere network interface out of order

This issue was originally opened by @pryorda as hashicorp/terraform#7673. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

TERRAFORM VERSION

Terraform 0.7.0-dev

Affected Resource(s)

vsphere instance -> network interface out of order.

Terraform Configuration Files

provider "vsphere" {
  user                 = "${var.vsphere_username}"
  password             = "${var.vsphere_password}"
  vsphere_server       = "${var.vsphere_server}"
  allow_unverified_ssl = "${var.vsphere_allow_unverified_ssl}"
}

resource "vsphere_folder" "simple_cluster" {
  path       = "${var.vsphere_folder_path}"
  datacenter = "${var.vsphere_datacenter}"
}

resource "vsphere_virtual_machine" "simple_cluster" {
  count        = "${var.cluster_size}"
  name         = "${lower(var.cluster_env)}-${lower(var.cluster_app)}${count.index + 1}"
  folder       = "${vsphere_folder.simple_cluster.path}"
  datacenter   = "${var.vsphere_datacenter}"
  cluster      = "${var.vsphere_cluster}"
  domain       = "${var.cluster_domain}"
  dns_servers  = ["${split(",",var.vsphere_network_domain_resolvers)}"]
  dns_suffixes = ["${split(",",var.vsphere_network_domain_search)}"]
  vcpu         = "${var.vsphere_vcpu}"
  memory       = "${var.vsphere_memory}"
  time_zone    = "${var.cluster_timezone}"

  network_interface {
    label              = "${var.vsphere_network_label}"
    ipv4_address       = "${element(split(",", var.vsphere_network_ipv4_addresses), count.index)}"
    ipv4_prefix_length = "${var.vsphere_network_ipv4_prefix_length}"
    ipv4_gateway       = "${var.vsphere_network_ipv4_gateway}"
  }

  network_interface {
    label              = "${var.vsphere_network_label_2}"
    ipv4_address       = "${element(split(",", var.vsphere_network_ipv4_addresses_2), count.index)}"
    ipv4_prefix_length = "${var.vsphere_network_ipv4_prefix_length_2}"
  }


  disk {
    datastore = "${var.vsphere_datastore}"
    template  = "${var.vsphere_template}"
    type      = "thin"
  }
}

Actual Behavior

Net interface became 0 instead of 1

Steps to Reproduce

  1. Create a vSphere instance with two NICs. The second NIC will take over as network_interface 0 instead of network_interface 1.

tfstate data

                            "network_interface.#": "2",
                            "network_interface.0.adapter_type": "",
                            "network_interface.0.ip_address": "",
                            "network_interface.0.ipv4_address": "10.0.0.1",
                            "network_interface.0.ipv4_gateway": "",
                            "network_interface.0.ipv4_prefix_length": "24",
                            "network_interface.0.ipv6_address": "fe80::250:56ff:feb4:20c5",
                            "network_interface.0.ipv6_gateway": "",
                            "network_interface.0.ipv6_prefix_length": "64",
                            "network_interface.0.label": "Storage Network",
                            "network_interface.0.mac_address": "00:50:56:b4:20:c5",
                            "network_interface.0.subnet_mask": "",
                            "network_interface.1.adapter_type": "",
                            "network_interface.1.ip_address": "",
                            "network_interface.1.ipv4_address": "172.16.7.102",
                            "network_interface.1.ipv4_gateway": "",
                            "network_interface.1.ipv4_prefix_length": "24",
                            "network_interface.1.ipv6_address": "fe80::250:56ff:feb4:97e",
                            "network_interface.1.ipv6_gateway": "",
                            "network_interface.1.ipv6_prefix_length": "64",
                            "network_interface.1.label": "dvPortGroup_qa",
                            "network_interface.1.mac_address": "00:50:56:b4:09:7e",
                            "network_interface.1.subnet_mask": "",

Terraform output

  network_interface.#:                    "" => "2"
  network_interface.0.ip_address:         "" => "<computed>"
  network_interface.0.ipv4_address:       "" => "172.16.7.102"
  network_interface.0.ipv4_gateway:       "" => "172.16.7.1"
  network_interface.0.ipv4_prefix_length: "" => "24"
  network_interface.0.ipv6_address:       "" => "<computed>"
  network_interface.0.ipv6_gateway:       "" => "<computed>"
  network_interface.0.ipv6_prefix_length: "" => "<computed>"
  network_interface.0.label:              "" => "dvPortGroup_qa"
  network_interface.0.mac_address:        "" => "<computed>"
  network_interface.0.subnet_mask:        "" => "<computed>"
  network_interface.1.ip_address:         "" => "<computed>"
  network_interface.1.ipv4_address:       "" => "10.0.0.1"
  network_interface.1.ipv4_gateway:       "" => "<computed>"
  network_interface.1.ipv4_prefix_length: "" => "24"
  network_interface.1.ipv6_address:       "" => "<computed>"
  network_interface.1.ipv6_gateway:       "" => "<computed>"
  network_interface.1.ipv6_prefix_length: "" => "<computed>"
  network_interface.1.label:              "" => "Storage Network"
  network_interface.1.mac_address:        "" => "<computed>"
  network_interface.1.subnet_mask:        "" => "<computed>"

vsphere_virtual_machine does not output a set of network_interface to tfstate.

This issue was originally opened by @pomu0326 as hashicorp/terraform#7209. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

vsphere_virtual_machine does not output a set of network_interface to tfstate.

Terraform Version

Terraform v0.6.16

Affected Resource(s)

  • vsphere_virtual_machine

Terraform Configuration Files

resource "vsphere_virtual_machine" "test" {
  ...
  network_interface {
    label = "LABEL1"
    ipv4_address = "192.168.1.100"
    ipv4_prefix_length = 24
    ipv4_gateway = "192.168.1.1"
  }
  network_interface {
    label = "LABEL2"
    ipv4_address = "192.168.2.100"
    ipv4_prefix_length = 24
  }
  ...
}

Expected Behavior

# cat ./terraform.tfstate
{
    ...
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "vsphere_virtual_machine.test": {
                    "type": "vsphere_virtual_machine",
                    "primary": {
                        "id": "test",
                        "attributes": {
                            ...
                            "network_interface.#": "2",
                            "network_interface.0.adapter_type": "",
                            "network_interface.0.ip_address": "",
                            "network_interface.0.ipv4_address": "192.168.1.100",
                            "network_interface.0.ipv4_gateway": "192.168.1.1",
                            "network_interface.0.ipv4_prefix_length": "24",
                            "network_interface.0.ipv6_address": "",
                            "network_interface.0.ipv6_gateway": "",
                            "network_interface.0.ipv6_prefix_length": "0",
                            "network_interface.0.label": "LABEL1",
                            "network_interface.0.subnet_mask": "",
                            "network_interface.1.adapter_type": "",
                            "network_interface.1.ip_address": "",
                            "network_interface.1.ipv4_address": "192.168.2.100",
                            "network_interface.1.ipv4_gateway": "",
                            "network_interface.1.ipv4_prefix_length": "24",
                            "network_interface.1.ipv6_address": "",
                            "network_interface.1.ipv6_gateway": "",
                            "network_interface.1.ipv6_prefix_length": "0",
                            "network_interface.1.label": "LABEL2",
                            "network_interface.1.subnet_mask": "",
                            ...
                        }
                    }
                }
            }
        },
        ...
}

Actual Behavior

# cat ./terraform.tfstate
{
    ...
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "vsphere_virtual_machine.test": {
                    "type": "vsphere_virtual_machine",
                    "primary": {
                        "id": "test",
                        "attributes": {
                            ...
                            "network_interface.#": "0",
                            ...
                        }
                    }
                }
            }
        },
        ...
}

Steps to Reproduce

  1. terraform apply

Important Factoids

  • Client: Windows 10 x64
  • vCenter Server: 6.0.0
  • ESXi: 5.5.0
  • Guest OS: Ubuntu 16.04

vSphere provider - check for template has vmtools installed

This issue was originally opened by @chrislovecnm as hashicorp/terraform#3835. It was migrated here as part of the provider split. The original body of the issue is below.


Are we able to use the following test to determine if vmtools is installed on Linux images?

mo.VirtualMachine.Guest.ToolsVersionStatus2 == "guestToolsNotInstalled"

I did not have vmtools configured correctly or deployPkg installed correctly in the VM template, and ran into all kinds of oddness.

Either we can check or we can update documentation.

vsphere provider - stability enhancements - improve validation of schema

This issue was originally opened by @chrislovecnm as hashicorp/terraform#6448. It was migrated here as part of the provider split. The original body of the issue is below.


Many of the Schema elements are not being validated properly. For instance, if cdrom is set to an empty string, we are not failing gracefully. Validation needs to be added throughout the code base, primarily through the use of ValidateFunc. Specifically, if a Schema element is marked Optional: true then we need to add validation, since the value could come in as an empty string.

Example https://github.com/hashicorp/terraform/blob/master/builtin/providers/vsphere/resource_vsphere_virtual_machine.go#L380

cdrom is not required, and a user can set either of the values to empty strings.

vSphere: discussion - how to make hard disks evolve

This issue was originally opened by @ChloeTigre as hashicorp/terraform#8711. It was migrated here as part of the provider split. The original body of the issue is below.


Hello,

I have been working on being able to make disks evolve in vSphere (I focus on size change at first since it covers the most obvious need, HDD growth).
When hard disks move around in vSphere (they occasionally do during maintenance), the last part of their filename changes. Hence, their logical "name" in Terraform changes (since name is constructed as the filename without the .vmdk ext).

Therefore, we cannot rely on the name to provide a key that will allow us to know that a disk constructed from the vSphere API, a disk from the tfstate and a disk from the .tf are the same disk and this poses a big problem: we simply can't identify what config should be applied to what disk.

We can match disk structures between the .tfstate and what's returned by the API by using the uuid or even the key (controller key, I added it w/ one of my upcoming patches because it is quite useful). But this is of no use for the configurations unless we add the uuid to the config and exclude it from the disk {} hash computation.

An alternative is to provide a free-form key for users in the .tf and store it in the VM notes, but IMO that is an ugly hack of a solution, and one prone to users triggering mayhem.

What do you vSphere contributors think? I have come to something interesting with the vSphere provider, but here we hit a limit of the vSphere API, namely the lack of an opaque, user-maintained field for hard disks bound to a VM.

vSphere provider - Network in 'disconnected' state after booting from a template

This issue was originally opened by @dkalleg as hashicorp/terraform#6757. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

Terraform Version

Terraform v0.7.0-dev (fae3529da245e66e8df9eadefa0513881f49308d+CHANGES).
Clean tip of master as of time of posting this.

Affected Resource(s)

  • vSphere provider, vm resource creation

Terraform Configuration Files

resource "vsphere_virtual_machine" "TerraformVm" {

    # Required vars
    name                                = "tfTestVmDisk"
    vcpu                                  = 1
    memory                             = 1024
    datacenter                          = "Datacenter"

    network_interface {
        label = "melody"
    }

    disk {
        template = "ubuntuTemplate"
    }
}

Debug Output

The provider gets stuck while waiting for an IP, and stays stuck forever since the network does not connect at boot. If I manually connect the network, the VM grabs an IP and continues on happily and successfully.

Expected Behavior

Booting from a template, which has a network associated, should connect at boot.

Actual Behavior

VM boots and is associated with the network, but the network shows in 'disconnected' state and terraform hangs until network is manually connected.

Steps to Reproduce

  1. Create template that has an associated network
  2. Create .tf script and define a vm from that template
  3. terraform apply
  4. Watch tf waitForIp forever

Important Factoids

  • Template's network option for "Connect on boot" is enabled.
  • Template's boot option for "Update vmware tools before boot" is enabled.
  • Booting a vm from the same template manually will successfully connect to the network at boot (if I select the "Power on after creation" option) through the web ui

I'm suspecting either some problem with my template, or a problem with the buildNetworkDevice() function. I've tinkered with swapping the adapterType between "vmxnet3" and "e1000" with no resolution. I don't see any additional options we can provide to types.VirtualDeviceConfigSpec / types.VirtualEthernetCard.
Hoping to hear whether anyone else even sees this issue.

Schema update existing member of list vs update new member of list

This issue was originally opened by @dkalleg as hashicorp/terraform#6444. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

Terraform Version

Terraform v0.6.16-dev (e0f1283ee4692cbdebbde5042d4c8bc66685b9aa+CHANGES)

Affected Resource(s)

  • vsphere resource_vsphere_virtual_machine.go
  • Probably every other provider

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Problem definition

I don't see a way to distinguish updating a resource's attribute if:

  1. That attribute is a list
  2. You are adding to that list, not modifying an existing member of that list

Debug Output

Important lines are the "forces new resource" lines

dan@Mac:~/Documents/tfTest$ terraform plan
2016/05/02 14:52:26 [WARN] Invalid log level: "all". Defaulting to level: TRACE. Valid levels are: [TRACE DEBUG INFO WARN ERROR]
Refreshing Terraform state prior to plan...

vsphere_folder.dansTfFolder: Refreshing state... (ID: Datacenter/DansTfTest)
vsphere_virtual_machine.dansTerraformVm: Refreshing state... (ID: DansTfTest/tfTestVmDisk)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

-/+ vsphere_virtual_machine.dansTerraformVm
    datacenter:                             "Datacenter" => "Datacenter"
    disk.#:                                 "1" => "3"
    disk.0.bootable:                        "true" => "1"
    disk.0.type:                            "eager_zeroed" => "eager_zeroed"
    disk.0.vmdk:                            "images-update/ubuntu.vmdk" => "images-update/ubuntu.vmdk"
    disk.1.bootable:                        "" => "0" (forces new resource)
    disk.1.size:                            "" => "2" (forces new resource)
    disk.1.type:                            "" => "eager_zeroed" (forces new resource)
    disk.2.bootable:                        "" => "0" (forces new resource)
    disk.2.size:                            "" => "3" (forces new resource)
    disk.2.type:                            "" => "eager_zeroed" (forces new resource)
    domain:                                 "vsphere.local" => "vsphere.local"
    folder:                                 "DansTfTest" => "DansTfTest"
    linked_clone:                           "false" => "0"
    memory:                                 "1024" => "1024"
    memory_reservation:                     "0" => "0"
    name:                                   "tfTestVmDisk" => "tfTestVmDisk"
    network_interface.#:                    "1" => "1"
    network_interface.0.ip_address:         "" => "<computed>"
    network_interface.0.ipv4_address:       "192.168.41.154" => "<computed>"
    network_interface.0.ipv4_prefix_length: "24" => "<computed>"
    network_interface.0.ipv6_address:       "fe80::250:56ff:fe99:1f08" => "<computed>"
    network_interface.0.ipv6_prefix_length: "64" => "<computed>"
    network_interface.0.subnet_mask:        "" => "<computed>"

Expected Behavior

New member of Schema list should not trigger a ForceNew check

Actual Behavior

Forced new

Steps to Reproduce

Using vSphere as an example:

  1. Create tf template for a vm with one disk & terraform apply
  2. Add a disk in the .tf template & terraform apply

Important Factoids

This specific example occurs because two of the attributes of a vSphere disk have default values. In the output above, you can see what happens when I create a VM with one disk, then add two disks to the .tf template and run terraform apply once more: "disk.1" and "disk.2" are new and have flagged bootable/size/type as forcing a new resource.

Here's the real distinction: When targeting an existing disk that terraform has already created once, I would like the ForceNew to apply. When targeting a completely new disk in the list of disks, I don't want the ForceNew to apply.
I understand this is not the same use case for everyone, but I want to find a way to work around this issue of adding a member to an attribute list without having situationally warranted ForceNews to destroy my entire vm.

References

This request may provide a solution for this issue:
hashicorp/terraform#5895
Where we could say ForceNewWhen: { logic to apply to only previously existing list members }

I see a way I can work around this by removing the default values for the various disk definition attributes. If type, for example, didn't have a default of eager_zeroed, then it wouldn't be flagged, like so:
disk.1.type: "" => "eager_zeroed" (forces new resource)
But I'd like to see a way to accomplish this without having to sacrifice my default value.

vSphere provider doesn't work correctly if secondary/alias addresses are added

This issue was originally opened by @deasmi as hashicorp/terraform#6326. It was migrated here as part of the provider split. The original body of the issue is below.


I am using aliases/secondary IP addresses for some services, which get added by Puppet rather than configured by the vSphere template.

As such, when I re-run terraform plan it detects the secondary IP address and then thinks it needs to make a change.

I think this is a bug, as these are not the primary addresses, as can be seen here.

[deasmi@desktop:~/projects/terraform-zelotus] $ terraform plan zelotus.com/
Refreshing Terraform state prior to plan...

<.. CUT...>
vsphere_virtual_machine.orch1: Refreshing state... (ID: terraform/orch1)
<.. CUT ..>

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

~ vsphere_virtual_machine.orch1
    network_interface.0.ipv4_address:       "192.168.xx.50" => "192.168.xx.27"
    network_interface.0.ipv4_prefix_length: "32" => "24"


Plan: 0 to add, 1 to change, 0 to destroy.
[deasmi@desktop:~/projects/terraform-zelotus] $

On the server itself

[root@orch1:~] # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:9e:c5:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.xx.27/24 brd 192.168.xx.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet 192.168.xx.50/24 brd 192.168.xx.255 scope global secondary ens192
       valid_lft forever preferred_lft forever
    inet6 XXXX:XXXX:XXXX::27/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9e:c585/64 scope link
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:9e:0a:60 brd ff:ff:ff:ff:ff:ff
    inet 192.168.yy.27/24 brd 192.168.yy.255 scope global ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9e:a60/64 scope link
       valid_lft forever preferred_lft forever
[root@orch1:~] #

So it's picking up the 192.168.xx.50 secondary as the primary address and wanting to change it.

vSphere Provider - VM Update Capability

This issue was originally opened by @chrislovecnm as hashicorp/terraform#6341. It was migrated here as part of the provider split. The original body of the issue is below.


Updating Virtual Machines Via vSphere provider

This is an issue for tracking update capability of the vSphere provider. At the time of writing this issue, the provider does not include the capability to update via TF.

The following items need to be validated to confirm that the govmomi API allows updating them.

General VM Settings

  • vcpu - The number of virtual CPUs to allocate to the virtual machine completed
  • memory - The amount of RAM (in MB) to allocate to the virtual machine completed
  • memory_reservation - The amount of RAM (in MB) to reserve physical memory resource
  • datacenter / cluster move via vmotion
  • domain - A FQDN for the virtual machine; defaults to "vsphere.local"
  • time_zone - update
  • dns_suffixes - List of name resolution suffixes for the virtual network adapter
  • dns_servers - List of DNS servers for the virtual network adapter
  • boot_delay - Time in seconds to wait for machine network to be ready.
  • windows_opt_config - Extra options for clones of Windows machines.
  • custom_configuration_parameters - Update map of values that is set as virtual machine custom configurations. not going to even think about addressing this ...

network_interface updates

  • add new interface
  • remove interface
  • change ipv4_address on specific nic

windows_opt_config updates

  • admin_password - The password for the administrator account. Omit for passwordless admin (using "" does not work).
  • domain - (Optional) Domain that the new machine will be placed into. If domain, domain_user, and domain_user_password are not all set, all three will be ignored.
  • domain_user - (Optional) User that is a member of the specified domain.
  • domain_user_password - (Optional) Password for domain user, in plain text.

disk updates

  • add new disk
  • remove disk
  • resize disk

cdrom updates

  • add or remove cdrom

VSphere Provider keep-alive/reconnect? (0.7.2)

This issue was originally opened by @mrmarbury as hashicorp/terraform#8627. It was migrated here as part of the provider split. The original body of the issue is below.


We are bootstrapping a cluster containing multiple Chef nodes. The only problem is that all machines depend on each other and the first machine takes up to two hours to bootstrap. After the first machine finishes, the vSphere connection is already closed and creating the second machine in vSphere fails with NotAuthenticated.

Is there a way to force reconnection/re-authentication with vSphere?

Crash Log with Vsphere + Terraform 0.6.11

This issue was originally opened by @pizzaops as hashicorp/terraform#5180. It was migrated here as part of the provider split. The original body of the issue is below.


Version Info:

lolcalhost :: ~ % which terraform
/usr/local/bin/terraform
lolcalhost :: ~ % terraform --version
Terraform v0.6.11

Crash log below:

2016/02/17 16:23:45 [INFO] Terraform version: 0.6.11  
2016/02/17 16:23:45 [DEBUG] Detected home directory from env var: /Users/zee
2016/02/17 16:23:45 [DEBUG] Discovered plugin: atlas = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-atlas
2016/02/17 16:23:45 [DEBUG] Discovered plugin: aws = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-aws
2016/02/17 16:23:45 [DEBUG] Discovered plugin: azure = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-azure
2016/02/17 16:23:45 [DEBUG] Discovered plugin: azurerm = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-azurerm
2016/02/17 16:23:45 [DEBUG] Discovered plugin: chef = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-chef
2016/02/17 16:23:45 [DEBUG] Discovered plugin: cloudflare = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-cloudflare
2016/02/17 16:23:45 [DEBUG] Discovered plugin: cloudstack = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-cloudstack
2016/02/17 16:23:45 [DEBUG] Discovered plugin: consul = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-consul
2016/02/17 16:23:45 [DEBUG] Discovered plugin: digitalocean = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-digitalocean
2016/02/17 16:23:45 [DEBUG] Discovered plugin: dme = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-dme
2016/02/17 16:23:45 [DEBUG] Discovered plugin: dnsimple = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-dnsimple
2016/02/17 16:23:45 [DEBUG] Discovered plugin: docker = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-docker
2016/02/17 16:23:45 [DEBUG] Discovered plugin: dyn = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-dyn
2016/02/17 16:23:45 [DEBUG] Discovered plugin: google = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-google
2016/02/17 16:23:45 [DEBUG] Discovered plugin: heroku = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-heroku
2016/02/17 16:23:45 [DEBUG] Discovered plugin: mailgun = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-mailgun
2016/02/17 16:23:45 [DEBUG] Discovered plugin: mysql = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-mysql
2016/02/17 16:23:45 [DEBUG] Discovered plugin: null = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-null
2016/02/17 16:23:45 [DEBUG] Discovered plugin: openstack = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-openstack
2016/02/17 16:23:45 [DEBUG] Discovered plugin: packet = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-packet
2016/02/17 16:23:45 [DEBUG] Discovered plugin: postgresql = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-postgresql
2016/02/17 16:23:45 [DEBUG] Discovered plugin: powerdns = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-powerdns
2016/02/17 16:23:45 [DEBUG] Discovered plugin: rundeck = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-rundeck
2016/02/17 16:23:45 [DEBUG] Discovered plugin: statuscake = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-statuscake
2016/02/17 16:23:45 [DEBUG] Discovered plugin: template = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-template
2016/02/17 16:23:45 [DEBUG] Discovered plugin: terraform = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-terraform
2016/02/17 16:23:45 [DEBUG] Discovered plugin: tls = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-tls
2016/02/17 16:23:45 [DEBUG] Discovered plugin: vcd = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-vcd
2016/02/17 16:23:45 [DEBUG] Discovered plugin: vsphere = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-vsphere
2016/02/17 16:23:45 [DEBUG] Discovered plugin: chef = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-chef
2016/02/17 16:23:45 [DEBUG] Discovered plugin: file = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-file
2016/02/17 16:23:45 [DEBUG] Discovered plugin: local-exec = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-local-exec
2016/02/17 16:23:45 [DEBUG] Discovered plugin: remote-exec = /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-remote-exec
2016/02/17 16:23:45 [DEBUG] Detected home directory from env var: /Users/zee
2016/02/17 16:23:45 [DEBUG] Attempting to open CLI config file: /Users/zee/.terraformrc
2016/02/17 16:23:45 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2016/02/17 16:23:45 [DEBUG] Detected home directory from env var: /Users/zee
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.ConfigTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.OrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.AddOutputOrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.MissingProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.ProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.DisableProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.MissingProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.ProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.VertexTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.FlattenTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.ProxyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.TargetsTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.PruneProviderTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.PruneProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.DestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.CreateBeforeDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.PruneDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.PruneNoopTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.CloseProviderTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.CloseProvisionerTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [TRACE] Graph after step *terraform.TransitiveReductionTransformer:

provider.vsphere
provider.vsphere (close)
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:45 [DEBUG] Starting graph walk: walkInput
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.remote-exec: walking
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.remote-exec: evaluating
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.file: walking
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.file: evaluating
2016/02/17 16:23:45 [TRACE] Entering eval tree: provisioner.file
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.local-exec: walking
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.local-exec: evaluating
2016/02/17 16:23:45 [TRACE] Entering eval tree: provisioner.local-exec
2016/02/17 16:23:45 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:45 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.chef: walking
2016/02/17 16:23:45 [TRACE] Entering eval tree: provisioner.remote-exec
2016/02/17 16:23:45 [DEBUG] vertex root.provider.vsphere: walking
2016/02/17 16:23:45 [DEBUG] vertex root.provider.vsphere: evaluating
2016/02/17 16:23:45 [TRACE] Entering eval tree: provider.vsphere
2016/02/17 16:23:45 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:45 [DEBUG] vertex root.provisioner.chef: evaluating
2016/02/17 16:23:45 [TRACE] Entering eval tree: provisioner.chef
2016/02/17 16:23:45 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:45 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:45 [DEBUG] root: eval: *terraform.EvalInitProvider
2016/02/17 16:23:45 [DEBUG] Starting plugin: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-file []string{"/usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-file"}
2016/02/17 16:23:45 [DEBUG] Starting plugin: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-vsphere []string{"/usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-vsphere"}
2016/02/17 16:23:45 [DEBUG] Waiting for RPC address for: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-file
2016/02/17 16:23:45 [DEBUG] Waiting for RPC address for: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-vsphere
2016/02/17 16:23:46 [DEBUG] terraform-provisioner-file: 2016/02/17 16:23:46 Plugin address: unix /var/folders/vh/wlcxbwqs5638x0j_tpr6sj3w0000gn/T/tf-plugin169868790
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] Starting plugin: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-local-exec []string{"/usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-local-exec"}
2016/02/17 16:23:46 [DEBUG] Waiting for RPC address for: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-local-exec
2016/02/17 16:23:46 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:46 Plugin address: unix /var/folders/vh/wlcxbwqs5638x0j_tpr6sj3w0000gn/T/tf-plugin248242153
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalBuildProviderConfig
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInputProvider
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provider.vsphere
2016/02/17 16:23:46 [DEBUG] vertex vsphere_virtual_machine.zeetest, got dep: provider.vsphere
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalCountFixZeroOneBoundary
2016/02/17 16:23:46 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: expanding/walking dynamic subgraph
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ResourceCountTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TargetsTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInstanceInfo
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere (close): walking
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere (close): evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalCloseProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] terraform-provisioner-local-exec: 2016/02/17 16:23:46 Plugin address: unix /var/folders/vh/wlcxbwqs5638x0j_tpr6sj3w0000gn/T/tf-plugin724424570
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] Starting plugin: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-remote-exec []string{"/usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-remote-exec"}
2016/02/17 16:23:46 [DEBUG] Waiting for RPC address for: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-remote-exec
2016/02/17 16:23:46 [DEBUG] terraform-provisioner-remote-exec: 2016/02/17 16:23:46 Plugin address: unix /var/folders/vh/wlcxbwqs5638x0j_tpr6sj3w0000gn/T/tf-plugin097917605
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] Starting plugin: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-chef []string{"/usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-chef"}
2016/02/17 16:23:46 [DEBUG] Waiting for RPC address for: /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-chef
2016/02/17 16:23:46 [DEBUG] terraform-provisioner-chef: 2016/02/17 16:23:46 Plugin address: unix /var/folders/vh/wlcxbwqs5638x0j_tpr6sj3w0000gn/T/tf-plugin478184676
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.chef
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.chef
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.file
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] vertex root.root: walking
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ConfigTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.OrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.AddOutputOrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.MissingProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.DisableProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.MissingProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.VertexTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.FlattenTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProxyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TargetsTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneProviderTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.DestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CreateBeforeDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneNoopTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CloseProviderTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CloseProvisionerTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TransitiveReductionTransformer:

provider.vsphere
provider.vsphere (close)
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [DEBUG] Starting graph walk: walkValidate
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.remote-exec: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.remote-exec: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.chef: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.chef: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.chef
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provider.vsphere
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.file: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.file: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvider
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.local-exec: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.local-exec: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalBuildProviderConfig
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalValidateProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.chef
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.chef
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSetProviderConfig
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provider.vsphere
2016/02/17 16:23:46 [DEBUG] vertex vsphere_virtual_machine.zeetest, got dep: provider.vsphere
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalValidateCount
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalCountFixZeroOneBoundary
2016/02/17 16:23:46 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: expanding/walking dynamic subgraph
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ResourceCountTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TargetsTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalValidateResource
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInstanceInfo
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere (close): walking
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere (close): evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalCloseProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.file
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] vertex root.root: walking
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ConfigTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.OrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.AddOutputOrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.MissingProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.DisableProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.MissingProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.VertexTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.FlattenTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProxyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TargetsTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneProviderTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.DestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CreateBeforeDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneNoopTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CloseProviderTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CloseProvisionerTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TransitiveReductionTransformer:

provider.vsphere
provider.vsphere (close)
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [DEBUG] Starting graph walk: walkRefresh
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.file: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.file: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provider.vsphere
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvider
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.local-exec: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.local-exec: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.remote-exec: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.remote-exec: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.chef: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.chef: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.chef
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalBuildProviderConfig
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSetProviderConfig
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalConfigProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.chef
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.chef
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.file
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.local-exec
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:46 [INFO] VMWare vSphere Client configured for URL: https://zee%40puppetlabs.com:[email protected]/sdk
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provider.vsphere
2016/02/17 16:23:46 [DEBUG] vertex vsphere_virtual_machine.zeetest, got dep: provider.vsphere
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalCountFixZeroOneBoundary
2016/02/17 16:23:46 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: expanding/walking dynamic subgraph
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ResourceCountTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TargetsTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:46 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInstanceInfo
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalReadState
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalRefresh
2016/02/17 16:23:46 [DEBUG] refresh: vsphere_virtual_machine.zeetest: no state, not refreshing
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalWriteState
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere (close): walking
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere (close): evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalCloseProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] vertex root, got dep: provider.vsphere (close)
2016/02/17 16:23:46 [DEBUG] vertex root.root: walking
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ConfigTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.OrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.AddOutputOrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.MissingProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.DisableProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.MissingProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.VertexTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.FlattenTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.ProxyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TargetsTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneProviderTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.DestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CreateBeforeDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.PruneNoopTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CloseProviderTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.CloseProvisionerTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [TRACE] Graph after step *terraform.TransitiveReductionTransformer:

provider.vsphere
provider.vsphere (close)
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:46 [DEBUG] Starting graph walk: walkPlan
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.file: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.file: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provider.vsphere: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provider.vsphere
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvider
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.local-exec: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.local-exec: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.chef: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.remote-exec: walking
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.remote-exec: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.remote-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [DEBUG] vertex root.provisioner.chef: evaluating
2016/02/17 16:23:46 [TRACE] Entering eval tree: provisioner.chef
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.file
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalBuildProviderConfig
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.local-exec
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSetProviderConfig
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:46 [DEBUG] root: eval: *terraform.EvalConfigProvider
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.chef
2016/02/17 16:23:46 [TRACE] Exiting eval tree: provisioner.remote-exec
2016/02/17 16:23:47 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:47 [INFO] VMWare vSphere Client configured for URL: https://zee%40puppetlabs.com:[email protected]/sdk
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provider.vsphere
2016/02/17 16:23:47 [DEBUG] vertex vsphere_virtual_machine.zeetest, got dep: provider.vsphere
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalCountFixZeroOneBoundary
2016/02/17 16:23:47 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: expanding/walking dynamic subgraph
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ResourceCountTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.TargetsTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.RootTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInstanceInfo
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalReadState
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalDiff
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalCheckPreventDestroy
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalIgnoreChanges
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalWriteState
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalDiffTainted
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalWriteDiff
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] vertex root.provider.vsphere (close): walking
2016/02/17 16:23:47 [DEBUG] vertex root.provider.vsphere (close): evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: provider.vsphere (close)
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalCloseProvider
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provider.vsphere (close)
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provider.vsphere (close)
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.local-exec
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.remote-exec
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.chef
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.file
2016/02/17 16:23:47 [DEBUG] vertex root.root: walking
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ConfigTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.OrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.AddOutputOrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.MissingProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.DisableProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.MissingProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.VertexTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.FlattenTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ProxyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.TargetsTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneProviderTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.DestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.CreateBeforeDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneNoopTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.CloseProviderTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.CloseProvisionerTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.TransitiveReductionTransformer:

provider.vsphere
provider.vsphere (close)
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ConfigTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.OrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.AddOutputOrphanTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.MissingProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.DisableProviderTransformer:

provider.vsphere
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.MissingProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.VertexTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.FlattenTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ProxyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.TargetsTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneProviderTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneProvisionerTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.DestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.CreateBeforeDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
  vsphere_virtual_machine.zeetest (destroy tainted)
  vsphere_virtual_machine.zeetest (destroy)
vsphere_virtual_machine.zeetest (destroy tainted)
  provider.vsphere
vsphere_virtual_machine.zeetest (destroy)
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneDestroyTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.PruneNoopTransformer:

provider.vsphere
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.CloseProviderTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.CloseProvisionerTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.RootTransformer:

provider.vsphere
provider.vsphere (close)
  provider.vsphere
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
  vsphere_virtual_machine.zeetest
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.TransitiveReductionTransformer:

provider.vsphere
provider.vsphere (close)
  vsphere_virtual_machine.zeetest
provisioner.chef
provisioner.file
provisioner.local-exec
provisioner.remote-exec
root
  provider.vsphere (close)
  provisioner.chef
  provisioner.file
  provisioner.local-exec
  provisioner.remote-exec
vsphere_virtual_machine.zeetest
  provider.vsphere
2016/02/17 16:23:47 [DEBUG] Starting graph walk: walkApply
2016/02/17 16:23:47 [DEBUG] vertex root.provider.vsphere: walking
2016/02/17 16:23:47 [DEBUG] vertex root.provider.vsphere: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: provider.vsphere
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInitProvider
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.file: walking
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.file: evaluating
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.chef: walking
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.chef: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: provisioner.chef
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:47 [TRACE] Entering eval tree: provisioner.file
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.local-exec: walking
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.local-exec: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: provisioner.local-exec
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.remote-exec: walking
2016/02/17 16:23:47 [DEBUG] vertex root.provisioner.remote-exec: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: provisioner.remote-exec
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInitProvisioner
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provisioner.chef
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalBuildProviderConfig
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSetProviderConfig
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalConfigProvider
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provisioner.file
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provisioner.remote-exec
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provisioner.local-exec
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.local-exec
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.remote-exec
2016/02/17 16:23:47 [DEBUG] vertex root, got dep: provisioner.chef
2016/02/17 16:23:47 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:47 [INFO] VMWare vSphere Client configured for URL: https://zee%40puppetlabs.com:[email protected]/sdk
2016/02/17 16:23:47 [TRACE] Exiting eval tree: provider.vsphere
2016/02/17 16:23:47 [DEBUG] vertex vsphere_virtual_machine.zeetest, got dep: provider.vsphere
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalCountFixZeroOneBoundary
2016/02/17 16:23:47 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: expanding/walking dynamic subgraph
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.ResourceCountTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.TargetsTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [TRACE] Graph after step *terraform.RootTransformer:

vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: walking
2016/02/17 16:23:47 [DEBUG] vertex root.vsphere_virtual_machine.zeetest: evaluating
2016/02/17 16:23:47 [TRACE] Entering eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInstanceInfo
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalOpFilter
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalSequence
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalReadDiff
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalIf
2016/02/17 16:23:47 [DEBUG] root: eval: terraform.EvalNoop
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalIf
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalInterpolate
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalReadState
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalDiff
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalReadDiff
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalCompareDiff
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalGetProvider
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalReadState
2016/02/17 16:23:47 [DEBUG] root: eval: *terraform.EvalApply
2016/02/17 16:23:47 [DEBUG] apply: vsphere_virtual_machine.zeetest: executing Apply
2016/02/17 16:23:47 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:47 [DEBUG] network_interface init: [{ eng  0  0 }]
2016/02/17 16:23:47 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:47 [DEBUG] disk init: [{50 0}]
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [DEBUG] resource pool: &object.ResourcePool{Common:object.Common{c:(*vim25.Client)(0xc82044ef00), r:types.ManagedObjectReference{Type:"ResourcePool", Value:"resgroup-295"}}, InventoryPath:"/opdx1/host/general1/Resources"}
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [DEBUG] folder: ""
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [DEBUG] virtual machine config spec: {{}  zeetest    [] []  0 0 <nil> <nil>   otherLinux64Guest   <nil> <nil> <nil> <nil> <nil> 2 1 1024 <nil> <nil> <nil> <nil> <nil> [0xc820513770] <nil> <nil> <nil> <nil> <nil> <nil> [] []  <nil> <nil> <nil> <nil> <nil> <nil> <nil>  0 <nil> <nil> <nil> <nil> <nil> <nil> [] <nil>}
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [DEBUG] virtual machine Extra Config spec start
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [DEBUG] datastore: &object.Datastore{Common:object.Common{c:(*vim25.Client)(0xc82044ef00), r:types.ManagedObjectReference{Type:"Datastore", Value:"datastore-944"}}, InventoryPath:"/opdx1/datastore/general2"}
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [DEBUG] datastore: "general2"
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 2016/02/17 16:23:50 [ERROR] ServerFaultCode: Permission to perform this operation was denied.
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: panic: runtime error: invalid memory address or nil pointer dereference
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: [signal 0xb code=0x1 addr=0x0 pc=0xfd1b9]
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 84 [running]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/vmware/govmomi/object.(*Task).WaitForResult(0x0, 0x1eba550, 0xc82005ec90, 0x0, 0x0, 0x49, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/vmware/govmomi/object/task.go:50 +0x39
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/vmware/govmomi/object.(*Task).Wait(0x0, 0x1eba550, 0xc82005ec90, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/vmware/govmomi/object/task.go:45 +0x4d
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/builtin/providers/vsphere.(*virtualMachine).createVirtualMachine(0xc820116120, 0xc8205a12b0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/builtin/providers/vsphere/resource_vsphere_virtual_machine.go:961 +0x212a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/builtin/providers/vsphere.resourceVSphereVirtualMachineCreate(0xc820514540, 0xf4c000, 0xc8205a12b0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/builtin/providers/vsphere/resource_vsphere_virtual_machine.go:399 +0x2865
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/helper/schema.(*Resource).Apply(0xc8205b37c0, 0xc820554a80, 0xc8205a1a00, 0xf4c000, 0xc8205a12b0, 0x10101, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/helper/schema/resource.go:145 +0x28e
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/helper/schema.(*Provider).Apply(0xc820014ab0, 0xc8200f7340, 0xc820554a80, 0xc8205a1a00, 0x1, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/helper/schema/provider.go:162 +0x1ed
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/rpc.(*ResourceProviderServer).Apply(0xc8202fed80, 0xc82056d740, 0xc8205a1de0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/resource_provider.go:323 +0x76
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: reflect.Value.call(0xbf7300, 0xefeac0, 0x13, 0xf68248, 0x4, 0xc820387ea8, 0x3, 0x3, 0x0, 0x0, ...)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/reflect/value.go:432 +0x120a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: reflect.Value.Call(0xbf7300, 0xefeac0, 0x13, 0xc820387ea8, 0x3, 0x3, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/reflect/value.go:300 +0xb1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*service).call(0xc8205b3840, 0xc8205b3800, 0xc8202dcd10, 0xc82027e700, 0xc8202feda0, 0x965740, 0xc82056d740, 0x16, 0x9657a0, 0xc8205a1de0, ...)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:383 +0x1c1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/rpc.(*Server).ServeCodec
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:477 +0x4ac
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 1 [IO wait]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.runtime_pollWait(0x1eba068, 0x72, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/netpoll.go:157 +0x60
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).Wait(0xc82021e060, 0x72, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:73 +0x3a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).WaitRead(0xc82021e060, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:78 +0x36
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*netFD).accept(0xc82021e000, 0x0, 0x1eba128, 0xc8201f8120)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_unix.go:408 +0x27c
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*UnixListener).AcceptUnix(0xc82020e1e0, 0xc8201bbcc0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/unixsock_posix.go:304 +0x53
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*UnixListener).Accept(0xc82020e1e0, 0x0, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/unixsock_posix.go:314 +0x41
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/rpc.(*Server).Accept(0xc820212150, 0x1eb90c0, 0xc82020e1e0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:33 +0x34
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/plugin.Serve(0xc820249f48)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/plugin/server.go:88 +0x7cc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: main.main()
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/builtin/bins/provider-vsphere/main.go:11 +0x40
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 33 [syscall]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: os/signal.loop()
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/os/signal/signal_unix.go:22 +0x18
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by os/signal.init.1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/os/signal/signal_unix.go:28 +0x37
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 34 [select, locked to thread]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: runtime.gopark(0x12ca5a0, 0xc82022e728, 0xf704c0, 0x6, 0x18, 0x2)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/proc.go:185 +0x163
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: runtime.selectgoImpl(0xc82022e728, 0x0, 0x18)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/select.go:392 +0xa64
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: runtime.selectgo(0xc82022e728)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/select.go:212 +0x12
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: runtime.ensureSigM.func1()
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/signal1_unix.go:227 +0x323
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: runtime.goexit()
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/asm_amd64.s:1721 +0x1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 35 [chan receive]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/plugin.Serve.func1(0xc8202200c0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/plugin/server.go:79 +0x66
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/plugin.Serve
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/plugin/server.go:85 +0x7aa
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 19 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Stream).Read(0xc820218270, 0xc82025f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/stream.go:125 +0x3f0
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).fill(0xc820220480)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:97 +0x1e9
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).Read(0xc820220480, 0xc8202122e0, 0x1, 0x9, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:207 +0x260
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: io.ReadAtLeast(0x1eba150, 0xc820220480, 0xc8202122e0, 0x1, 0x9, 0x1, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/io/io.go:298 +0xe6
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: io.ReadFull(0x1eba150, 0xc820220480, 0xc8202122e0, 0x1, 0x9, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/io/io.go:316 +0x62
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.decodeUintReader(0x1eba150, 0xc820220480, 0xc8202122e0, 0x9, 0x9, 0x0, 0x1, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decode.go:121 +0x92
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).recvMessage(0xc820216380, 0xc8202e1870)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:76 +0x5e
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).decodeTypeSequence(0xc820216380, 0x12ca600, 0xc820216380)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:140 +0x47
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).DecodeValue(0xc820216380, 0x9a9060, 0xc82020e840, 0x16, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:208 +0x15d
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).Decode(0xc820216380, 0x9a9060, 0xc82020e840, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:185 +0x289
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*gobServerCodec).ReadRequestHeader(0xc820214300, 0xc82020e840, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:403 +0x51
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).readRequestHeader(0xc8202102c0, 0x1eba350, 0xc820214300, 0x0, 0x0, 0xc82020e840, 0xc8202e1b00, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:576 +0x90
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).readRequest(0xc8202102c0, 0x1eba350, 0xc820214300, 0xc8202102c0, 0xc820212300, 0xc820216280, 0x0, 0x0, 0x0, 0x0, ...)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:543 +0x8b
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).ServeCodec(0xc8202102c0, 0x1eba350, 0xc820214300)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:462 +0x8c
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).ServeConn(0xc8202102c0, 0x1eba278, 0xc820218270)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:454 +0x4ee
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/rpc.(*Server).ServeConn(0xc820212150, 0x1e7df20, 0xc820070100)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:76 +0x528
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/rpc.(*Server).Accept
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:39 +0x180
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 20 [IO wait]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.runtime_pollWait(0x1eb9fa8, 0x72, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/netpoll.go:157 +0x60
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).Wait(0xc820196680, 0x72, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:73 +0x3a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).WaitRead(0xc820196680, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:78 +0x36
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*netFD).Read(0xc820196620, 0xc820189000, 0x1000, 0x1000, 0x0, 0x1e79028, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_unix.go:232 +0x23a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*conn).Read(0xc820070100, 0xc820189000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/net.go:172 +0xe4
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).fill(0xc82005c8a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:97 +0x1e9
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).Read(0xc82005c8a0, 0xc820212260, 0xc, 0xc, 0xc820212262, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:207 +0x260
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: io.ReadAtLeast(0x1eba150, 0xc82005c8a0, 0xc820212260, 0xc, 0xc, 0xc, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/io/io.go:298 +0xe6
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: io.ReadFull(0x1eba150, 0xc82005c8a0, 0xc820212260, 0xc, 0xc, 0xc, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/io/io.go:316 +0x62
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Session).recvLoop(0xc8201dd760, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:408 +0x11e
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Session).recv(0xc8201dd760)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:396 +0x21
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.newSession
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:104 +0x4b1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 21 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Session).send(0xc8201dd760)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:358 +0x5e1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.newSession
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:105 +0x4d3
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 22 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Session).keepalive(0xc8201dd760)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:292 +0x240
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.newSession
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:107 +0x506
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 36 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Session).AcceptStream(0xc8201dd760, 0x12c9b70, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/session.go:191 +0x19d
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/rpc.(*muxBroker).Run(0xc82020e2a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/mux_broker.go:107 +0x34
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/rpc.(*Server).ServeConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:65 +0x38f
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 69 [IO wait]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.runtime_pollWait(0x1eb9d68, 0x72, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/netpoll.go:157 +0x60
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).Wait(0xc820196a00, 0x72, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:73 +0x3a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).WaitRead(0xc820196a00, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:78 +0x36
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*netFD).Read(0xc8201969a0, 0xc8200fa000, 0x4000, 0x4000, 0x0, 0x1e79028, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_unix.go:232 +0x23a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*conn).Read(0xc82021a000, 0xc8200fa000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/net.go:172 +0xe4
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*block).readFromUntil(0xc8205540f0, 0x1e7e0d0, 0xc82021a000, 0x5, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:455 +0xcc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*Conn).readRecord(0xc8200602c0, 0x12ca617, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:540 +0x2d1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*Conn).Read(0xc8200602c0, 0xc8205e8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:901 +0x167
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.noteEOFReader.Read(0x1e7eb00, 0xc8200602c0, 0xc820502268, 0xc8205e8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:1370 +0x67
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*noteEOFReader).Read(0xc82058ff00, 0xc8205e8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     <autogenerated>:126 +0xd0
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).fill(0xc82041a2a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:97 +0x1e9
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).Peek(0xc82041a2a0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:132 +0xcc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*persistConn).readLoop(0xc820502210)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:876 +0xf7
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/http.(*Transport).dialConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:685 +0xc78
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 82 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux.(*Stream).Read(0xc820064ea0, 0xc820110000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/vendor/github.com/hashicorp/yamux/stream.go:125 +0x3f0
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).fill(0xc82001a4e0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:97 +0x1e9
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).Read(0xc82001a4e0, 0xc8202dccf0, 0x1, 0x9, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:207 +0x260
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: io.ReadAtLeast(0x1eba150, 0xc82001a4e0, 0xc8202dccf0, 0x1, 0x9, 0x1, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/io/io.go:298 +0xe6
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: io.ReadFull(0x1eba150, 0xc82001a4e0, 0xc8202dccf0, 0x1, 0x9, 0xc82000e300, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/io/io.go:316 +0x62
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.decodeUintReader(0x1eba150, 0xc82001a4e0, 0xc8202dccf0, 0x9, 0x9, 0x0, 0x1, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decode.go:121 +0x92
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).recvMessage(0xc82027eb00, 0xc8205f1800)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:76 +0x5e
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).decodeTypeSequence(0xc82027eb00, 0x12ca600, 0xc82027eb00)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:140 +0x47
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).DecodeValue(0xc82027eb00, 0x9a9060, 0xc8200e8a00, 0x16, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:208 +0x15d
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: encoding/gob.(*Decoder).Decode(0xc82027eb00, 0x9a9060, 0xc8200e8a00, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/encoding/gob/decoder.go:185 +0x289
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*gobServerCodec).ReadRequestHeader(0xc820014c60, 0xc8200e8a00, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:403 +0x51
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).readRequestHeader(0xc8205b3800, 0x1eba350, 0xc820014c60, 0x0, 0x0, 0xc8200e8a00, 0xc8205f1b00, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:576 +0x90
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).readRequest(0xc8205b3800, 0x1eba350, 0xc820014c60, 0xc8205b3800, 0xc8202dcd10, 0xc82027e700, 0x0, 0x0, 0x0, 0x0, ...)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:543 +0x8b
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).ServeCodec(0xc8205b3800, 0x1eba350, 0xc820014c60)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:462 +0x8c
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/rpc.(*Server).ServeConn(0xc8205b3800, 0x1eba278, 0xc820064ea0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/rpc/server.go:454 +0x4ee
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/rpc.serve(0x1eba278, 0xc820064ea0, 0x10843b0, 0x10, 0xefea20, 0xc8202fed80)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:146 +0x1f8
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: github.com/hashicorp/terraform/rpc.(*dispenseServer).ResourceProvider.func1(0xc82020e2c0, 0x5)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:102 +0x24a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by github.com/hashicorp/terraform/rpc.(*dispenseServer).ResourceProvider
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /private/tmp/terraform20160217-73210-145j4ow/terraform-0.6.11/src/github.com/hashicorp/terraform/rpc/server.go:103 +0x62
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 28 [IO wait]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.runtime_pollWait(0x1eb9ee8, 0x72, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/netpoll.go:157 +0x60
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).Wait(0xc820196920, 0x72, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:73 +0x3a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).WaitRead(0xc820196920, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:78 +0x36
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*netFD).Read(0xc8201968c0, 0xc8202e6000, 0x2000, 0x2000, 0x0, 0x1e79028, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_unix.go:232 +0x23a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*conn).Read(0xc8200701d8, 0xc8202e6000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/net.go:172 +0xe4
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*block).readFromUntil(0xc8201f7b30, 0x1e7e0d0, 0xc8200701d8, 0x5, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:455 +0xcc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*Conn).readRecord(0xc820061080, 0x12ca617, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:540 +0x2d1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*Conn).Read(0xc820061080, 0xc8202d8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:901 +0x167
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.noteEOFReader.Read(0x1e7eb00, 0xc820061080, 0xc8201dd868, 0xc8202d8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:1370 +0x67
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*noteEOFReader).Read(0xc8204155a0, 0xc8202d8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     <autogenerated>:126 +0xd0
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).fill(0xc82041b980)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:97 +0x1e9
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).Peek(0xc82041b980, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:132 +0xcc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*persistConn).readLoop(0xc8201dd810)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:876 +0xf7
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/http.(*Transport).dialConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:685 +0xc78
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 29 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*persistConn).writeLoop(0xc8201dd810)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:1009 +0x40c
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/http.(*Transport).dialConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:686 +0xc9d
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 13 [IO wait]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.runtime_pollWait(0x1eb9e28, 0x72, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/runtime/netpoll.go:157 +0x60
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).Wait(0xc82005aa00, 0x72, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:73 +0x3a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*pollDesc).WaitRead(0xc82005aa00, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_poll_runtime.go:78 +0x36
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*netFD).Read(0xc82005a9a0, 0xc8201a8000, 0x2000, 0x2000, 0x0, 0x1e79028, 0xc82005e0a0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/fd_unix.go:232 +0x23a
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net.(*conn).Read(0xc82002e2c0, 0xc8201a8000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/net.go:172 +0xe4
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*block).readFromUntil(0xc820214fc0, 0x1e7e0d0, 0xc82002e2c0, 0x5, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:455 +0xcc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*Conn).readRecord(0xc82000c2c0, 0x12ca617, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:540 +0x2d1
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: crypto/tls.(*Conn).Read(0xc82000c2c0, 0xc8205e9000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/crypto/tls/conn.go:901 +0x167
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.noteEOFReader.Read(0x1e7eb00, 0xc82000c2c0, 0xc8201dc268, 0xc8205e9000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:1370 +0x67
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*noteEOFReader).Read(0xc8202fed00, 0xc8205e9000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     <autogenerated>:126 +0xd0
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).fill(0xc82001a300)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:97 +0x1e9
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: bufio.(*Reader).Peek(0xc82001a300, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/bufio/bufio.go:132 +0xcc
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*persistConn).readLoop(0xc8201dc210)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:876 +0xf7
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/http.(*Transport).dialConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:685 +0xc78
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 14 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*persistConn).writeLoop(0xc8201dc210)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:1009 +0x40c
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/http.(*Transport).dialConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:686 +0xc9d
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: 
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: goroutine 70 [select]:
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: net/http.(*persistConn).writeLoop(0xc820502210)
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:1009 +0x40c
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere: created by net/http.(*Transport).dialConn
2016/02/17 16:23:50 [DEBUG] terraform-provider-vsphere:     /usr/local/Cellar/go/1.5.3/libexec/src/net/http/transport.go:686 +0xc9d
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalWriteState
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalApplyProvisioners
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalIf
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalWriteDiff
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalIf
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalWriteState
2016/02/17 16:23:50 [DEBUG] root: eval: *terraform.EvalApplyPost
2016/02/17 16:23:50 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:

* vsphere_virtual_machine.zeetest: unexpected EOF
2016/02/17 16:23:50 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* vsphere_virtual_machine.zeetest: unexpected EOF
2016/02/17 16:23:50 [ERROR] root: eval: *terraform.EvalOpFilter, err: 1 error(s) occurred:

* vsphere_virtual_machine.zeetest: unexpected EOF
2016/02/17 16:23:50 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* vsphere_virtual_machine.zeetest: unexpected EOF
2016/02/17 16:23:50 [TRACE] Exiting eval tree: vsphere_virtual_machine.zeetest
2016/02/17 16:23:50 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.zeetest
2016/02/17 16:23:50 [DEBUG] vertex root, got dep: provider.vsphere (close)
2016/02/17 16:23:50 [DEBUG] vertex root, got dep: provisioner.file
2016/02/17 16:23:50 [DEBUG] /usr/local/Cellar/terraform/0.6.11/bin/terraform-provider-vsphere: plugin process exited
2016/02/17 16:23:50 [DEBUG] waiting for all plugin processes to complete...
2016/02/17 16:23:50 [DEBUG] /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-file: plugin process exited
2016/02/17 16:23:50 [DEBUG] /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-local-exec: plugin process exited
2016/02/17 16:23:50 [DEBUG] /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-remote-exec: plugin process exited
2016/02/17 16:23:50 [DEBUG] /usr/local/Cellar/terraform/0.6.11/bin/terraform-provisioner-chef: plugin process exited

Error with vsphere provider - network not found

This issue was originally opened by @rbriancollins as hashicorp/terraform#6266. It was migrated here as part of the provider split. The original body of the issue is below.


Very new to terraform. Using version 0.6.14.

Affected Resource(s)

  • vsphere_virtual_machine

Terraform Configuration Files

provider "vsphere" {
  user           = "${var.vsphere_user}"
  password       = "${var.vsphere_password}"
  vsphere_server = "${var.vsphere_server}"
  allow_unverified_ssl = "true"
}


resource "vsphere_virtual_machine" "test" {
  name   = "terraform-test2"
  resource_pool = "VBLOCK1"
  vcpu   = 2
  memory = 4096

  network_interface {
    label = "vm63"
    ipv4_address = "10.141.63.239"
    ipv4_prefix_length = "24"
  }

  disk {
    template = "Templates/rhel6_base_tf"
    type = "thin"
    datastore = "VBLOCK_DATA"
  }
}

Debug Output

https://gist.github.com/rbriancollins/8e0308c372f16a2051a98f735ea9d998

Expected Behavior

terraform creates a new VM from a template

Actual Behavior

terraform exited with "vsphere_virtual_machine.test: network '*vm63' not found"

Steps to Reproduce

  1. terraform apply

Important Factoids

The VM networks we want to use are on a DV switch under a sub-folder, so:
MD (the datacenter)/VBLOCKVMS01/VBLOCKVSM01/vm63. I am able to see it with govc:

$ govc ls /MD/network/VBLOCKVSM01/ | egrep 63
/MD/network/VBLOCKVSM01/vm63

The first VBLOCKVSM01 is a folder, the second is a DV switch.
If I reference a network directly under the folder, it works. For example, there is a network called vblock_esx_build, and it is located like so:
MD/VBLOCKVSM01/vblock_esx_build
It is not on a DV switch.

To test further, I put the relative path to vm63 in the config file and managed to crash terraform with:
panic: reflect.Set: value of type mo.DistributedVirtualSwitch is not assignable to type mo.VmwareDistributedVirtualSwitch

vsphere_virtual_machine starts Chef provisioner before Windows VM is configured

This issue was originally opened by @sprokopiak as hashicorp/terraform#6727. It was migrated here as part of the provider split. The original body of the issue is below.


I'm running into issues trying to run the Chef provisioner after deploying a VM in vSphere. Essentially, it seems like vsphere_virtual_machine is determining that the VM has finished being (re)configured and starts Chef before the VM has actually finished (re)configuration.

In this case, I have a Windows 2012 R2 template in vSphere that was setup on a domain with a static IP of 10.208.140.29. I am trying to deploy a VM from the template that resides on the same domain, but is given a static IP address of 10.208.140.20.

When I watch the deployed VM get booted by Terraform in vSphere Client, I see the IP address of the VM is first 10.208.140.29 (the template's IP), which gets passed to Chef and Chef tries to connect. Less than a minute later, the VM IP address becomes blank for a short time, before changing over to something like 10.208.153.123 (an IP address in our DHCP bank). Again, about a minute later, the VM IP address becomes blank for a short time, and then comes back as 10.208.140.20, the IP I configured in my Terraform file. So the VM eventually gets to the desired state, but Chef has already been started and trying to connect to the wrong IP address.

Sometimes Chef is quick enough to connect to the VM before it starts being reloaded due to configuration changes, but then seems to hang as its connection is lost upon the VM reloading/configuring. Other times, Chef will fail to ever connect and eventually times out.

Looking in vSphere at the deployed VM, I can see two reconfiguration tasks that complete quickly, but in the events, the reconfiguration events seem to take place much later. I've attached the reconfiguration event vSphere log file.

Terraform Version

0.7.0 dev 52fb286766df7d25cfe8c18bf4b82e030b7faf06+CHANGES

Affected Resource(s)

  • vsphere_virtual_machine
  • chef

Terraform Configuration Files

# Configure the VMWare vSphere provider
provider "vsphere" {
    user            = "${var.vsphere_user}"
    password        = "${var.vsphere_password}"
    vsphere_server  = "${var.vsphere_server}"
}

# Create a folder
resource "vsphere_folder" "web" {
    path = "SCP Testing/terraform"
    datacenter = "DEVESX"
}

# Create a virtual machine within the folder
resource "vsphere_virtual_machine" "web" {
    name = "terraform-web"
    folder = "${vsphere_folder.web.path}"
    datacenter = "DEVESX"
    vcpu = 4
    memory = 8192
    cluster = "General"
    dns_servers = ["10.208.140.30", "10.208.130.20"]

    network_interface {
        label = "dvPortGroup_PrivateVlan"
        ipv4_gateway = "10.208.140.1"
        ipv4_address = "10.208.140.20"
        ipv4_prefix_length = "23"
    }

    windows_opt_config {
        product_key = ""
        domain = "perqalab.local"
        domain_user = "Administrator"
        domain_user_password = "${var.admin_password}"
    }

    disk {
        template = "SCP Testing/templates/CP - packer-virtualbox-iso-1462307446"
        datastore = "/DEVESX/datastore/VM Datastores/Nobackup"
    }

    provisioner "chef" {
        attributes_json = <<EOF
        {
            "tomcat-all": {
                "version": "8.0.23"
            }
        }
        EOF
        run_list = ["tomcat-all"]
        node_name = "terraform-vsphere"
        os_type = "windows"
        secret_key = "${file("H:/Chef/sprokopiak.pem")}"
        server_url = "https://chef01.com/"
        validation_client_name = "chef-validator"
        validation_key = "${file("H:/Chef/chef-validator.pem")}"
        version = "11.18.12"

        connection {
            type = "winrm"
            user = "Administrator"
            password = "${var.admin_password}"
        }
    }
}

Debug Output

Debug Output

Panic Output

N/A

Expected Behavior

The Chef provisioner should run after the VM has been fully reconfigured by vsphere_virtual_machine

Actual Behavior

The Chef provisioner is running the first time the VM comes online, but before the (re)configurations are able to take place and finish.

Steps to Reproduce

  1. terraform apply

Important Factoids

  • vSphere 5.5
  • Running Terraform on Windows 7 Pro
  • Deploying a VM from a Windows 2012 R2 template
  • Template had a static IP of 10.208.140.29

References

vSphere: Terraform apply command hangs for 5 minutes

This issue was originally opened by @Shruti29 as hashicorp/terraform#6300. It was migrated here as part of the provider split. The original body of the issue is below.


When I run terraform apply on my vSphere setup, it hangs for 5 minutes before actually starting the resource creation.
However terraform plan returns output immediately.

Unfortunately, I don't see any debug logs for this either.
It hangs for 5 minutes, and the resource creation logs only appear once it continues with the resource creation.

Need some help on this. I found the issue hashicorp/terraform#2059 quite similar to what I am facing.

Docs for VSphere provider don't specify which attributes force new resources

This issue was originally opened by @partamonov as hashicorp/terraform#8855. It was migrated here as part of the provider split. The original body of the issue is below.


I noticed that a lot of values force VM re-creation.

Some examples are the DNS servers list or the Windows domain user:
windows_opt_config.0.domain_user: "" => "Domain\\partamonov" (forces new resource)

Do we have any vision on what should force new VM and what should not?

vsphere_virtual_machine resource provider getting confused with multiple disks

This issue was originally opened by @ricardclau as hashicorp/terraform#7372. It was migrated here as part of the provider split. The original body of the issue is below.


Hi

I am having issues when adding disks to vSphere VMs. I'm not sure if this is expected, but I think I may have hit a bug here; happy to provide more info, etc.

Terraform Version

0.7.0-rc2 (also tested last master Terraform v0.7.0-dev (85ccf3b6b45478677aa3f10bb83edfaa246ee238))

Affected Resource(s)

  • vsphere_virtual_machine

Terraform Configuration Files

Adding an extra disk with something like

  disk {
    datastore = "SOME_DATASTORE"
    size = 50
    name = "d_drive"
  }

Debug Output

terraform plan will try to recreate the VM, but more importantly the state gets confused about existing disks, showing this:

disk.2348006000.controller_type:        "scsi" => "scsi" (forces new resource)

There are no other changes to, say, the OS drive; I am just adding an extra disk.

Expected Behavior

Terraform should not force a new resource, or at least should not show a change on a disk that has not been modified.

Steps to Reproduce

  1. terraform plan after adding the extra disk to the tf file

VM not found on creating a VM with the vSphere provisioner

This issue was originally opened by @iarenzana as hashicorp/terraform#8495. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.7.1

Affected Resource(s)

  • vsphere_file
  • vsphere_virtual_machine

Terraform Configuration Files

# Configure the VMware vSphere Provider
provider "vsphere" {
  user           = "root"
  password           = "pass"
  vsphere_server         = "192.168.200.26"
  allow_unverified_ssl   = "true"

}

resource "vsphere_file" "matt-centos6_upload" {
         datastore = "slowHDD3TB"
         source_file = "~/templates/centos6/Matt-Cent6-disk1.vmdk"
         destination_file = "/vmfs/volumes/57bc3050-5394b941-b092-0cc47ac2a28c/iar-ngali6.vmdk"
         }

resource "vsphere_virtual_machine" "ngali" {
    name        = "iar-ngali6"
    vcpu        = 1
    memory      = 2048
    time_zone   = "America/New_York"

    network_interface {
        ipv4_gateway       = "192.168.175.1"
        label          = "VM Network"
        ipv4_address       = "192.168.175.9"
        ipv4_prefix_length = "26"
    }

    disk {
        datastore = "slowHDD3TB"
#        template  = "IAR-ngali5-192.168.175.8"
        vmdk  = "iar-ngali6.vmdk"



    }
}

Debug Output

https://gist.github.com/iarenzana/7ccb9fdeeba62707040ee06033a346e1

Panic Output

https://gist.github.com/iarenzana/aa633fd46cf8e4065551118519d9c8ed

Expected Behavior

Create a VM called iar-ngali6

Actual Behavior

VM not found error and crash.

Steps to Reproduce

  1. terraform apply

Important Factoids

References

vSphere provider - added disk to tf and vm is deleted and created - vm should not have been deleted

This issue was originally opened by @chrislovecnm as hashicorp/terraform#7217. It was migrated here as part of the provider split. The original body of the issue is below.


Expected behavior is that a disk is added to the VM. Actual behavior is that the VM is deleted and re-created with two disks.

Testing this against: hashicorp/terraform#7031

But I am thinking this may be happening with other versions. Not certain.

Base TF

provider "vsphere" {
  vsphere_server = "*****"
  user = "*****"
  password = "*****"
  allow_unverified_ssl = "true"
}

module "vsphere-dc" {
  source = "./terraform/vsphere"
  long_name = "es-cl-mantl-engpipeline"
  short_name = "es-cl-mantl"
  datacenter = "Datacenter"
  cluster = "cluster"
  pool = "" # format is cluster_name/Resources/pool_name
  template = "mantl-tpl-cent7"
  network_label = "vlan409"
  domain = "mydomain.net"
  dns_server1 = "8.4.4.4"
  dns_server2 = "8.8.8.8"
  datastore = "POOL01"
  control_count = 1
  worker_count = 1
  edge_count = 1
  kubeworker_count = 0
  control_volume_size = 20 # size in gigabytes
  worker_volume_size = 20
  edge_volume_size = 20
  ssh_user = "root"
  # FIXME
  ssh_key = "foo.rsa"
  consul_dc = "dc2"

  #Optional Parameters
  #folder = ""
  #control_cpu = ""
  #worker_cpu = ""
  #edge_cpu = ""
  #control_ram = ""
  #worker_ram = ""
  #edge_ram = ""
  #disk_type = "" # thin or eager_zeored, default is thin
  #linked_clone = "" # true or false, default is false.  If using linked_clones and have problems installing Mantl, revert to full clones
}

Main TF that was applied and worked

variable "datacenter" {}
variable "cluster" {}
variable "pool" {}
variable "template" {}
variable "linked_clone" { default = false }
variable "ssh_user" {}
variable "ssh_key" {}
variable "consul_dc" {}
variable "datastore" {}
variable "disk_type" { default = "thin" }
variable "network_label" {}

variable "short_name" {default = "mantl"}
variable "long_name" {default = "mantl"}

variable "folder" {default = ""}
variable "control_count" {default = 3}
variable "worker_count" {default = 2}
variable "kubeworker_count" {default = 0}
variable "edge_count" {default = 2}
variable "control_volume_size" {default = 20}
variable "worker_volume_size" {default = 20}
variable "edge_volume_size" {default = 20}
variable "control_cpu" { default = 1 }
variable "worker_cpu" { default = 1 }
variable "edge_cpu" { default = 1 }
variable "control_ram" { default = 4096 }
variable "worker_ram" { default = 4096 }
variable "edge_ram" { default = 4096 }

variable "domain" { default = "" }
variable "dns_server1" { default = "" }
variable "dns_server2" { default = "" }

resource "vsphere_virtual_machine" "mi-control-nodes" {
  name = "${var.short_name}-control-${format("%02d", count.index+1)}"
  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.control_cpu}"
  memory = "${var.control_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.control_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-control-${format("%02d", count.index+1)}-disk1"
  }

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "control"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
      user = "${var.ssh_user}"
      key_file = "${var.ssh_key}"
      host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.control_count}"
}

resource "vsphere_virtual_machine" "mi-worker-nodes" {
  name = "${var.short_name}-worker-${format("%03d", count.index+1)}"
  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.worker_cpu}"
  memory = "${var.worker_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.worker_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-worker-${format("%02d", count.index+1)}-disk1"
}

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "worker"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
      user = "${var.ssh_user}"
      key_file = "${var.ssh_key}"
      host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.worker_count}"
}

resource "vsphere_virtual_machine" "mi-kubeworker-nodes" {
  name = "${var.short_name}-kubeworker-${format("%03d", count.index+1)}"

  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.worker_cpu}"
  memory = "${var.worker_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.worker_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-kubeworker-${format("%02d", count.index+1)}-disk1"
  }

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "kubeworker"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
      user = "${var.ssh_user}"
      key_file = "${var.ssh_key}"
      host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.kubeworker_count}"
}

resource "vsphere_virtual_machine" "mi-edge-nodes" {
  name = "${var.short_name}-edge-${format("%02d", count.index+1)}"
  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.edge_cpu}"
  memory = "${var.edge_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.edge_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-edge-${format("%02d", count.index+1)}-disk1"
  }

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "edge"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
    user = "${var.ssh_user}"
    key_file = "${var.ssh_key}"
    host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.edge_count}"
}

output "control_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-control-nodes.*.network_interface.0.ipv4_address)}"
}

output "worker_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-worker-nodes.*.network_interface.0.ipv4_address)}"
}

output "kubeworker_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-kubeworker-nodes.*.network_interface.ipv4_address)}"
}

output "edge_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-edge-nodes.*.network_interface.0.ipv4_address)}"
}

The updated main tf - "vsphere_virtual_machine" "mi-worker-nodes" has a new disk.

variable "datacenter" {}
variable "cluster" {}
variable "pool" {}
variable "template" {}
variable "linked_clone" { default = false }
variable "ssh_user" {}
variable "ssh_key" {}
variable "consul_dc" {}
variable "datastore" {}
variable "disk_type" { default = "thin" }
variable "network_label" {}

variable "short_name" {default = "mantl"}
variable "long_name" {default = "mantl"}

variable "folder" {default = ""}
variable "control_count" {default = 3}
variable "worker_count" {default = 2}
variable "kubeworker_count" {default = 0}
variable "edge_count" {default = 2}
variable "control_volume_size" {default = 20}
variable "worker_volume_size" {default = 20}
variable "edge_volume_size" {default = 20}
variable "control_cpu" { default = 1 }
variable "worker_cpu" { default = 1 }
variable "edge_cpu" { default = 1 }
variable "control_ram" { default = 4096 }
variable "worker_ram" { default = 4096 }
variable "edge_ram" { default = 4096 }

variable "domain" { default = "" }
variable "dns_server1" { default = "" }
variable "dns_server2" { default = "" }

resource "vsphere_virtual_machine" "mi-control-nodes" {
  name = "${var.short_name}-control-${format("%02d", count.index+1)}"
  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.control_cpu}"
  memory = "${var.control_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    ##TODO - make this a var and include default false, add note to vsphere-sample.tf
    use_sdrs = true
    #size = "${var.control_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-control-${format("%02d", count.index+1)}-disk1"
  }

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "control"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
      user = "${var.ssh_user}"
      key_file = "${var.ssh_key}"
      host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.control_count}"
}

resource "vsphere_virtual_machine" "mi-worker-nodes" {
  name = "${var.short_name}-worker-${format("%03d", count.index+1)}"
  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.worker_cpu}"
  memory = "${var.worker_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.worker_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-worker-${format("%02d", count.index+1)}-disk1"
  }
  ## TODO: add docker vol to workers
  disk {
    use_sdrs = true
    ## TODO: add docker_vol_disk_size to var file
    size = "50"

    ## TODO: add docker_vol_disk_type to var file
    ##type = "${var.disk_type}"
    type = "thin"
    datastore = "${var.datastore}"
    name = "${var.short_name}-worker-${format("%02d", count.index+1)}-disk1"
  }
  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "worker"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
      user = "${var.ssh_user}"
      key_file = "${var.ssh_key}"
      host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.worker_count}"
}

resource "vsphere_virtual_machine" "mi-kubeworker-nodes" {
  name = "${var.short_name}-kubeworker-${format("%03d", count.index+1)}"

  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.worker_cpu}"
  memory = "${var.worker_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.worker_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-kubeworker-${format("%02d", count.index+1)}-disk1"
  }

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "kubeworker"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
      user = "${var.ssh_user}"
      key_file = "${var.ssh_key}"
      host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.kubeworker_count}"
}

resource "vsphere_virtual_machine" "mi-edge-nodes" {
  name = "${var.short_name}-edge-${format("%02d", count.index+1)}"
  datacenter = "${var.datacenter}"
  folder = "${var.folder}"
  cluster = "${var.cluster}"
  resource_pool = "${var.pool}"

  vcpu = "${var.edge_cpu}"
  memory = "${var.edge_ram}"

  linked_clone = "${var.linked_clone}"

  disk {
    use_sdrs = true
    #size = "${var.edge_volume_size}"
    template = "${var.template}"
    type = "${var.disk_type}"
    datastore = "${var.datastore}"
    #name = "${var.short_name}-edge-${format("%02d", count.index+1)}-disk1"
  }

  network_interface {
    label = "${var.network_label}"
  }

  domain = "${var.domain}"
  dns_servers = ["${var.dns_server1}", "${var.dns_server2}"]

  custom_configuration_parameters = {
    role = "edge"
    ssh_user = "${var.ssh_user}"
    consul_dc = "${var.consul_dc}"
  }

  connection = {
    user = "${var.ssh_user}"
    key_file = "${var.ssh_key}"
    host = "${self.network_interface.0.ipv4_address}"
  }

  provisioner "remote-exec" {
    inline = [ "sudo hostnamectl --static set-hostname ${self.name}" ]
  }

  count = "${var.edge_count}"
}

output "control_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-control-nodes.*.network_interface.0.ipv4_address)}"
}

output "worker_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-worker-nodes.*.network_interface.0.ipv4_address)}"
}

output "kubeworker_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-kubeworker-nodes.*.network_interface.ipv4_address)}"
}

output "edge_ips" {
  value = "${join(\",\", vsphere_virtual_machine.mi-edge-nodes.*.network_interface.0.ipv4_address)}"
}

Please include folders in search for network_interface labels in vsphere_virtual_machine provider

This issue was originally opened by @xantheran as hashicorp/terraform#5843. It was migrated here as part of the provider split. The original body of the issue is below.


I have been running into an issue with the vsphere_virtual_machine provider. My networks are in a folder off of root, so I need to include the folder in the label field under network_interface.

  network_interface {
    label = "Hosted_Network/VL74_dvPortGroup"
    ipv4_address = "10.X.Y.Y"
    ipv4_prefix_length = "24"
  }

The VM gets provisioned just fine; however, the folder is not taken into account when returning the label for the network_interface. This results in the error below if I taint the resource:

Error applying plan:

1 error(s) occurred:

* vsphere_virtual_machine.base: network '*VL74_dvPortGroup_IAP' not found

If I leave the folder in the label field, terraform forces a new resource since it detects a change in the rendered plan:

-/+ module.stage.couchbase-data.vsphere_virtual_machine.base
    cluster:                                "TerraformTest Hosting Cluster" => "TerraformTest Hosting Cluster"
    datacenter:                             "ZZ Datacenter" => "ZZ Datacenter"
    disk.#:                                 "1" => "1"
    disk.0.datastore:                       "Hosted_Datastores/VMSTOR.1" => "Hosted_Datastores/VMSTOR.1"
    disk.0.template:                        "devops/zz/Templates/ubuntu1404-template" => "devops/zz/Templates/ubuntu1404-template"
    disk.0.type:                            "thin" => "thin"
    dns_servers.#:                          "2" => "2"
    dns_servers.0:                          "10.X.X.X" => "10.X.X.X"
    dns_servers.1:                          "10.X.X.Y" => "10.X.X.Y"
    domain:                                 "devops.internal" => "devops.internal"
    folder:                                 "devops/zz/VMs/Staging/DATA" => "devops/zz/VMs/Staging/DATA"
    gateway:                                "10.X.Y.1" => "10.X.Y.1"
    memory:                                 "8192" => "8192"
    name:                                   "STAGE-COUCHBASE112" => "STAGE-COUCHBASE112"
    network_interface.#:                    "1" => "1"
    network_interface.0.ip_address:         "" => "<computed>"
    network_interface.0.ipv4_address:       "10.X.Y.Y" => "10.X.Y.Y"
    network_interface.0.ipv4_prefix_length: "24" => "24"
    network_interface.0.ipv6_address:       "<scrubbed>" => "<computed>"
    network_interface.0.ipv6_prefix_length: "<scrubbed>" => "<computed>"
    network_interface.0.label:              "VL74_dvPortGroup" => "Hosted_Network/VL74_dvPortGroup" (forces new resource)
    network_interface.0.subnet_mask:        "" => "<computed>"
    time_zone:                              "America/Chicago" => "America/Chicago"
    vcpu:                                   "4" => "4"

Crash Report when applying a vsphere machine

This issue was originally opened by @ethankhall as hashicorp/terraform#4302. It was migrated here as part of the provider split. The original body of the issue is below.


I was running terraform apply and had a crash.

Output from the console before the crash:

vsphere_virtual_machine.ldap: Creating...
  cluster:                         "" => "main"
  datacenter:                      "" => "home"
  disk.#:                          "" => "2"
  disk.0.datastore:                "" => "ssd1"
  disk.0.template:                 "" => "templates/centos71-v0.5.18"
  disk.1.datastore:                "" => "ssd1"
  disk.1.size:                     "" => "10"
  domain:                          "" => "int.ehdev.io"
  memory:                          "" => "768"
  name:                            "" => "infrastructure-ldap-1"
  network_interface.#:             "" => "2"
  network_interface.0.ip_address:  "" => "<computed>"
  network_interface.0.label:       "" => "Infa VM"
  network_interface.0.subnet_mask: "" => "<computed>"
  network_interface.1.ip_address:  "" => "<computed>"
  network_interface.1.label:       "" => "Infa VM"
  network_interface.1.subnet_mask: "" => "<computed>"
  time_zone:                       "" => "America/Los_Angeles"
  vcpu:                            "" => "1"
Error applying plan:

1 error(s) occurred:

* vsphere_virtual_machine.ldap: unexpected EOF

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
panic: interface conversion: interface is nil, not string

Terraform Version

> terraform --version
Terraform v0.6.8

Crash Log

crash.txt

vSphere: bad documentation for network_interface.label

This issue was originally opened by @ChloeTigre as hashicorp/terraform#7385. It was migrated here as part of the provider split. The original body of the issue is below.


Abstract

This is either a documentation bug or a badly named field.

Terraform Version

I run
Terraform v0.7.0-dev (f312ba1f28b91deb988e41778f840703f418bcdc+CHANGES)

Affected Resource(s)

vsphere_virtual_machine

Terraform Configuration Files

resource "vsphere_folder" "testfolder" {
  path       = "dev0/test-tf"
  datacenter = "dc0"
}

resource "vsphere_virtual_machine" "testvm2" {
  name       = "testvm-chloe2"
  folder     = "${vsphere_folder.testfolder.path}"
  vcpu       = 2
  memory     = 512
  datacenter = "dc0"
  cluster    = "cluster0"

  disk {
    template = "DEV/templates/debian-jessie-template-01"
    datastore  = "DOM1_DEV01"
  }

  network_interface {
    label = "dvpg0/portgroup1"
  }

  skip_customization = true
}

Expected Behavior

The label should be the label of the NIC, or the field should be renamed to connect_to or something similar.

Actual Behavior

The label field designates which network you want to connect the NIC to. There is no way to specify a name for the NIC.

Steps to Reproduce

terraform apply

Important Factoids

vSphere 5.5, Terraform in a dev branch (working on the vSphere provider)

References

None.

vsphere: virtual_machine creation hangs on waiting for VM

This issue was originally opened by @partamonov as hashicorp/terraform#7479. It was migrated here as part of the provider split. The original body of the issue is below.


terraform-0.6.16
Terraform version: 0.7.0 rc2 46a0709bba004d8b6e0eedad411270b3ae135a9e

Example resource:

resource "vsphere_virtual_machine" "web" {
  name   = "terraform-web"
  folder = "${vsphere_folder.frontend.path}"
  vcpu   = 1
  memory = 512
  cluster = "Cluster"
  domain = "TESTDOMAIN"
  datacenter = "DC"
  skip_customization = true

  network_interface {
    label = "some"
    ipv4_address = "10.X.X.X"
    ipv4_prefix_length = "27"
    ipv4_gateway = "10.X.X.1"
  }

  disk {
    template = "Templates/template-vyos.1.1.7-basic"
    type = "eager_zeroed"
    datastore = "datastore"
  }
}

And it waits forever:

vsphere_virtual_machine.web: Still creating... (1m0s elapsed)
2016/07/04 17:00:41 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
2016/07/04 17:00:46 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
vsphere_virtual_machine.web: Still creating... (1m10s elapsed)
2016/07/04 17:00:47 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:00:47 [DEBUG] new vm: {VirtualMachine vm-770502} @ /VDC1/vm/Projects/terraform-test/terraform-web
2016/07/04 17:00:51 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
2016/07/04 17:00:53 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:00:53 [DEBUG] add cdroms: []
2016/07/04 17:00:53 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:00:53 [DEBUG] VM customization skipped
2016/07/04 17:00:53 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:00:53 [INFO] Created virtual machine: Projects/terraform-test/terraform-web
2016/07/04 17:00:53 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:00:53 [DEBUG] virtual machine resource data: &schema.ResourceData{schema:map[string]*schema.Schema{"vcpu":(*schema.Schema)(0x1c721a40), "memory":(*schema.Schema)(0x1c721ab0), "cluster":(*schema.Schema)(0x1c721c00), "linked_clone":(*schema.Schema)(0x1c721ce0), "cdrom":(*schema.Schema)(0x1c782230), "domain":(*schema.Schema)(0x1c721dc0), "dns_suffixes":(*schema.Schema)(0x1c721ea0), "enable_disk_uuid":(*schema.Schema)(0x1c782000), "custom_configuration_parameters":(*schema.Schema)(0x1c782070), "network_interface":(*schema.Schema)(0x1c782150), "windows_opt_config":(*schema.Schema)(0x1c7820e0), "memory_reservation":(*schema.Schema)(0x1c721b20), "datacenter":(*schema.Schema)(0x1c721b90), "resource_pool":(*schema.Schema)(0x1c721c70), "time_zone":(*schema.Schema)(0x1c721e30), "dns_servers":(*schema.Schema)(0x1c721f10), "name":(*schema.Schema)(0x1c7218f0), "folder":(*schema.Schema)(0x1c7219d0), "gateway":(*schema.Schema)(0x1c721d50), "skip_customization":(*schema.Schema)(0x1c721f80), "disk":(*schema.Schema)(0x1c7821c0)}, config:(*terraform.ResourceConfig)(nil), state:(*terraform.InstanceState)(nil), diff:(*terraform.InstanceDiff)(0x1c750d38), meta:map[string]string(nil), multiReader:(*schema.MultiLevelFieldReader)(0x1c6e83a0), setWriter:(*schema.MapFieldWriter)(0x1c6e8350), newState:(*terraform.InstanceState)(0x1c987140), partial:false, partialMap:map[string]struct {}(nil), once:sync.Once{m:sync.Mutex{state:0, sema:0x0}, done:0x1}, isNew:true}
2016/07/04 17:00:56 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
vsphere_virtual_machine.web: Still creating... (1m20s elapsed)
2016/07/04 17:01:01 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
2016/07/04 17:01:06 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
vsphere_virtual_machine.web: Still creating... (1m30s elapsed)
2016/07/04 17:01:11 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
2016/07/04 17:01:16 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
vsphere_virtual_machine.web: Still creating... (1m40s elapsed)
2016/07/04 17:01:21 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
2016/07/04 17:01:26 [DEBUG] vertex provider.vsphere (close), waiting for: vsphere_virtual_machine.web
vsphere_virtual_machine.web: Still creating... (1m50s elapsed)

I saw it wait for at least 25 minutes.

When I abort the run and try to re-run, I get:

* vsphere_virtual_machine.web: The guest operating system did not respond to a hot-remove request for device ethernet0 in a timely manner.
2016/07/04 17:11:54 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* vsphere_virtual_machine.web: The guest operating system did not respond to a hot-remove request for device ethernet0 in a timely manner.
2016/07/04 17:11:54 [ERROR] root: eval: *terraform.EvalOpFilter, err: 1 error(s) occurred:

* vsphere_virtual_machine.web: The guest operating system did not respond to a hot-remove request for device ethernet0 in a timely manner.
2016/07/04 17:11:54 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* vsphere_virtual_machine.web: The guest operating system did not respond to a hot-remove request for device ethernet0 in a timely manner.
2016/07/04 17:11:54 [TRACE] [walkApply] Exiting eval tree: vsphere_virtual_machine.web
2016/07/04 17:11:54 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.web

Can you tell me whether this is a Terraform issue or something with the template I'm using?

Also, the following output repeats from time to time: 2016/07/04 17:22:15 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:22:15 [DEBUG] add cdroms: [] 2016/07/04 17:22:15 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:22:15 [DEBUG] VM customization skipped 2016/07/04 17:22:15 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/07/04 17:22:15 [INFO] Created virtual machine: Projects/terraform-test/terraform-web

vsphere virtual machine resource resolves multiple networks

This issue was originally opened by @kzittritsch as hashicorp/terraform#5607. It was migrated here as part of the provider split. The original body of the issue is below.


I am trying to use the vsphere virtual machine resource but am running into an issue where vsphere is returning multiple networks due to (I think) a * being prefixed to the network label in the provider:
https://github.com/hashicorp/terraform/blob/master/builtin/providers/vsphere/resource_vsphere_virtual_machine.go#L657

// buildNetworkDevice builds VirtualDeviceConfigSpec for Network Device.
func buildNetworkDevice(f *find.Finder, label, adapterType string) (*types.VirtualDeviceConfigSpec, error) {
    network, err := f.Network(context.TODO(), "*"+label)
    if err != nil {
        return nil, err
    }

The terraform error message:

* vsphere_virtual_machine.app_server: path '*QA' resolves to multiple networks

My network_interface configuration inside my virtual machine resource config:

  network_interface {
    label = "QA"
    ipv4_address = "10.37.2.68"
    ipv4_prefix_length = "20"
  }

Using govc I can see how this wildcard resolution is causing a conflict because I have two networks that end in "QA":

ddckevinz@mbp-kevinz terraform $ govc ls /VT/network/ | grep QA
/VT/network/ERPQA
/VT/network/QA

It does not appear that I can provide a full path to the network in the label field (ie supplying label = "/VT/network/QA" results in an error cannot traverse type Network). I threw together a small test to confirm that dropping the * allows the Finder.Network() method to resolve each network individually.

In the current resource, short of renaming my networks in vsphere, is there a way to create a virtual machine with my "QA" network? I am using terraform 0.6.11 and vsphere 5.5 btw.

provider/vsphere: Crash when creating "vsphere_virtual_machine"

This issue was originally opened by @mehlert as hashicorp/terraform#4727. It was migrated here as part of the provider split. The original body of the issue is below.


Running terraform v0.6.9

Panic due to type issues? In vsphere our datastore is nested within a folder:

2016/01/18 16:14:11 [DEBUG] terraform-provider-vsphere: panic: reflect.Set: value of type mo.Folder is not assignable to type mo.Datastore
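
For context, a minimal sketch of the kind of configuration that exercises this path, assuming a datastore nested inside a folder as described above (all names below are placeholders, not taken from the reporter's environment):

resource "vsphere_virtual_machine" "example" {
  name       = "example-vm"
  datacenter = "dc0"
  vcpu       = 1
  memory     = 1024

  network_interface {
    label = "VM Network"
  }

  disk {
    template  = "templates/some-template"
    # Datastore referenced through its parent folder; this folder/datastore
    # layout is what appears to trigger the mo.Folder vs. mo.Datastore panic.
    datastore = "Datastore Folder/datastore1"
  }
}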

gist of crash.log

https://gist.github.com/mehlert/746b13d7881081528b7d

panic when duplicate names are used in vsphere

This issue was originally opened by @billyfoss as hashicorp/terraform#8716. It was migrated here as part of the provider split. The original body of the issue is below.


I am using a module with resources that use multiple instances.

In one case I assigned the same hostname to two resources. This caused an error when vSphere tried to create the duplicate resource. It is obviously a user error, but I am still reporting it since it triggered a panic.

The crash.log and console output are in the gist below, but the relevant section appears to be

2016/09/07 10:55:51 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/09/07 10:55:51 [ERROR] The name 'candsh003b' already exists.
2016/09/07 10:55:51 [DEBUG] plugin: terraform: vsphere-provider (internal) 2016/09/07 10:55:51 [DEBUG] new vm: {VirtualMachine vm-38261} @ /CANA/vm/Environments/SD/DEV-NFA-003/candsh003b
2016/09/07 10:55:51 [DEBUG] plugin: terraform: panic: runtime error: invalid memory address or nil pointer dereference
2016/09/07 10:55:51 [DEBUG] plugin: terraform: [signal SIGSEGV: segmentation violation code=0x1 addr=0x2a8 pc=0x1d9ae0f]
2016/09/07 10:55:51 [DEBUG] plugin: terraform:
2016/09/07 10:55:51 [DEBUG] plugin: terraform: goroutine 123 [running]:
2016/09/07 10:55:51 [DEBUG] plugin: terraform: panic(0x27f4e60, 0xc420014090)
2016/09/07 10:55:51 [DEBUG] plugin: terraform: /opt/go/src/runtime/panic.go:500 +0x1a1
2016/09/07 10:55:51 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/vendor/github.com/vmware/govmomi/object.VirtualMachine.Device(0xc4201e4780, 0xc4206e8e70, 0xe, 0xc4206e8e90, 0x8, 0xc4204dab70, 0x2f, 0x43ea700, 0xc42019b610, 0xc4202ffe00, ...)

Terraform Version

$ terraform -v
Terraform v0.7.3

Affected Resource(s)

terraform: vsphere-provider

Terraform Configuration Files

The files closer to the real ones are here:
https://gist.github.com/billyfoss/297e88d41eb76f6f666ddb7d82c2561c
Relevant subsection

resource "vsphere_virtual_machine" "harvester" {
  depends_on = ["vsphere_folder.folder"]

  count = "${var.harvester_count}"
  name = "${var.harvester_hostnames[count.index]}"
...}
where

# Note the duplicate on the last 2
variable harvester_hostnames { default = ["candsh003a", "candsh003b", "candsh003b"] }

Debug Output

https://gist.github.com/billyfoss/297e88d41eb76f6f666ddb7d82c2561c

Panic Output

https://gist.github.com/billyfoss/297e88d41eb76f6f666ddb7d82c2561c

Expected Behavior

Error message to the user.

Actual Behavior

Panic

Steps to Reproduce

  1. terraform apply

Important Factoids

Running against vSphere 6.0 to deploy Windows templates.

References

None known

vSphere: creating a VM with an existing vmdk will not work

This issue was originally opened by @ChloeTigre as hashicorp/terraform#8552. It was migrated here as part of the provider split. The original body of the issue is below.


Hello,

I'm working on the vSphere code of Terraform and have noticed a strange quirk.

@tkak, @dkalleg you worked on that so I'm pinging you guys but of course anyone may contribute.

At the end of the addHardDisk func (L1323 in master), we see:

return vm.AddDevice(context.TODO(), disk)
    } else {
        log.Printf("[DEBUG] addHardDisk: Disk already present.\n")
        return nil
    }

When you specify a disk with a vmdk, the diskPath necessarily already points to an existing VMDK, hence the disk will never be attached to the VM. There's no reason whatsoever to specify a vmdk path if said disk does not already exist.

So I guess this is a bug.

I'll work on a patch since I need this feature. Can you confirm that it does not work as expected?
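
For reference, a minimal sketch of the configuration being described (a disk block that points at a VMDK already present on the datastore), using placeholder names; the vmdk attribute is the same one used in the "VM not found" issue earlier in this document:

resource "vsphere_virtual_machine" "from_existing_vmdk" {
  name   = "example-vm"
  vcpu   = 1
  memory = 1024

  network_interface {
    label = "VM Network"
  }

  disk {
    datastore = "datastore1"
    # This VMDK already exists on the datastore, so addHardDisk takes the
    # "Disk already present" branch quoted above and never attaches it.
    vmdk      = "existing-disk.vmdk"
  }
}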

vsphere_virtual_machine and vsphere_folder race condition on destroy

This issue was originally opened by @sprokopiak as hashicorp/terraform#6637. It was migrated here as part of the provider split. The original body of the issue is below.


I'm encountering a race condition when using both vsphere_folder and vsphere_virtual_machine. Around 75% of the time I try to destroy my VM in a created folder, Terraform tries to destroy the folder before the VM, which causes an error to be thrown since the folder is not empty.

From what I've seen, the virtual machine destroy always completes before the folder destroy errors out, resulting in a vSphere environment where my VM was deleted but the folder remains. This is easily cleaned up by running terraform destroy -force again to destroy the folder.

Terraform Version

v0.6.16

Affected Resource(s)

  • vsphere_virtual_machine
  • vsphere_folder

Terraform Configuration Files

# Create a folder
resource "vsphere_folder" "web" {
    path = "SCP Testing/terraform"
    datacenter = "DEVESX"
}

# Create a virtual machine within the folder
resource "vsphere_virtual_machine" "web" {
    name = "terraform-web"
    folder = "${vsphere_folder.web.path}"
    datacenter = "DEVESX"
    vcpu = 4
    memory = 8192
    cluster = "General"

    network_interface {
        label = "dvPortGroup_8HourLease"
    }

    disk {
        template = "templates/CP - packer-virtualbox-iso-1462307446"
        datastore = "/DEVESX/datastore/VM Datastores/Nobackup"
    }
}

Debug Output

Output

Panic Output

N/A

Expected Behavior

The virtual machine should have been destroyed, and then the folder should have been deleted.

Actual Behavior

The folder tried to get deleted before deleting the virtual machine. The folder failed to be deleted because the virtual machine was still inside the folder. The virtual machine ended up being deleted on vSphere, but the folder remained.

Steps to Reproduce

  1. terraform apply
  2. terraform destroy -force

Important Factoids

vSphere 5.5

References

N/A

vSphere Disk Controllers & Drives

This issue was originally opened by @ooOOJavaOOoo as hashicorp/terraform#7776. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.7.0-rc3 (3f4857a07a24f3c9e2db6b4458fbf5be19a8b256)

Affected Resource(s)

vsphere_virtual_machine
vsphere_virtual_machine.disk

Terraform Configuration Files

module "dc_vsphere" {
  source = "git::https://itsals.visualstudio.com/DefaultCollection/EnterpriseInfrastructure/_git/tf_provision//dc/vsphere"
  stg = "${var.stg}"
  vsphere_location = "${var.vsphere_location}"
  vsphere_security_zone = "${var.vsphere_security_zone}"
}

provider "vsphere" {
  user = "${var.vsphere_user}"
  password = "${var.vsphere_password}"
  vsphere_server = "${module.dc_vsphere.vsphere_server}"
  allow_unverified_ssl = true
}

resource "vsphere_virtual_machine" "general" {
  count = "${var.count}"
  datacenter = "${module.dc_vsphere.datacenter}"
  cluster = "${module.dc_vsphere.general_cluster}"
  name = "${format("%s%s%s%02d", module.dc_vsphere.general_name_prefix, var.app_name, var.instance_name, count.index + 1)}"
  vcpu = "${var.cpu}"
  memory = "${var.memory}"
  windows_opt_config {
    domain = "${module.dc_vsphere.general_win_domain}"
    domain_user = "${var.domain_user}"
    domain_user_password = "${var.domain_password}"
    product_key = ""
  }
  network_interface {
    label = "${module.dc_vsphere.vlan_label_temp}"
  }
  disk {
    template = "${module.dc_vsphere.win2k8r2_template}"
    datastore = "${module.dc_vsphere.general_datastore}"
  }
  disk {
    datastore = "${module.dc_vsphere.general_datastore}"
    size = "100"
    type = "thin"
    controller_type = "scsi-paravirtual"
    name = "EDrive"
  }
  disk {
    datastore = "${module.dc_vsphere.general_datastore}"
    size = "50"
    type = "thin"
    controller_type = "scsi-paravirtual"
    name = "FDrive"
  }
  disk {
    datastore = "${module.dc_vsphere.general_datastore}"
    size = "50"
    type = "thin"
    controller_type = "scsi-paravirtual"
    name = "GDrive"
  }
  disk {
    datastore = "${module.dc_vsphere.general_datastore}"
    size = "50"
    type = "thin"
    controller_type = "scsi-paravirtual"
    name = "KDrive"
  }  
}

Debug Output

Panic Output

Expected Behavior

A disk and a SCSI disk controller should be created for each drive, and the drives should be attached (up to the 4 controllers supported by VMware).

Actual Behavior

2 SCSI disk controllers were created (1 of each type, default and scsi-paravirtual) and 2 drives were created.

Steps to Reproduce

  1. terraform apply

Important Factoids

We should be able to create controllers as a resource and then use them inside the vsphere_virtual_machine disk block. When the controller_id is specified in the disk block, it should attach the drive to that controller.
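
A purely hypothetical sketch of that proposal follows; neither a standalone controller resource nor a controller_id disk attribute exists in the provider, and the names below are invented only to illustrate the shape of the idea:

# Hypothetical: controllers defined as standalone resources.
resource "vsphere_virtual_disk_controller" "paravirtual_0" {
  type = "scsi-paravirtual"
}

resource "vsphere_virtual_machine" "general" {
  # ... other arguments as in the configuration above ...

  disk {
    datastore     = "${module.dc_vsphere.general_datastore}"
    size          = 100
    type          = "thin"
    name          = "EDrive"
    # Hypothetical attribute: attach this disk to the controller above.
    controller_id = "${vsphere_virtual_disk_controller.paravirtual_0.id}"
  }
}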

References

cannot set MAC address in network interface in vsphere provider

This issue was originally opened by @godfryd as hashicorp/terraform#5992. It was migrated here as part of the provider split. The original body of the issue is below.


Currently it is only possible to set the MAC address when one is deploying a VM to vSphere.
We are using static DHCP where we allocate an IP address to a MAC address.
So we would need to be able to configure only the MAC address on the network interface.
The system will then get its IP from DHCP when it is booted.
Could you add support for that?
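
A purely hypothetical sketch of what the requested configuration might look like; the mac_address attribute shown here does not exist in the provider and is only meant to illustrate the feature being asked for:

resource "vsphere_virtual_machine" "dhcp_reserved" {
  name   = "example-vm"
  vcpu   = 1
  memory = 1024

  network_interface {
    label = "VM Network"
    # Hypothetical attribute: pin the NIC's MAC address so the static DHCP
    # reservation hands out the expected IP when the system boots.
    mac_address = "00:50:56:aa:bb:cc"
  }

  disk {
    template  = "templates/some-template"
    datastore = "datastore1"
  }
}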

Assigning static IP to nodes - help!

This issue was originally opened by @adammy123 as hashicorp/terraform#7714. It was migrated here as part of the provider split. The original body of the issue is below.


Hi, so I have this problem using vSphere to apply terraform nodes and making the IP address static. I don't know why the IP address of the nodes keeps jumping - causing errors here and there as I carry on building the infrastructure. I have tried changing the ifcfg-eno(xxx) file (am running on CentOS 7) but am unable to restart the system network. This only applies to the VMs created by terraform apply.

Is there any other way to deploy the nodes with static IP addresses?

(am currently very new to this scene and would appreciate your help! sorry if I've missed out any relevant important information, will update asap!)
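
For reference, a minimal sketch of how a static IPv4 address is assigned directly on a vsphere_virtual_machine network_interface in other configurations quoted in this document; the label and addresses below are placeholders, and whether the mantl module used here exposes these parameters is a separate question:

resource "vsphere_virtual_machine" "static_ip_example" {
  name   = "example-vm"
  vcpu   = 2
  memory = 4096

  network_interface {
    label              = "VM Network"
    ipv4_address       = "10.0.4.50"
    ipv4_gateway       = "10.0.4.1"
    ipv4_prefix_length = "24"
  }

  disk {
    template  = "intern2016/Test-Adam6"
    datastore = "datastore"
  }
}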

Terraform Version

0.6.16

Terraform Configuration Files

provider "vsphere" {
  vsphere_server = "..."         # Your vCenter Address
  user = "..."                                         # vCenter Admin
  password = "..."
  allow_unverified_ssl = "true"
}

module "vsphere-dc" {
  source = "./terraform/vsphere"
  long_name = ""                                        # okay to leave blank
  short_name = "adam-mantl"                             # This will be the prefix for all nodes
  datacenter = "Intern2016"                                     # vCenter Data Center Object
  cluster = "First-Cluster"                             # vCenter Cluster Name
  pool = ""                                             # format is cluster_name/Resources/pool_name
  template = "intern2016/Test-Adam6"    # The VM to use as the source VM
  network_label = "VM Network"                          # The VMW Port-Group Name
  domain = "test.openberl.in"
  dns_server1 = "10.0.4.201"
  dns_server2 = "10.0.4.202"
  datastore = "datastore"                               # Datastore on Cluster
  control_count = 3                                     # How many control nodes to deploy
  worker_count = 3                                      # How many worker nodes to deploy
  edge_count = 2                                        # How many edge nodes to deploy
  kubeworker_count = 0
  control_volume_size = 30                              # Unused - size will be determined by template
  worker_volume_size = 30                               # Unused - size will be determined by template
  edge_volume_size = 30                                 # Unused - size will be determined by template
  ssh_user = "root"                                     # The user in the VM Template to use
  ssh_key = "~/.ssh/id_rsa"                             # Path to the private key on build box
  consul_dc = "mantl"

  #Optional Parameters
  folder = "intern2016/Mantl"                                   # Folder in vCenter, must exist
  control_cpu = "2"
  worker_cpu = "4"
  edge_cpu = "2"
  control_ram = "4096"
  worker_ram = "10240"
  edge_ram = "4096"
  disk_type = "thin"
  #linked_clone = "" # true or false, default is false.  If using linked_clones and have problems installing Mantl, revert to full clones
}

Debug Output

[root@adam-mantl-control-01 ~]# systemctl restart network
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.

[root@adam-mantl-control-01 ~]# systemctl status network.service
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: failed (Result: exit-code) since Wed 2016-07-20 12:14:56 UTC; 7s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 27054 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)

Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 network[27054]: RTNETLINK answers: File exists
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: network.service: control process exited, code=exited status=1
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: Failed to start LSB: Bring up/down networking.
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: Unit network.service entered failed state.
Jul 20 12:14:56 adam-mantl-control-01 systemd[1]: network.service failed.

[root@adam-mantl-control-01 ~]# journalctl -xe
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [WARN] raft: Clearing log suffix from 4033 to 4034
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:57 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:57 [ERR] consul.acl: Failed to get policy for 'ff481b98-5b46-4876-a7f6-70b7c5088ef7': No cl
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [ERR] agent: failed to sync remote state: rpc error: rpc error: rpc error: rpc error: rp
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: c error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: 
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rp
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: ror: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc 
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: r: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc er
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: pc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error:
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rp
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: rror: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: or: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc e
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: : rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc error: rpc err
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.161:8300
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [WARN] raft: Clearing log suffix from 4035 to 4036
Jul 20 12:14:58 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:58 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.156:8300->10.0.134.155:6072
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.160:8300
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Clearing log suffix from 4037 to 4037
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Clearing log suffix from 4038 to 4038
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:00 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:00 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.151:8300->10.0.134.153:4772
Jul 20 12:15:01 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:01 [ERR] agent: failed to sync remote state: No cluster leader
Jul 20 12:15:01 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:01 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:01 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:01 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:15:02 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:02 [ERR] agent: failed to sync remote state: No cluster leader
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Election timeout reached, restarting election
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [ERR] consul: failed to wait for barrier: node is not the leader
Jul 20 12:15:03 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:03 [WARN] raft: Clearing log suffix from 4039 to 4039
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Rejecting vote request from 10.0.134.161:8300 since we have a leader: 10.0.
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [ERR] memberlist: Conflicting address for adam-mantl-control-01. Mine: 10.0.134.156:8301
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [ERR] serf: Node name conflicts with another node at 10.0.134.151:8301. Names must be un
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [WARN] raft: Clearing log suffix from 4040 to 4041
Jul 20 12:15:04 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:04 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:15:05 adam-mantl-control-01 sudo[27293]:   consul : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/docker ps -a --format {{.Image}}\t{{.Status}}\t{{.N
Jul 20 12:15:05 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:05 [WARN] agent: Check 'distributive-consul-checks' is now warning
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.161:8300
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [WARN] raft: Clearing log suffix from 4042 to 4042
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.156:8300->10.0.134.155:6073
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [WARN] raft: Clearing log suffix from 4043 to 4043
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [ERR] raft-net: Failed to flush response: write tcp 10.0.134.151:8300->10.0.134.155:5060
Jul 20 12:15:06 adam-mantl-control-01 consul[9330]: 2016/07/20 12:15:06 [ERR] agent: failed to sync remote state: No cluster leader
[root@adam-mantl-control-01 ~]# journalctl -xe > crash.txt
[root@adam-mantl-control-01 ~]# ls
anaconda-ks.cfg  crash.txt
[root@adam-mantl-control-01 ~]# nano crash.txt

  GNU nano 2.3.1                                          File: crash.txt                                                                                           

-- Logs begin at Wed 2016-07-20 10:29:28 UTC, end at Wed 2016-07-20 12:17:33 UTC. --
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as a $
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [WARN] raft: AppendEntries to 10.0.134.160:8300 rejected, sending older logs (next: 4019)
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:46 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:46 [WARN] raft: Clearing log suffix from 4020 to 4021
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as a $
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [WARN] raft: Clearing log suffix from 4022 to 4023
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [ERR] raft: Failed to get log at index 4022: log not found
Jul 20 12:14:47 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:47 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [ERR] memberlist: Conflicting address for adam-mantl-control-01. Mine: 10.0.134.156:8301 Th$
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [ERR] serf: Node name conflicts with another node at 10.0.134.151:8301. Names must be uniqu$
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Heartbeat timeout reached, starting election
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.156:8300
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Remote peer 10.0.134.151:8300 does not have local node 10.0.134.156:8300 as a $
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [WARN] raft: Clearing log suffix from 4024 to 4025
Jul 20 12:14:49 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:49 [ERR] consul: failed to wait for barrier: leadership lost while committing log
Jul 20 12:14:51 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:51 [WARN] raft: Duplicate RequestVote from candidate: 10.0.134.161:8300
Jul 20 12:14:51 adam-mantl-control-01 consul[9330]: 2016/07/20 12:14:51 [WARN] raft: Clearing log suffix from 4026 to 4026
Jul 20 12:14:51 adam-mantl-control-01 sshd[27010]: Accepted password for root from 10.128.16.209 port 55620 ssh2
Jul 20 12:14:52 adam-mantl-control-01 kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
Jul 20 12:14:52 adam-mantl-control-01 systemd[1]: Created slice user-0.slice.
-- Subject: Unit user-0.slice has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-0.slice has finished starting up.
--
-- The start-up result is done.
Jul 20 12:14:52 adam-mantl-control-01 systemd[1]: Starting user-0.slice.
-- Subject: Unit user-0.slice has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-0.slice has begun starting up.
Jul 20 12:14:52 adam-mantl-control-01 systemd[1]: Started Session 25 of user root.
-- Subject: Unit session-25.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-25.scope has finished starting up.
--
-- The start-up result is done.
Jul 20 12:14:52 adam-mantl-control-01 systemd-logind[782]: New session 25 of user root.
-- Subject: A new session 25 has been created for user root
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

Expected Behavior

Able to restart the network service after editing the ifcfg file to enable static IP addressing in place of DHCP (a provider-side alternative is sketched after the reproduction steps below).

Actual Behavior

Unable to restart network

Steps to Reproduce

  1. terraform get, plan, apply
  2. systemctl restart network
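
As a point of comparison for the Expected Behavior above: rather than editing ifcfg files inside the guest and restarting the network service, a static address can also be requested through the provider's guest customization. The following is a minimal sketch only, reusing names from the variables and logs above and assuming the legacy vsphere_virtual_machine schema shown later in this document; the prefix length and gateway are illustrative assumptions rather than values from the original report.

resource "vsphere_virtual_machine" "control" {
  name       = "adam-mantl-control-01"   # illustrative name
  datacenter = "Intern2016"
  cluster    = "First-Cluster"
  vcpu       = 2
  memory     = 4096

  network_interface {
    label              = "VM Network"     # port group from the variables above
    ipv4_address       = "10.0.134.156"   # address taken from the logs above
    ipv4_prefix_length = "24"             # assumption
  }

  gateway = "10.0.134.1"                  # assumption

  disk {
    datastore = "datastore"
    template  = "intern2016/Test-Adam6"
  }
}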

vsphere crash

This issue was originally opened by @deasmi as hashicorp/terraform#5945. It was migrated here as part of the provider split. The original body of the issue is below.


crash.log here GIST
Running latest 0.6.14

Recently rebooted the vCenter Server and had a few SSO issues, which are now fixed, at least for the vSphere Client, but to my uneducated eyes this looks to be past the logon stage.

vsphere windows breaks due to invalid time_zone syntax

This issue was originally opened by @ydnitin as hashicorp/terraform#8213. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

I am having trouble creating a Windows VM via vSphere; terraform apply finishes with an error (see below).

Terraform Version

Terraform v0.7.0

Affected Resource(s)

  • vsphere

Terraform Configuration Files

# Create a virtual machine
resource "vsphere_virtual_machine" "test" {
  datacenter = "Datacentre"
  name   = "xxxxxxxx"
  vcpu   = 8
  memory = 65536
  cluster = "Temp Dev"
  disk {
        datastore = "xxxxxxxx"
        template = "windows-2012-R2"
  }
  network_interface {
        label =  "xxxxxx"
        ipv4_address = "10.100.160.110"
        ipv4_prefix_length = "24"
  }

  gateway = "10.100.160.1"
  time_zone = "New Zealand Standard Time"
  domain = "xx.xxx.xx.xx"
  dns_suffixes = ["xx.xxx.xx.xx"]
  dns_servers = ["10.100.113.5"]
}

Error reported:

Error applying plan:

1 error(s) occurred:

* vsphere_virtual_machine.test: Error converting TimeZone: strconv.ParseInt: parsing "New Zealand Standard Time": invalid syntax

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Expected Behavior

Should have ended up with a Windows VM with time_zone set to New Zealand time.

Actual Behavior

Created a VM with no network adapter. Apply finished with the error above, and the Terraform state did not register that it had created the VM, so the VM had to be deleted manually.
I also tried without the time_zone argument; it then defaults to "Etc/UTC", which produces the same error and behaviour. terraform plan indicates that it will create the same VM again: "Plan: 1 to add, 0 to change, 0 to destroy."

Steps to Reproduce

  1. terraform apply

This is my first time creating a Windows VM with Terraform. Any help is much appreciated.
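
Note on the error itself: the strconv.ParseInt failure suggests that, for Windows guest customization, this provider version tries to parse time_zone as an integer rather than as a time zone name. Below is a minimal sketch of a possible workaround, under the assumption that a numeric Microsoft time zone index is expected (290 is Microsoft's index for "New Zealand Standard Time"); this is inferred from the error message, not confirmed provider behaviour.

resource "vsphere_virtual_machine" "test" {
  # ... all other arguments as in the configuration above ...

  # Assumption: Windows customization takes a numeric Microsoft time zone
  # index rather than a name; 290 = "New Zealand Standard Time" (UTC+12).
  time_zone = "290"
}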

vsphere: limited user permissions

This issue was originally opened by @jsw94583 as hashicorp/terraform#8125. It was migrated here as part of the provider split. The original body of the issue is below.


terraform -version
Terraform v0.6.16

We are using the required permissions based on the documentation below.

https://www.terraform.io/docs/providers/vsphere/

We created a user in vSphere with these limited permissions, not admin.

When we run the apply, we get this error in the trace log; the line numbers are from vi.

676416 2016/08/04 11:07:12 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:12 [DEBUG] template: &object.VirtualMachine{Common:object.Common{c:(*vim25.Client)(0x8221e4c80)       , r:types.ManagedObjectReference{Type:"VirtualMachine", Value:"vm-2333"}}, InventoryPath:"/apcera-sf2/vm/brie-dc2/base-template"}
676417 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] resource pool: &object.ResourcePool{Common:object.Common{c:(*vim25.Client)(0x8221e4c       80), r:types.ManagedObjectReference{Type:"ResourcePool", Value:"resgroup-41"}}, InventoryPath:"/apcera-sf2/host/172.27.0.31/Resources"}
676418 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] folder: "brie-dc2"
676419 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] datastore: &object.Datastore{Common:object.Common{c:(*vim25.Client)(0x8221e4c80), r:       types.ManagedObjectReference{Type:"Datastore", Value:"datastore-44"}}, InventoryPath:"/apcera-sf2/datastore/datastore.31"}
676420 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] relocate type: [moveAllDiskBackingsAndDisallowSharing]
676421 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] relocate spec: {{} <nil> <nil> 0x8221743a0 moveAllDiskBackingsAndDisallowSharing 0x8       22174380 <nil> [{{} 2000 {Datastore datastore-44}  0x8223641a0 []}]  [] []}
676422 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] ipv4 gateway: 172.27.1.1
676423 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] ipv4 address: 172.27.1.246
676424 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] ipv4 prefix length: 22
676425 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] ipv4 subnet mask: 255.255.252.0
676426 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] network configs: {{} 0x822697760 255.255.252.0 [172.27.1.1] 0x8223d2120 []    }
676427 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] virtual machine config spec: {{}      [] []  0 0 <nil> <nil>      <nil> <nil> <nil>        <nil> <nil> 4 1 8192 <nil> <nil> <nil> <nil> <nil> [] <nil> 0x8223d2150 <nil> <nil> <nil> <nil> [] []  <nil> <nil> <nil> <nil> <nil> <nil> <nil>  0 <nil> <nil> <nil> <n       il> <nil> <nil> [] <nil>}
676428 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] starting extra custom config spec: map[]
676429 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] custom spec: {{} <nil> 0x8228e0640 {{} [vsphere.local] [8.8.8.8 8.8.4.4]} [{{}  {{}        0x822697760 255.255.252.0 [172.27.1.1] 0x8223d2120 []    }}] []}
676430 2016/08/04 11:07:13 [DEBUG] terraform-provider-vsphere: 2016/08/04 11:07:13 [DEBUG] clone spec: {{} {{} <nil> <nil> 0x8221743a0 moveAllDiskBackingsAndDisallowSharing 0x       822174380 <nil> [{{} 2000 {Datastore datastore-44}  0x8223641a0 []}]  [] []} false 0x82218d080 <nil> false <nil> <nil>}
676431 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalWriteState
676432 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalApplyProvisioners
676433 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalIf
676434 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalWriteDiff
676435 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalIf
676436 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalWriteState
676437 2016/08/04 11:07:13 [DEBUG] root.DC2: eval: *terraform.EvalApplyPost
676438 2016/08/04 11:07:13 [ERROR] root.DC2: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:
676439 
676440 * vsphere_virtual_machine.instance-manager-SETIP: ServerFaultCode: Permission to perform this operation was denied.
676441 2016/08/04 11:07:13 [ERROR] root.DC2: eval: *terraform.EvalSequence, err: 1 error(s) occurred:
676442 
676443 * vsphere_virtual_machine.instance-manager-SETIP: ServerFaultCode: Permission to perform this operation was denied.
676444 2016/08/04 11:07:13 [ERROR] root.DC2: eval: *terraform.EvalOpFilter, err: 1 error(s) occurred:
676445 
676446 * vsphere_virtual_machine.instance-manager-SETIP: ServerFaultCode: Permission to perform this operation was denied.
676447 2016/08/04 11:07:13 [ERROR] root.DC2: eval: *terraform.EvalSequence, err: 1 error(s) occurred:
676448 
676449 * vsphere_virtual_machine.instance-manager-SETIP: ServerFaultCode: Permission to perform this operation was denied.
676450 2016/08/04 11:07:13 [TRACE] [walkApply] Exiting eval tree: vsphere_virtual_machine.instance-manager-SETIP

If needed, I can send a 'redacted' copy of our main.tf and other files.
