
terraform-provider-rancher2's Introduction

Terraform Provider for Rancher v2

Go Report Card

Requirements

  • Terraform >= 0.11.x
  • Go 1.13 to build the provider plugin
  • Docker 20.10.x to run acceptance tests

Building The Provider

Clone repository to: $GOPATH/src/github.com/terraform-providers/terraform-provider-rancher2

$ mkdir -p $GOPATH/src/github.com/terraform-providers; cd $GOPATH/src/github.com/terraform-providers
$ git clone git@github.com:terraform-providers/terraform-provider-rancher2

Enter the provider directory and build the provider

$ cd $GOPATH/src/github.com/terraform-providers/terraform-provider-rancher2
$ make build

Using the Provider

If you're building the provider, follow the instructions to install it as a plugin. After placing it into your plugins directory, run terraform init to initialize it. Documentation about the provider-specific configuration options can be found on the provider's website.
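For reference, a minimal provider block looks roughly like the sketch below; the URL and token value are placeholders, and token_key can be replaced by an access_key/secret_key pair as described in the provider documentation.

provider "rancher2" {
  api_url   = "https://rancher.example.com"   # placeholder Rancher API endpoint
  token_key = "token-xxxxx:yyyyyyyy"          # placeholder API token
}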

Developing the Provider

If you wish to work on the provider, you'll first need Go installed on your machine (version 1.13+, as listed in the requirements above). You'll also need to correctly set up a GOPATH, as well as add $GOPATH/bin to your $PATH.

To compile the provider, run make build. This will build the provider and put the provider binary in $GOPATH/bin.

$ make build
...
$ $GOPATH/bin/terraform-provider-rancher2
...

To just compile the provider binary into the repository path and test it with Terraform:

$ make bin
$ terraform init
$ terraform plan
$ terraform apply
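As a rough sketch of such a test (all values below are placeholders, and the catalog resource is just a cheap round-trip check), a minimal main.tf might look like this:

provider "rancher2" {
  api_url    = "https://rancher.example.com"   # placeholder Rancher API endpoint
  access_key = "<rancher-access-key>"          # placeholder credentials
  secret_key = "<rancher-secret-key>"
}

# A small, easily removable resource to verify that plan/apply round-trips
# through the locally built plugin.
resource "rancher2_catalog" "smoke_test" {
  name = "smoke-test"
  url  = "https://git.rancher.io/charts"
}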

See development process for more details.

Testing the Provider

In order to test the provider, simply run make test.

$ make test

In order to run the full suite of acceptance tests, you need a running Rancher system, a Rancher API key, and a working, imported Kubernetes cluster. The acceptance tests cover a Rancher server upgrade from v2.3.6 to v2.4.2.

To run the acceptance tests, simply run make testacc. This runs scripts/gotestacc.sh, which deploys all needed requirements, runs the tests, and cleans up.

$ make testacc

Due to network limitations of Docker for macOS and/or Windows, there is a way to run dockerized acceptance tests.

$ EXPOSE_HOST_PORTS=true make docker-testacc

To run the structure tests, run

$ go clean --testcache && go test -v ./rancher2

See test process for details on release testing (Terraform Maintainers Only).

Branching the Provider

The provider is branched into three release lines with major version alignment with Rancher 2.6, 2.7, and 2.8. The release/v2 branch with 2.0.0+ is aligned with Rancher 2.6, the release/v3 branch with 3.0.0+ is aligned with Rancher 2.7, and the master branch with 4.0.0+ is aligned with Rancher 2.8. The lifecycle of each major provider version is aligned with the lifecycle of each Rancher minor version. For example, provider versions 4.0.x which are aligned with Rancher 2.8.x will only be actively maintained until the EOM for Rancher 2.8.x and supported until EOL for Rancher 2.8.x.

See the Rancher support matrix for details.

Aligning major provider releases with minor Rancher releases means:

  • We can follow semver
  • We can cut patch/minor versions on an as-needed basis to fix bugs or add new resources
  • We have 'out of band' flexibility and are only tied to releasing a new version of the provider when we get a new 2.x Rancher minor version.

See the compatibility matrix for details.

If you are using Terraform to provision clusters on instances of both Rancher 2.7 and Rancher 2.8, you must keep a separate configuration in a separate directory for each provider version. Otherwise, Terraform will overwrite the .tfstate file every time you switch versions.
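For example (version constraints illustrative, Terraform 0.13+ required_providers syntax), each directory pins its own provider line so the state files never mix:

# rancher-2.7/versions.tf -- pins the release/v3 line (Rancher 2.7)
terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "~> 3.0"
    }
  }
}

# rancher-2.8/versions.tf -- pins the 4.x line (Rancher 2.8)
terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "~> 4.0"
    }
  }
}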

Releasing the Provider

As of provider versions 2.0.0 and 3.0.0, the provider is tied to Rancher minor releases but can be released 'out of band' within that minor version. For example, 4.0.0 will be released 1-2 weeks after Rancher 2.8.x, and fixes and features in the 4.0.0 release will be supported for clusters provisioned via Terraform on Rancher 2.8.x. A critical bug fix can be released 'out of band' as 4.0.1 and backported to release/v3 as 3.0.1. A new feature can also be released 'out of band' as 4.1.0 but not backported.

The RKE provider should be released after every RKE or KDM release. For example, if upstream RKE 1.3.15 was released, bump the RKE version to 1.3.15 and release the provider.

To release the provider:

  • Create a draft of the release and select create new tag for the version you are releasing
  • Create release notes by clicking Generate release notes
  • Copy the release notes to the CHANGELOG and update to the following format
# <tag version> (Month Day, Year)
FEATURES:
ENHANCEMENTS:
BUG FIXES:
  • Create a PR to update CHANGELOG
  • Copy the updated notes back to the draft release and save (DO NOT release with just the generated notes. Those are just a template to help you)
  • Undraft the release, which creates the tag and builds the release
  • If necessary - create a followup PR to edit ./docs/compatibility-matrix.md with the new version information

terraform-provider-rancher2's People

Contributors

a-blender, adamkpickering, armsnyder, axeal, bobvanb, bourne-id, cgriggs01, cloudnautique, dependabot[bot], drpebcak, eliyamlevy, futuretea, harrisonwaffel, jakefhyde, jeffb4, jiaqiluo, jlamillan, josh-diamond, jrosinsk, kkaempf, lots0logs, markusewalker, mjura, mouellet, paynejacob, rawmind0, snasovich, stefandanaita, thatmidwesterncoder, zparnold


terraform-provider-rancher2's Issues

Terraform provider release

Hey @rawmind0,

We can use this issue as a thread for releasing this under the HashiCorp ecosystem.

Are there any more commits required for the release of this provider?

Thanks,
Chris

when updating cluster returns error

Hi, maybe I'm doing something wrong, but if a cluster is in Updating status and I try to create a namespace, there is an error saying that the cluster is not active and Terraform exits. Why doesn't it wait until the timeout?

Unable to initialise rancher2 provider after upgrading to Terraform 0.12

Hi,

I am unable to initialise the rancher2 provider after upgrading to Terraform 0.12. Also, since rancher2 became officially available in the Terraform Registry, I removed the terraform-provider-rancher2 binary from ~/.terraform.d/plugins.

After this, I started to see this error:

Provider "rancher2" not available for installation.

A provider named "rancher2" could not be found in the Terraform Registry.

This may result from mistyping the provider name, or the given provider may
be a third-party provider that cannot be installed automatically.

In the latter case, the plugin must be installed manually by locating and
downloading a suitable distribution package and placing the plugin's executable
file in the following directory:
    terraform.d/plugins/darwin_amd64

Terraform detects necessary plugins by inspecting the configuration and state.
To view the provider versions requested by each module, run
"terraform providers".


Error: no provider exists with the given name

Feature: Add users to clusters created with terraform

Hi folks. It looks like we can create clusters but I would also like to be able to add members to the cluster I create with this module. It should be pretty trivial to add a user based on their username, right? Suggested mock up:

members = {
  "MYAD\tibers" = "owner"
  "MYAD\user1" = "member"
}

Support for Custom would be cool to have but at least give us the most general tools.

terraform destroy eks fails with Status [403 Forbidden]

When we delete an EKS cluster, the cluster actually gets deleted in the Rancher UI, but Terraform does not know that the cluster was deleted in Rancher, and it keeps retrying the destroy every 10s. Rancher responds with a "403 Forbidden" error:

module.k8s-rancher-sz-1.rancher2_cluster.eks_cluster: Destroying... (ID: c-qphhb)
module.k8s-rancher-sz-1.rancher2_cluster.eks_cluster: Still destroying... (ID: c-qphhb, 10s elapsed)
module.k8s-rancher-sz-1.rancher2_cluster.eks_cluster: Still destroying... (ID: c-qphhb, 20s elapsed)
module.k8s-rancher-sz-1.rancher2_cluster.eks_cluster: Still destroying... (ID: c-qphhb, 30s elapsed)
Releasing state lock. This may take a few moments...

Error: Error applying plan:

1 error(s) occurred:

* module.k8s-rancher-sz-1.rancher2_cluster.eks_cluster (destroy): 1 error(s) occurred:

* rancher2_cluster.eks_cluster: [ERROR] waiting for cluster (c-qphhb) to be removed: Bad response statusCode [403]. Status [403 Forbidden]. Body: [baseType=error, code=Forbidden, message=clusters.management.cattle.io "c-qphhb" is forbidden: User "u-67xj8" cannot get resource "clusters" in API group "management.cattle.io" at the cluster scope] from [https://rancher.infra.xxxxx.xxxxxxxx/v3/clusters/c-qphhb]

Openstack Cloud Provider shouldn't require user_id

When defining a cluster that uses Rancher-launched Kubernetes on OpenStack, I have to put a blank value for user_id. The actual requirement is to have either username or user_id specified but it seems we're requiring BOTH fields to be defined here.

resource "rancher2_cluster" "cluster" {
  name        = "${var.cluster_name}"
  description = "${var.cluster_description}"

  rke_config {
    cloud_provider {
      openstack_cloud_provider {
        global {
          auth_url    = "${var.openstack_auth_url}"
          tenant_id   = "${var.openstack_tenant_id}"
          domain_name = "${var.openstack_domain_name}"
          user_id     = ""    ### I shouldn't have to do this
          username    = "${var.openstack_username}"
          password    = "${var.openstack_password}"
        }
...

terraform resource to assign membership to a cluster.

I've been looking, but I don't think this exists today.

Is there a way for me, from Terraform, to assign member users / LDAP / Azure AD groups to a cluster?
It seems I have to create the clusters using Terraform and then go to the website to manually add the correct membership.
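For reference, the rancher2_cluster_role_template_binding resource can express this; a minimal sketch (the resource names, cluster reference, and group principal ID below are hypothetical) binds an Active Directory group as cluster owner:

resource "rancher2_cluster_role_template_binding" "ad_owners" {
  name               = "ad-owners"
  cluster_id         = "${rancher2_cluster.example.id}"
  role_template_id   = "cluster-owner"
  group_principal_id = "activedirectory_group://CN=k8s-admins,OU=Groups,DC=example,DC=com"
}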

Feature request: Improve management of AWS STS tokens

Current issue

For security, we want to be able to use STS tokens when creating an AWS-based Rancher cluster (EKS or EC2).

Currently Rancher and the provider allow defining the properties access_key, secret_key and session_token for the eks_config of the rancher2_cluster resource or the amazon_ec2_config of the rancher2_node_template resource.

In this ticket I will focus on the behaviour of the EKS clusters.

Passing these values does work during creation and updates, but the current approach has several caveats and is sub-optimal.

Here is a list of the issues:

  • "Delete" operations never refreshes the STS token:

    • The STS creds are passed during creation or updates and saved in Rancher. Delete will use the STS credentials from the latest create/update, which might be expired, causing the delete operation to fail.
    • Workaround: do an "apply" to force an update of the STS credentials in rancher before delete. But this is manual, error-prone, and potentially not desirable or possible to run an update.
  • STS Creds attributes change all the time:

    • Due to the nature of STS, creds will likely change on every Terraform execution, showing a diff of these attributes in the plan despite there being no real change in the cluster.
    • Workaround: ignore these attributes with lifecycle.ignore_changes, but then you cannot update the token.
  • Inconvenient to generate or get this STS token:

    • As the STS creds are passed by value, they must be generated and passed in from outside of Terraform, e.g. by using TF_VAR_ variables.

Alternative implementation

I want to suggest that the Rancher provider manage these credentials differently:

  • The AWS credentials are passed in the provider, so they are always available for all the operations: create/update/delete.

  • The resource will pull the credentials from the provider definition unless explicitly overridden. Changes to the creds in the provider shall be ignored in the plan.

  • The delete operation will first update the credentials of the cluster before deleting, to guarantee that Rancher keeps valid credentials.

  • The provider can accept an assume_role stanza to assume a role and generate an STS token, similar to the one used in the aws provider, or just get an STS token. Note this role would be different from the service_role for the cluster itself.

  • The provider can pull credentials from the usual sources for AWS: env vars, IAM profiles, config files, or explicitly.

Example of the definition

# A provider with static creds
provider "rancher2" {
  api_url    = "${var.rancher_url}"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
  aws_credentials { 
       access_key = "foo"
       secret_key = "bar"
  } 
}

# A provider getting creds from env/iam profile
provider "rancher2" {
  api_url    = "${var.rancher_url}"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
  aws_credentials { 
       env_or_iam_profile = true
  } 
}

# A provider with creds from the creds file
provider "rancher2" {
  api_url    = "${var.rancher_url}"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
  aws_credentials { 
     shared_credentials_file = "/Users/tf_user/.aws/creds"
     profile                 = "customprofile"
   } 
}

# A provider getting a STS token
provider "rancher2" {
  api_url    = "${var.rancher_url}"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
  aws_credentials { 
    access_key = "${var.rancher_access_key}"
    secret_key = "${var.rancher_secret_key}"
    sts_token {
       duration = 3600
     }
  } 
}

# A provider assuming a role
provider "rancher2" {
  api_url    = "${var.rancher_url}"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
  aws_credentials { 
    assume_role {
       role_arn     = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
       session_name = "rancher-provider"
       external_id  = "my-terraform-racher"
       duration = 3600
     }
  } 
}

Our use case

Our most interesting use case is getting the token via assume role using credentials from the system (env vars / IAM profile or properties files).

Out of scope

Other IaaS might have a similar issue. That is out of scope here.

terraform access keys should not be required when using iam profile (part 2/2)

Please see rancher/rancher#20817 for part 1/2

What kind of request is this (question/bug/enhancement/feature request):
bug

Steps to reproduce (least amount of steps as possible):
Set up s3BackupConfig, using the Rancher Terraform provider, on an instance that has an IAM profile attached that allows S3 backups. Do not use an access key or secret key.

Result:
You are not allowed to do this, as the access key and secret key are required in Terraform; however, you should be able to set this up if the instance has an IAM profile attached.

Add ability to provision applications within a project

Provisioning apps from Rancher or Helm charts would be super useful. In my case, I'm setting up our CI/CD environment and need to deploy the Jenkins helm chart within a project.

I can try to tackle this if it's a feature that's desired. If there's a workaround instead, I'll take that as well.

What is `docker_version` doing?

I don't understand what the docker_version field does in this provider.

When using engine_install_url you can choose the docker version to install on your host os.
The scripts for that are located here: https://github.com/rancher/install-docker
Maybe a reference to that repo would be helpful for others in the docs.

When omitting docker_version everything works ok as of my recent testing this morning.

terraform rancher2_etcd_backup does not work until cluster is created

What kind of request is this (question/bug/enhancement/feature request):
bug

Description
When using the rancher2_etcd_backup resource, we need to specify a cluster_id. Terraform will fail when creating this resource if the cluster is not yet active (when creating the cluster at the same time). A wait or timeout should be added to wait for the cluster to become active. It works when applying after the cluster has become active. Alternatively, it works when specified under rke_config/services/etcd/backup_config using the rancher2_cluster resource. I would just like to see the functionality be the same between the two resources.

Other details that may be helpful:

Example terraform config

resource "rancher2_cluster" "k8s_rancher_cluster" {
  name = "${var.cluster_name}"
  rke_config {
    cloud_provider {
      name = "aws",
      aws_cloud_provider {
        global {
          disable_security_group_ingress = true
        }
      }
    },
    kubernetes_version = "${var.k8s_version}",
    services {
      etcd {
        extra_args {
          election-timeout = "5000",
          heartbeat-interval = "500"
        },
        snapshot = "true",
        backup_config {
          enabled = true,
          interval_hours = 1,
          retention = 24
        }
      },
      kubelet {
        extra_args {
          enforce-node-allocatable = "pods",
          kube-reserved = "cpu=500m,memory=1Gi",
          kube-reserved-cgroup = "/docker",
          system-reserved = "cpu=500m,memory=1Gi",
          system-reserved-cgroup = "/system.slice",
          eviction-hard = "memory.available<500Mi"
        }
      }
    }
  }
}

resource "rancher2_cluster_role_template_binding" "cluster_owner" {
  name = "${var.cluster_name}"
  cluster_id = "${local.cluster_id}"
  role_template_id = "cluster-owner"
  group_principal_id = "activedirectory_group://CN=${var.ad_group_name}"
}

# resource "rancher2_etcd_backup" "k8s_etcd_backup" {
#   backup_config {
#     enabled = true
#     interval_hours = 1
#     retention = 24
#     s3_backup_config {
#       access_key = "${var.rancher2_access_key}"
#       bucket_name = "${var.etcd_bucket}/etcd/${var.cluster_name}"
#       endpoint = "s3.amazonaws.com"
#       region = "${module.aws.aws_region}"
#       secret_key = "${var.rancher2_secret_key}"
#     }
#   }
#   cluster_id = "${local.cluster_id}"
#   name = "${var.stack}"
# }

locals {
  cluster_reg_cmd = "${lookup(rancher2_cluster.k8s_rancher_cluster.cluster_registration_token[0], "node_command")}",
  cluster_id = "${lookup(rancher2_cluster.k8s_rancher_cluster.cluster_registration_token[0], "cluster_id")}"
}

output "cluster_id" {
  value = "${local.cluster_id}"
}

Plan always showing changes unless empty RKE config elements defined

When defining a cluster using Rancher-launched Kubernetes on Openstack, my terraform plan always shows changes to the openstack_cloud_provider sub-objects that I haven't explicitly defined in my TF code.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ rancher2_cluster.cluster
      rke_config.0.cloud_provider.0.openstack_cloud_provider.0.block_storage.#: "1" => "0"
      rke_config.0.cloud_provider.0.openstack_cloud_provider.0.metadata.#:      "1" => "0"
      rke_config.0.cloud_provider.0.openstack_cloud_provider.0.route.#:         "1" => "0"
      rke_config.0.network.0.weave_network_provider.#:                          "1" => "0"

To work around this, I have to add empty objects to the TF code, but this should really be handled by the provider.

resource "rancher2_cluster" "cluster" {
  name        = "${var.cluster_name}"
  description = "${var.cluster_description}"

  rke_config {
    cloud_provider {
      openstack_cloud_provider {
        global {
          ...
       }

        load_balancer {
          ...
        }

        #### WORKAROUND
        block_storage {}
        metadata {}
        route {}
      }
    }

    network {
      plugin = "weave"
      #### WORKAROUND
      weave_network_provider {
        password= ""
      }
    }
  }
}

Add data source "rancher2_project"

Background

We have been rolling our own custom-made Terraform provider for Rancher 2 (link). Once we realized that an official provider plugin was being developed, we abandoned the project, though.

Now we would like to switch to the new provider but found a few things still missing.

Missing feature: data "rancher2_project"

We would like to access some of the automatically-generated Rancher 2 projects (namely: "Default" and "System") to automatically associate some of our Kubernetes namespaces with them. (Our homegrown provider plugin offers this functionality: link to go sources)

Example

data "rancher2_project" "system" {
  cluster_id = "${var.my_rancher2_cluster_id}"
  name       = "System"
}

resource "kubernetes_namespace" "istio_system" {
  metadata {
    annotations {
      "field.cattle.io/projectId" = "${data.rancher2_project.system.id}"
    }
    name = "istio-system"
  }
}

[Feature Request] VSphere - Support Datastore Cluster

Hi,
I'm trying to create a new cluster in my vSphere environment. We have a datastore cluster, but the node templates only support a datastore, not a datastore cluster. Creating machines this way, we have to do a storage migration after the machine is created so that we don't leave a specific datastore with no free space.

rancher2_auth_config_openldap.certificate idempotency

I'm configuring rancher2_auth_config_openldap with a certificate read from a file (tried heredoc as well, same result) and can't figure out how to format it so that it doesn't get scheduled as changed over and over:

Terraform will perform the following actions:

  # rancher2_auth_config_openldap.openldap will be updated in-place
  ~ resource "rancher2_auth_config_openldap" "openldap" {
        access_mode                        = "unrestricted"
        allowed_principal_ids              = []
        annotations                        = {}
      ~ certificate                        = (sensitive value)
      [...]

I tried removing the trailing newline, removing line-breaks, putting \n in the file, etc., but couldn't find out how to work around whatever transformation is applied there. The fact that it only shows up as (sensitive value) isn't helping either.
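One possible workaround (not a fix) is to tell Terraform to ignore the perpetual diff on that single attribute with a lifecycle block; note that this also stops intentional certificate updates from being applied. A sketch, assuming the rest of the resource's arguments stay as you already have them and the CA file path is hypothetical:

resource "rancher2_auth_config_openldap" "openldap" {
  # ... existing arguments unchanged ...
  certificate = file("${path.module}/ldap-ca.pem")   # hypothetical CA file path

  lifecycle {
    ignore_changes = [certificate]
  }
}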

CPU/Memory Limits and Resources are not being set in namespaces

Hello,

While finalizing our Rancher2 Terraform stack, we discovered that the CPU/memory resources and limits are not being set in the namespaces. Here's an example of the namespace Terraform resource.

resource "rancher2_namespace" "namespace" {
  name = "${var.namespace}"
  project_id = "${var.project_id}"
  resource_quota {
    limit {
      requests_cpu = "500m"
      requests_memory = "2048Mi"
      limits_cpu = "${var.cpu_limit}"
      limits_memory = "${var.memory_limit}"
      requests_storage = "${var.storage_request}"
    }
  }
}

When we then describe the namespace, we see that the previous values are only stored under the field.cattle.io/resourceQuota annotation, but not under the resource limits or resource quota:

kubectl describe ns eng-bro
Name:         eng-bro
Labels:       cattle.io/creator=norman
              field.cattle.io/projectId=p-zxsmg
Annotations:  cattle.io/status:
                {"Conditions":[{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2019-08-09T20:51:47Z"},{"Type":"ResourceQuot...
              field.cattle.io/creatorId: user-6jgnv
              field.cattle.io/description:
              field.cattle.io/projectId: c-fxthd:p-zxsmg
              field.cattle.io/resourceQuota:
                {"limit":{"limitsCpu":"2000m","limitsMemory":"2048Mi","requestsCpu":"500m","requestsMemory":"2048Mi","requestsStorage":"250Gi"}}
              lifecycle.cattle.io/create.namespace-auth: true
Status:       Active

No resource quota.

No resource limits.

These changes do not show up in the Rancher2 UI either.


I can confirm that setting the resource quotas and limits using the normal kubectl derivatives works as expected.

Bootstrap mode does not authenticate correctly

EDIT: issue described in this post was due to user error - please refer to posts below for more info.


When using the bootstrap mode of the provider, at the moment you are required to set the credential properties to empty strings, like this:

provider "rancher2" {
  api_url = "https://rancher.example.com"
  access_key = ""
  secret_key = ""
  token_key = ""
  bootstrap = true
}

If you don't set them, as shown in the documentation, it throws an error saying that credentials may not be set in bootstrap mode. The check seems to assume wrong default values. I would expect bootstrap mode to work without providing empty credential strings.

Using Terraform v0.11.13 and rancher2 provider version 1.0.0.

Create EKS cluster on Terraform 0.12 causes terraform to crash

Hi,

I wanted to create an EKS cluster with Terraform by not specifying the VPC/subnet/security group and specifying everything else. But when I ran the Terraform config, it resulted in a Terraform crash. Here's the rancher2_cluster Terraform config.

resource "rancher2_cluster" "rancher-custom" {
  name        = var.rancher_cluster_name
  description = "${var.rancher_cluster_name} Kubernetes cluster"

  eks_config {
    access_key                      = var.AWS_ACCESS_KEY_ID
    secret_key                      = var.AWS_SECRET_ACCESS_KEY
    security_groups                 = var.existing_vpc ? [var.security_group_name] : [""]
    service_role                    = var.service_role != "" ? var.service_role : aws_iam_role.eks[0].name
    subnets                         = var.existing_vpc ? list(var.subnet_id1, var.subnet_id2, var.subnet_id3) : [""]
    ami                             = var.ami_id == "" ? data.aws_ami.distro.id : var.ami_id
    associate_worker_node_public_ip = var.associate_worker_node_public_ip
    instance_type                   = var.instance_type
    kubernetes_version              = var.kubernetes_version
    maximum_nodes                   = var.maximum_nodes
    minimum_nodes                   = var.minimum_nodes
    node_volume_size                = var.disk_size
    region                          = var.region
    session_token                   = var.session_token
    virtual_network                 = var.vpc_id
  }
}

access_key and secret_key are set in the environment. The terraform plan shows the following details:

  # module.cattle-eks.rancher2_cluster.rancher-custom will be created
  + resource "rancher2_cluster" "rancher-custom" {
      + annotations                             = (known after apply)
      + cluster_registration_token              = (known after apply)
      + default_pod_security_policy_template_id = (known after apply)
      + default_project_id                      = (known after apply)
      + description                             = "test-eks Kubernetes cluster"
      + driver                                  = (known after apply)
      + enable_network_policy                   = false
      + id                                      = (known after apply)
      + kube_config                             = (known after apply)
      + labels                                  = (known after apply)
      + name                                    = "test-eks"
      + system_project_id                       = (known after apply)

      + cluster_auth_endpoint {
          + ca_certs = (known after apply)
          + enabled  = (known after apply)
          + fqdn     = (known after apply)
        }

      + eks_config {
          + access_key                      = (sensitive value)
          + ami                             = "ami-0033d0dbf7f5e92af"
          + associate_worker_node_public_ip = true
          + kubernetes_version              = "1.12"
          + maximum_nodes                   = 3
          + minimum_nodes                   = 1
          + node_volume_size                = 20
          + region                          = "eu-central-1"
          + secret_key                      = (sensitive value)
          + security_groups                 = [
              + "",
            ]
          + service_role                    = "rancher-eks-role-6z4uzzvkou"
          + subnets                         = [
              + "",
            ]
          + user_data                       = (known after apply)
        }
    }

Basically, I want to utilize the VPC/subnet creation functionality ("Standard: Rancher generated VPC and Subnet") that is shown in the Rancher UI at the time of cluster creation.


terraform apply output:

panic: interface conversion: interface {} is nil, not string
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4:
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: goroutine 113 [running]:
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/rancher2.toArrayString(0xc0002ddef0, 0x1, 0x1, 0xf, 0xc000688e28, 0xa)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/rancher2/util.go:210 +0xfb
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/rancher2.expandClusterEKSConfig(0xc0002dddc0, 0x1, 0x1, 0xc0004cac10, 0x8, 0x60000c0008f9818, 0xffffffffffffffff, 0x3)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/rancher2/structure_cluster_eks_config.go:128 +0x8bd
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/rancher2.expandCluster(0xc0002e6c40, 0x28d5f70, 0xc0008f96a0, 0xc0004c9758)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/rancher2/structure_cluster.go:232 +0x75a
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/rancher2.resourceRancher2ClusterCreate(0xc0002e6c40, 0x1d7d440, 0xc00056e000, 0x2, 0x29026c0)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/rancher2/resource_rancher2_cluster.go:34 +0x40
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/helper/schema.(*Resource).Apply(0xc00045e780, 0xc00060c550, 0xc00000b300, 0x1d7d440, 0xc00056e000, 0x1bde201, 0xc000685948, 0xc000683ad0)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/helper/schema/resource.go:286 +0x363
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/helper/schema.(*Provider).Apply(0xc00045fb00, 0xc0004c9a00, 0xc00060c550, 0xc00000b300, 0xc000924868, 0xc000155e30, 0x1be0820)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/helper/schema/provider.go:285 +0x9c
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc000154578, 0x1f595e0, 0xc00066c0f0, 0xc0001407e0, 0xc000154578, 0xc0001a6cc0, 0x1c0c7e0)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/helper/plugin/grpc_provider.go:842 +0x87a
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x1d44ec0, 0xc000154578, 0x1f595e0, 0xc00066c0f0, 0xc00060c0f0, 0x0, 0x0, 0x0, 0xc000017200, 0x5de)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/github.com/hashicorp/terraform/internal/tfplugin5/tfplugin5.pb.go:3019 +0x23e
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002c1380, 0x1f62060, 0xc000575380, 0xc000492100, 0xc000142b70, 0x28d63e0, 0x0, 0x0, 0x0)
2019-06-27T17:54:24.415+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc/server.go:966 +0x4a2
2019-06-27T17:54:24.416+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc.(*Server).handleStream(0xc0002c1380, 0x1f62060, 0xc000575380, 0xc000492100, 0x0)
2019-06-27T17:54:24.416+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc/server.go:1245 +0xd61
2019-06-27T17:54:24.416+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0001520d0, 0xc0002c1380, 0x1f62060, 0xc000575380, 0xc000492100)
2019-06-27T17:54:24.416+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc/server.go:685 +0x9f
2019-06-27T17:54:24.416+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: created by github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
2019-06-27T17:54:24.416+0200 [DEBUG] plugin.terraform-provider-rancher2_v1.3.0_x4: 	/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-rancher2/vendor/google.golang.org/grpc/server.go:683 +0xa1
2019-06-27T17:54:24.419+0200 [DEBUG] plugin: plugin process exited: path=/Users/shantanudeshpande/cloudssky_contrib/src/github.com/kubernauts/tk8-provisioner-cattle-eks/.terraform/plugins/darwin_amd64/terraform-provider-rancher2_v1.3.0_x4 pid=417 error="exit status 2"
2019/06/27 17:54:24 [DEBUG] module.cattle-eks.rancher2_cluster.rancher-custom: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalMaybeTainted
2019/06/27 17:54:24 [TRACE] EvalMaybeTainted: module.cattle-eks.rancher2_cluster.rancher-custom encountered an error during creation, so it is now marked as tainted
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalWriteState
2019/06/27 17:54:24 [TRACE] EvalWriteState: removing state object for module.cattle-eks.rancher2_cluster.rancher-custom
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalApplyProvisioners
2019/06/27 17:54:24 [TRACE] EvalApplyProvisioners: rancher2_cluster.rancher-custom has no state, so skipping provisioners
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalMaybeTainted
2019/06/27 17:54:24 [TRACE] EvalMaybeTainted: module.cattle-eks.rancher2_cluster.rancher-custom encountered an error during creation, so it is now marked as tainted
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalWriteState
2019/06/27 17:54:24 [TRACE] EvalWriteState: removing state object for module.cattle-eks.rancher2_cluster.rancher-custom
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalIf
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalIf
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalWriteDiff
2019/06/27 17:54:24 [TRACE] module.cattle-eks: eval: *terraform.EvalApplyPost
2019/06/27 17:54:24 [ERROR] module.cattle-eks: eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2019/06/27 17:54:24 [ERROR] module.cattle-eks: eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2019/06/27 17:54:24 [TRACE] [walkApply] Exiting eval tree: module.cattle-eks.rancher2_cluster.rancher-custom
2019/06/27 17:54:24 [TRACE] vertex "module.cattle-eks.rancher2_cluster.rancher-custom": visit complete
2019/06/27 17:54:24 [TRACE] dag/walk: upstream of "module.cattle-eks.provider.rancher2 (close)" errored, so skipping
2019/06/27 17:54:24 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2019/06/27 17:54:24 [TRACE] dag/walk: upstream of "root" errored, so skipping
2019/06/27 17:54:24 [TRACE] statemgr.Filesystem: not making a backup, because the new snapshot is identical to the old
2019/06/27 17:54:24 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2019/06/27 17:54:24 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2019/06/27 17:54:24 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2019/06/27 17:54:24 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2019-06-27T17:54:24.452+0200 [DEBUG] plugin: plugin exited



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Project/Namespace dependencies on the cluster being active

This may be related to #12

I'm in a situation where I

  1. create rancher2_cluster resource called "clusterA"
  2. I then create x number of workers/etc/controllers to be part of "clusterA" with the user_data being injected from clusterA.cluster_registration_token[0].node_command
  3. I have x number of "rancher2_project" with a cluster_id of "clusterA.id"

Expected: wait till clusterA is active with the workers/etcd/controllers to create the projects

Actual: fails with an Error creating project: Cluster ID xxxxx is not active

rancher2 provider auth broken in 1.4.0

The access_key and secret_key attributes for the rancher2 provider consistently return a 401 Unauthorized with the 1.4.0 provider.

Setting token_key instead (with access_key and secret_key concatenated) works fine.
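For anyone hitting this until it is fixed, the workaround above looks roughly like the following sketch (variable names are illustrative); the token is the access key and secret key joined with a colon:

provider "rancher2" {
  api_url   = "https://rancher.example.com"
  token_key = "${var.rancher_access_key}:${var.rancher_secret_key}"
}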

Rancher cluster with invalid name returns ambiguous error

When creating a cluster using rancher2_cluster, if the cluster name has a space in it, this provider returns the following ambiguous error:

Error: rpc error: code = Unavailable desc = transport is closing

It would be nice if this were something more helpful, like "fix your cluster name".

terraform crashing with rancher2 provider after adding cloud_provider config

This issue was originally opened by @richardmosquera as hashicorp/terraform#21753. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.12.2
+ provider.rancher2 v1.1.0

Terraform Configuration Files

resource "rancher2_cluster" "ld-nms" {
  name        = "ld-nms"
  description = "ld-nms cluster"

  rke_config {
    network {
      plugin = "canal"
    }

    cloud_provider {
      vsphere_cloud_provider {
        global {
          insecure_flag = true
        }

        virtual_center {
          name        = "<ip.address>"
          user        = "${var.vsphere_username}"
          password    = "${var.vsphere_password}"
          datacenters = "<DATACENTER>"
        }

        workspace {
          server            = "<ip.address>>"
          folder            = "kubernetes"
          default_datastore = "<CLUSTER>/<DATASTORE>"
          datacenter        = "<DATACENTER>"
        }
      }
    }
  }
}

Debug Output

https://gist.github.com/richardmosquera/b365f3fe064c4a37ef82acc60bd3b415

Crash Output

https://gist.github.com/richardmosquera/4424ef0ebf713b758bf1a2b33770caa2

Expected Behavior

Cloud provider should have been added to the cluster config.

Actual Behavior

Terraform crashed

Steps to Reproduce

Everything was working fine, terraform plan reported no changes.
Crashes started happening when I added the cloud_config. However, even after removing it, it still crashed.

  1. terraform plan (0 changes)
  2. add cloud provider config as above
  3. terraform plan
  4. crash

Additional Context

To get terraform to not crash I had to delete the ld-nms cluster.

This is also not the first time it has happened. I've seen the same behavior with other resources.

References

engine_install_url defaults to null

As discussed on the Rancher Slack channel.

I've been working on a deployment from the ground up of a Rancher instance, and ran into what could be described as a bug.

The documentation for the rancher2_node_template resource lists engine_install_url as an optional parameter. As such, I omitted it from my deployment; however, when I attempted to create a cluster with rancher2_cluster, it consistently failed with an error that it was unable to install Docker.

Here is the node template portion of my tf file (a few things sanitized):

resource "rancher2_node_template" "node-template-sto3" {
  name                  = "node-template-sto3"
  openstack_config {
    auth_url            = "https://XXX.XXXXX.XXXX:5000/v3"
    availability_zone   = "av_zone"
    region              = "region"
    username            = "[email protected]"
    domain_name         = "Default"
    flavor_id           = "83d8b44a-26a0-4f02-a981-079446926445"
    image_id            = "9d7efc49-d5f4-4fb4-8a42-a7c082404a32"
    net_id              = "fe434879-33a2-4aaf-bb2c-60667a592156"
    password            = var.openstack_password
    ssh_user            = "ubuntu"
    tenant_name         = "tenant_name"
    sec_groups          = "86bd43c4-6afc-469f-b59d-152cfc2613aa,55e080ae-5e74-4b1f-855e-fd97f0ed1048,86b640da-b301-425c-84e9-e31bfeb2eec6"
   }
  depends_on            = [ "rancher2_node_driver.openstack"]
}

I fairly quickly tracked it down to the missing engine_install_url: in the UI, the script for installing Docker was empty, so I entered the script and it went forward.

When you create a node template with the UI, the value is pre-populated for you; as such, I believe it should either be a required parameter or have some type of default. Rancher does have a set of defaults somewhere, as it populates them in the UI. Possibly those could be pulled?
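Until the provider supplies a default, the workaround is to set engine_install_url explicitly on the template, for example pointing at one of the scripts from the rancher/install-docker repo mentioned above (the script version below is illustrative):

resource "rancher2_node_template" "node-template-sto3" {
  name               = "node-template-sto3"
  engine_install_url = "https://releases.rancher.com/install-docker/19.03.sh"

  openstack_config {
    # ... same arguments as in the template above ...
  }
}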

Wait for/skip waiting for specific state of the cluster

I am pretty sure the pre-release version of the provider did not wait for any kind of cluster state; could we restore this functionality?

It is impossible to update a Custom Nodes cluster without access to the join command, and the join command is not accessible until the cluster becomes active. Note that instances are destroyed before the cluster is updated, so it can never become active.

Maybe we could add a cluster-state wait as another resource or data source, so we can still do that at some point in the future even if we turn off state waiting in the cluster itself.

rancher2_cluster resource token output is missing token

The cluster registration token returned from rancher2_cluster lacks the actual registration token

(sanitized)

 {
            "annotations": {},
            "cluster_id": "c-",
            "command": "kubectl apply -f https://rancher/v3/import/f.yaml",
            "id": "c-:system",
            "insecure_command": "curl --insecure -sfL https://rancher/v3/import/f.yaml | kubectl apply -f -",
            "labels": {
              "cattle.io/creator": "norman"
            },
            "manifest_url": "https://rancher/v3/import/f.yaml",
            "name": "system",
            "node_command": "sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.2.4 --server https://rancher --token f",
            "windows_node_command": "PowerShell -Sta -NoLogo -NonInteractive -Command \"\u0026 {docker run --rm -v C:/:C:/host --isolation hyperv rancher/rancher-agent:v2.2.4-nanoserver-1803 -server https://rancher -token f; if($?){\u0026 c:/etc/rancher/run.ps1;}}\""
          }

Whereas in the REST response there's a token attribute that corresponds to the secret created by the YAML template defined above

For the case where I'm defining the YAML above as terraform resources it'd be useful to have access to the raw token to inject into the secret.

Expose kube_config as a data source

I have written a module for deploying an RKE cluster and a second one to AVI-fy the same cluster. AVI requires access to the master node via the API URL /k8/cluster/cluster-id. I can't use dependencies across modules, nor can I use it as one module, as the cluster-id becomes available before the kube_config is ready to be consumed by AVI. Terraform doesn't have a delay, so I'm forced to run the script in two legs. I think if I could read this as a data source from the AVI module, I could loop and keep checking the data source.

Maybe add a completed/valid property, unless Terraform can get their stuff working. IDK, just a customer trying to automate multiple deployments of clusters.

Issues creating vsphere credentials

Terraform v0.11.14

  • provider.rancher2 v1.1.0

I'm getting an error when trying to create vsphere credentials:

rancher2_cloud_credential.hkcreds: Creating...
driver: "" => ""
labels.%: "" => ""
name: "" => "hkcreds"
vsphere_credential_config.#: "0" => "1"
vsphere_credential_config.0.password: "" => ""
vsphere_credential_config.0.username: "" => "[email protected]"
vsphere_credential_config.0.vcenter: "" => "vcenter.hostname"
vsphere_credential_config.0.vcenter_port: "" => "443"

Error: Error applying plan:

1 error occurred:
* rancher2_cloud_credential.hkcreds: 1 error occurred:
* rancher2_cloud_credential.hkcreds: json: cannot unmarshal number into Go struct field vmwarevsphereCredentialConfig.vcenterPort of type string

The terraform config:

resource "rancher2_cloud_credential" "hkcreds" {
  name = "hkcreds"
  vsphere_credential_config {
    username = "[email protected]"
    password = "<removed>"
    vcenter = "vcenter.hostname"
    vcenter_port = "443"
  }
} 

I've also tried removing the port from the terraform config but I get the same error.

Default value for kubernetes_version in aks_config is no longer applicable

The version used as the default value of this parameter is no longer available:

az aks get-versions -l eastus -o table
KubernetesVersion    Upgrades
-------------------  -----------------------
1.13.5               None available
1.12.8               1.13.5
1.12.7               1.12.8, 1.13.5
1.11.9               1.12.7, 1.12.8
1.11.8               1.11.9, 1.12.7, 1.12.8
1.10.13              1.11.8, 1.11.9
1.10.12              1.10.13, 1.11.8, 1.11.9

Error in Rancher:

The value of parameter orchestratorProfile.OrchestratorVersion is invalid.

The solution is to change the default value in schema_cluster_aks_config.go, but this will probably become an issue again. The parameter should either be required, or the default value should be routinely updated (which will never work).

You should be able to bootstrap and create resources in the same run.

This pattern should work. It's pretty useless as a tool if I can't use it to immediately create resources on the Rancher server I just created.

provider "rancher2" {
  api_url   = "https://${local.name}.${local.domain}"
  bootstrap = true
}
resource "rancher2_bootstrap" "admin" {
  depends_on = [
    "null_resource.wait_for_rancher"
  ]
  password = "${local.rancher_password}"
}
resource "rancher2_cluster" "aws_rke" {
...
}

Currently this returns a "401" error.
rancher2_cluster.aws_rke: Bad response statusCode [401]. Status [401 Unauthorized]. Body: [message=must authenticate] from [https://cure53-testing-1.eng.rancher.space/v3]

Setting ignore_docker_version to false is not applied to cluster

Terraform Versions
Terraform v0.11.13
rancher2 v1.2.0

Terraform Configuration File

resource "rancher2_cluster" "rancher2-cluster" {
  name        = "${var.rancher_cluster_name}"

  cluster_auth_endpoint {
    enabled = true
  }

  rke_config {
    ignore_docker_version = false
    kubernetes_version    = "${var.kubernetes_version}"

    authentication {
      strategy = "x509|webhook"
    }

    ingress {
      provider = "nginx"
    }

    monitoring {
      provider = "metrics-server"
    }

    network {
      plugin = "canal"

      options {
        flannel_backend_type = "vxlan"
      }
    }

    services {
      etcd {
        backup_config {
          enabled        = true
          interval_hours = 12
          retention      = 6
        }

        creation  = "12h"
        retention = "72h"
        snapshot  = false

        extra_args {
          election-timeout   = "5000"
          heartbeat-interval = "500"
        }
      }

      kube_api {
        pod_security_policy     = false
        service_node_port_range = "30000-32767"
      }

      kubelet {
        fail_swap_on = false
      }
    }
  }
}

Expected Behavior
ignore_docker_version to be set to false

Actual Behavior
ignore_docker_version is set to true
The initial plan has false for the ignore_docker_version
And subsequent plans indicate the needed change rke_config.0.ignore_docker_version: "true" => "false" and the apply indicates a successful change. But checking the config in Rancher still has ignore_docker_version: true.

Additional Context
Rancher - v2.2.4
Kubernetes - v1.13.5-rancher1-2

Is there a "local user" resource?

Is there a way to create local users?

I need a way to create local "API" users for automation. These should not be tied to any external auth provider.
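For reference, later provider releases include a rancher2_user resource that covers this; a minimal sketch (names, password, and the chosen built-in global role are illustrative) pairs it with a rancher2_global_role_binding so the account can use the API:

resource "rancher2_user" "automation" {
  name     = "Automation service account"
  username = "automation"
  password = "replace-with-a-strong-password"
  enabled  = true
}

resource "rancher2_global_role_binding" "automation_user_base" {
  name           = "automation-user-base"
  global_role_id = "user-base"   # illustrative built-in global role
  user_id        = "${rancher2_user.automation.id}"
}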

Invalid default values for Openstack Cloud Provider load balancer monitor

When using this provider to create a Rancher-launched cluster in OpenStack, the default values that are set for the load_balancer monitor_delay and monitor_timeout are invalid. Specifically, these values are being set as integers with defaults of "60" and "30" respectively. These DO NOT work, as they should be duration strings with a time unit, e.g. "1m" and "30s".

Related issue in main rancher repo:
rancher/rancher#14577

NodeTemplate vSphere boot2dockerUrl argument unknown

Hi,

I seem to have encountered an issue using the rancher2 provider plugin (v1.0 & v1.1) with Terraform v0.11.13: I am unable to set the boot2dockerUrl key to a defined string (screenshot omitted).
I get this error in return (screenshot omitted).

If someone has any idea of why or if it's just not supported yet please let me know.
Thanks

node_pool name is being set to the hostname_prefix

When creating a rancher2_node_pool the name that appears in the UI for the node_pool is the value from hostname_prefix rather than the value from the name field.

Example:

resource "rancher2_node_pool" "worker_node_pool" {
cluster_id = "${rancher2_cluster.cluster.id}"
name = "worker-node-pool"
hostname_prefix = "worker-instance"
node_template_id = "${rancher2_node_template.nodetemplate.id}"
quantity = 1
control_plane = false
etcd = false
worker = true
}

Result:

The UI shows:

Pool: worker-instance

Expected:

Pool: worker-node-pool

Rancher Cluster delivering outdated kube_config

When creating a new cluster resource, and setting the cluster_auth_endpoint to enabled = true, the kube_config generated doesn't reflect the authorized endpoints immediately. Instead, I have to refresh the resource in order to get the expanded kube_config that includes the endpoints.

enable_cluster_monitoring: true

Is there any way to set the enable_cluster_monitoring setting to true in an RKE cluster with the provider? I see the monitoring section in the RKE config, but that deals with the metrics-server and doesn't look like it enables the Prometheus integration that this setting does.
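For reference, later provider releases expose an enable_cluster_monitoring argument directly on rancher2_cluster, separate from the rke_config monitoring block; a minimal sketch with placeholder names:

resource "rancher2_cluster" "example" {
  name                      = "example"
  enable_cluster_monitoring = true

  rke_config {
    network {
      plugin = "canal"
    }
  }
}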

cluster_auth_endpoint key invalid under rancher2_cluster

Hello,

I'm having issues trying to enable the cluster_auth_endpoint argument under the rancher2_cluster resource. Based on what is listed in the doc, it appears to be in the right place.

Here's a snippet of the code:

resource "rancher2_cluster" "k8s_rancher_cluster" {
  name = "${var.cluster_name}"
  cluster_auth_endpoint {
    enabled = true,
    fqdn = "${var.api_name}"
  }
...

And here's the output I'm receiving:

Error: rancher2_cluster.k8s_rancher_cluster: : invalid or unknown key: cluster_auth_endpoint

Here are my Terraform and plugin versions:

terraform -version
Terraform v0.11.13
+ provider.aws v1.44.0
+ provider.rancher2 v1.3.0
+ provider.template v1.0.0
+ provider.vault v2.1.0

Rancher internal annotations and labels should not be considered "changes"

After creating a rancher2_namespace resource, when I go to apply again, it detects changes that were made by Rancher and wants to remove those changes:

  ~ rancher2_namespace.cert_manager
      annotations.%:                                         "3" => "1"
      annotations.cattle.io/status:                          "{\"Conditions\":[{\"Type\":\"InitialRolesPopulated\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2019-06-04T20:34:00Z\"},{\"Type\":\"ResourceQuotaInit\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2019-06-04T20:33:59Z\"}]}" => ""
      annotations.lifecycle.cattle.io/create.namespace-auth: "true" => ""
      labels.%:                                              "3" => "1"
      labels.cattle.io/creator:                              "norman" => ""
      labels.field.cattle.io/projectId:                      "p-6js9c" => ""

The resource definition:

resource "rancher2_namespace" "cert_manager" {
  name       = "cert-manager"
  project_id = "${local.rancher_system_project}"

  annotations {
    "iam.amazonaws.com/permitted" = "${var.cluster}-cert-manager.*"
  }

  labels {
    "certmanager.k8s.io/disable-validation" = "true"
  }
}

Add data sources for several rancher2_* resources

  • rancher2_catalog
  • rancher2_cloud_credential
  • rancher2_cluster
  • rancher2_cluster_driver
  • rancher2_cluster_logging
  • rancher2_cluster_role_template_binding
  • rancher2_etcd_backup
  • rancher2_global_role_binding
  • rancher2_namespace
  • rancher2_node_driver
  • rancher2_node_pool
  • rancher2_project
  • rancher2_project_logging
  • rancher2_project_role_template_binding
  • rancher2_registry
  • rancher2_user

It seems that the suggested approach to querying existing resources is to terraform import them into Terraform state by their ID. I think there are use cases, such as node pools, cloud credentials and projects, where we'd want to query them by name or other attributes, if applicable.
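To illustrate the kind of lookup being requested (later provider releases added several of these data sources), a sketch with hypothetical names:

data "rancher2_cluster" "prod" {
  name = "prod-cluster"
}

data "rancher2_project" "default" {
  cluster_id = "${data.rancher2_cluster.prod.id}"
  name       = "Default"
}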

Unable to authenticate with the Rancher API after new v1.4.0 release

Hi,

I am not able to authenticate with the Rancher API after the latest release.

Error: Bad response statusCode [401]. Status [401 Unauthorized]. Body: [message=must authenticate] from [https://xyz.rancher.com/v3]

The TF code is the same as it was previously with v1.3.0, and even the same as when I built my own plugin binary from this repository. The latest commit at the time of building it was:

commit a37273a8923ce5b6c0916f62699ae94b8b66cef9 (HEAD -> master)
Merge: ea12d08 203e4cf
Author: Raúl Sánchez <[email protected]>
Date:   Sun Aug 4 20:53:43 2019 +0200

    Merge pull request #76 from rawmind0/customca

    Added custom_ca argument on etcd s3_backup_config on rancher2_cluster and rancher2_etcd_backup resources

Clearly, something has changed in the latest release which is not allowing me to authenticate with the Rancher API.

The TF code, which was working with the earlier release, is:

resource "random_string" "rancher_cloud_cred_random" {
  count            = var.existing_vpc ? 1 : 0
  upper            = false
  length           = 8
  special          = false
  override_special = "/@\" "
}

# Provider config
provider "rancher2" {
  api_url    = var.rancher_api_url
  access_key = var.rancher_access_key
  secret_key = var.rancher_secret_key
}

resource "rancher2_cloud_credential" "test" {
  count            = var.existing_vpc ? 1 : 0
  name             = "cc-${random_string.rancher_cloud_cred_random[count.index].result}"

  amazonec2_credential_config {
    access_key = var.AWS_ACCESS_KEY_ID
    secret_key = var.AWS_SECRET_ACCESS_KEY
  }
}

This code works just fine if I manually copy my locally built binary instead of the latest release.
