
terraform-provider-minikube's Introduction

terraform-provider-minikube


A terraform provider for minikube!

The goal of this project is to allow developers to create minikube clusters and integrate them with common Kubernetes terraform providers such as hashicorp/kubernetes and hashicorp/helm, all within the comfort of Minikube!

You can learn more about how to use the provider at https://registry.terraform.io/providers/scott-the-programmer/minikube/latest/docs

Installing your preferred driver

minikube start --vm=true --driver=hyperkit --download-only
minikube start --vm=true --driver=hyperv --download-only
minikube start --driver=docker --download-only

Some drivers require a bit of prerequisite setup, so it's best to visit https://minikube.sigs.k8s.io/docs/drivers/ first

Usage

provider "minikube" {
  kubernetes_version = "v1.30.0"
}

resource "minikube_cluster" "cluster" {
  vm      = true
  driver  = "hyperkit"
  cni     = "bridge"
  addons  = [
    "dashboard",
    "default-storageclass",
    "ingress",
    "storage-provisioner"
  ]
}

You can use minikube to verify the cluster is up & running

> minikube profile list

|----------------------------------------|-----------|---------|---------------|------|---------|---------|-------|
|                Profile                 | VM Driver | Runtime |      IP       | Port | Version | Status  | Nodes |
|----------------------------------------|-----------|---------|---------------|------|---------|---------|-------|
| terraform-provider-minikube            | hyperkit  | docker  | 192.168.64.42 | 8443 | v1.26.3 | Running |     1 |
|----------------------------------------|-----------|---------|---------------|------|---------|---------|-------|

Outputs

In order to integrate the minikube provider with other k8s providers, you can reference the following outputs:

  • client_certificate (string, sensitive) client certificate used in cluster
  • client_key (string, sensitive) client key for cluster
  • cluster_ca_certificate (string, sensitive) certificate authority for cluster
  • host (string) the host name for the cluster

These outputs are consistent across all supported minikube cluster types.

i.e.

provider "kubernetes" {
  host = minikube_cluster.cluster.host

  client_certificate     = minikube_cluster.cluster.client_certificate
  client_key             = minikube_cluster.cluster.client_key
  cluster_ca_certificate = minikube_cluster.cluster.cluster_ca_certificate
}
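
The same outputs can be fed to the helm provider in exactly the same way; a minimal sketch using the attributes listed above:

provider "helm" {
  kubernetes {
    host = minikube_cluster.cluster.host

    client_certificate     = minikube_cluster.cluster.client_certificate
    client_key             = minikube_cluster.cluster.client_key
    cluster_ca_certificate = minikube_cluster.cluster.cluster_ca_certificate
  }
}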

Want to help out?

See the contributing doc if you wish to get into the details of this terraform minikube provider!

terraform-provider-minikube's People

Contributors

caerulescens, dependabot[bot], falcosuessgott, harmonicoscillator, kwallner, pehlicd, robert-zipco, scott-the-programmer


terraform-provider-minikube's Issues

Terraform apply always fails with apiServer.certSANs: Invalid value: ""

Trying to create a simple minikube_cluster resource with terraform and terraform-provider-minikube fails with the following errors:

│ Error: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 3
│ stdout:
│ 
│ stderr:
│ W0320 15:53:06.074140    6026 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
│ apiServer.certSANs: Invalid value: "": altname is not a valid IP address, DNS label or a DNS label with subdomain wildcards: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'); a wildcard DNS-1123 subdomain must start with '*.', followed by a valid DNS subdomain, which must consist of lower case alphanumeric characters, '-' or '.' and end with an alphanumeric character (e.g. '*.example.com', regex used for validation is '\*\.[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
│ To see the stack trace of this error execute with --v=5 or higher
│ 
│ 
│   with module.minikube_cluster.minikube_cluster.periscope,
│   on cluster/minikube.tf line 5, in resource "minikube_cluster" "periscope":
│    5: resource "minikube_cluster" "periscope" {
│

Looking at the root cause, it seems that with the provider the generated kubeadm config contains an apiServer.certSANs entry with an empty string:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", ""]

The "" string entry is invalid.

I looked into the implementation, and it seems to use the minikube library instead of simply shelling out (e.g. via os/exec) to start a local minikube. This way we are missing a lot of the default checks and runtime overrides done by the minikube CLI.

Using docker driver with mount flag set forces replacement

It looks like if you pass "mount = true", the value is not persisted in the state, so it always forces a replacement.
This means you can't have a stack with multiple resources (like helm_release) that depend on the cluster: the cluster will be destroyed and recreated, but the releases are not reinstalled since at plan time they still appear installed.

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "minikube"
  addons = [
    "default-storageclass",
    "storage-provisioner"
  ]
  mount = true
  mount_string = "/home/amne:/home/amne"
  network = "minikube"
}

Running terraform plan will output this (snipped):
~ mount = false -> true # forces replacement
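
Until the attribute is persisted correctly, one possible stop-gap (a sketch, not a fix for the provider) is to have Terraform ignore drift on the flag so that dependent resources are not destroyed, accepting that the mount itself may need to be re-established out of band:

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "minikube"
  addons = [
    "default-storageclass",
    "storage-provisioner"
  ]
  mount        = true
  mount_string = "/home/amne:/home/amne"
  network      = "minikube"

  # Workaround only: ignore the attribute the provider fails to persist
  lifecycle {
    ignore_changes = [mount]
  }
}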

[Feature] resource_cluster data source

To be close to feature complete, we should implement the ability to reference an existing minikube cluster as a data source

The goal is to allow users to reference previously created clusters (either created via the cli, or through a separate terraform stack) by specifying something like

data "minikube_cluster" "some_cluster" {
    cluster_name = "some_existing_cluster_name"
}

provider "kubernetes" {
  host = data.minikube_cluster.some_cluster.host

  client_certificate     = data.minikube_cluster.some_cluster.client_certificate
  client_key             = data.minikube_cluster.some_cluster.client_key
  cluster_ca_certificate = data.minikube_cluster.some_cluster.cluster_ca_certificate
}

Inconsistent with minikube CLI: pods can't reach each other

Hello,

I'm trying to use this provider, but I'm not having success getting the pods to access each other. This does not happen when using the minikube CLI.

With this provider

Using this provider, the unleash pod receives the following error when accessing the unleash-postgresql pod: getaddrinfo EAI_AGAIN unleash-postgresql.

// main.tf
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
    minikube = {
      source = "scott-the-programmer/minikube"
    }
  }
}

// minikube
provider "minikube" {
  kubernetes_version = "v1.26.3"
}

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "minikube-cluster-docker"
  addons = [
    "dashboard",
    "default-storageclass",
    "storage-provisioner",
    "ingress",
    "ingress-dns",
  ]
}

// Kubernetes
provider "kubernetes" {
  host = minikube_cluster.docker.host

  client_certificate     = minikube_cluster.docker.client_certificate
  client_key             = minikube_cluster.docker.client_key
  cluster_ca_certificate = minikube_cluster.docker.cluster_ca_certificate
}

resource "kubernetes_namespace" "unleash" {
  metadata {
    name = "unleash"
  }
}

// Helm
provider "helm" {
  kubernetes {
    host = minikube_cluster.docker.host

    client_certificate     = minikube_cluster.docker.client_certificate
    client_key             = minikube_cluster.docker.client_key
    cluster_ca_certificate = minikube_cluster.docker.cluster_ca_certificate
  }
}

// Helm Unleash
resource "helm_release" "unleash" {
  name       = "unleash"
  repository = "https://docs.getunleash.io/helm-charts"
  chart      = "unleash"
  namespace  = "unleash"

  set {
    name  = "secrets.INIT_CLIENT_API_TOKENS"
    value = "default:development.unleash-insecure-api-token"
  }
}

resource "helm_release" "unleash-edge" {
  name       = "unleash-edge"
  repository = "https://docs.getunleash.io/helm-charts"
  chart      = "unleash-edge"
  namespace  = "unleash"

  set {
    name  = "secrets.TOKENS"
    value = "default:development.unleash-insecure-api-token"
  }
}
Without this provider

Using minikube CLI, the problem doesn't happen:

minikube start --driver=docker --profile=minikube-docker
minikube addons enable ingress --profile=minikube-docker
minikube addons enable ingress-dns --profile=minikube-docker
// main.tf
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
    minikube = {
      source = "scott-the-programmer/minikube"
    }
  }
}

// Kubernetes
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube-docker"
}

resource "kubernetes_namespace" "unleash" {
  metadata {
    name = "unleash"
  }
}

// Helm
provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube-docker"
  }
}

// Helm Unleash
resource "helm_release" "unleash" {
  name       = "unleash"
  repository = "https://docs.getunleash.io/helm-charts"
  chart      = "unleash"
  namespace  = "unleash"

  set {
    name  = "secrets.INIT_CLIENT_API_TOKENS"
    value = "default:development.unleash-insecure-api-token"
  }
}

resource "helm_release" "unleash-edge" {
  name       = "unleash-edge"
  repository = "https://docs.getunleash.io/helm-charts"
  chart      = "unleash-edge"
  namespace  = "unleash"

  set {
    name  = "secrets.TOKENS"
    value = "default:development.unleash-insecure-api-token"
  }
}
To test `unleash` access
kubectl config --kubeconfig=$HOME/.kube/config use-context minikube-docker

kubectl port-forward --namespace unleash deploy/unleash-edge 3063

curl --location --request GET 'http://0.0.0.0:3063/api/client/features' --header 'Content-Type: application/json' --header 'Authorization: default:development.unleash-insecure-api-token' --data-raw ''

Perhaps the provider runs with other minikube options already configured.

Is there something missing from my provider configuration?

Generate schema_cluster.go from minikube start params

Description

Currently, schema_cluster.go is crafted by hand. When the version of minikube inevitably changes, this leads to drift between what we provide in terraform vs what is actually supported by the minikube api.

A good example of this is #31, where a typo led to kvm-based clusters failing to spin up.

Solution

I think there are two things we can do here:

A) run a lint/validation process to detect drift immediately based on the minikube version defined in version.go

B) introduce a new make target to generate schema_cluster.go whenever we bump version.go

fix: `cni=auto` does not work when `nodes>=2` with `docker`, `qemu2`, `kvm2` drivers

description

The error; I added "<service_name>" as a placeholder:

"transport: Error while dialing: dial tcp: lookup <service_name>: i/o timeout"

There seems to be an issue with the way that terraform-provider-minikube configures the CNI plugin when there are multiple nodes. I have been using the kvm2, qemu2, and docker drivers; my issue doesn't occur with minikube start --driver=<driver_name> --nodes=3 usage, with the exception of driver=qemu2 and network=builtin, which doesn't work upstream either (see below).

The main issue seems to be that containers cannot communicate after I install a helm chart where one pod communicates with another pod. I've identified that the containers can always communicate using minikube start --driver=<driver_name> --nodes=3, and they can never communicate when using terraform-provider-minikube with cni=auto. When using terraform-provider-minikube with the kvm2 driver, kindnet does start correctly, and the coredns records look correct too.

I've attached isolated audit logs for successful single and multi-node minikube start runs with docker and kvm2 drivers.

versions

minikube

$ minikube version
minikube version: v1.32.0
commit: 8220a6eb95f0a4d75f7f2d7b14cef975f050512d

docker

$ docker version
Client: Docker Engine - Community
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:08:02 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.10
  Git commit:       311b9ff
  Built:            Thu Oct 26 09:08:02 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.27
  GitCommit:        a1496014c916f9e62104b33d1bb5bd03b0858e59
 runc:
  Version:          1.1.11
  GitCommit:        v1.1.11-0-g4bccb38
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

os

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 12 (bookworm)
Release:	12
Codename:	bookworm

libvirt

$ virsh version 
Compiled against library: libvirt 9.0.0
Using library: libvirt 9.0.0
Using API: QEMU 9.0.0
Running hypervisor: QEMU 7.2.7

Cases causing the issue

To be clear, nodes=1 always works for minikube start usage and terraform-provider-minikube usage.

Configure driver=docker + nodes=3 + cni=auto

Logs from minikube start usage:
logs-docker-single-node-success.txt
logs-docker-multi-node-success.txt

resource "minikube_cluster" "default" {
  cluster_name        = "minikube"
  driver              = "docker"
  nodes               = 3
  cni                 = "auto"
}

Configure driver=kvm2 + nodes=3 + cni=auto

Logs from minikube start usage:
logs-kvm2-single-node-success.txt
logs-kvm2-multi-node-success.txt

resource "minikube_cluster" "default" {
  cluster_name        = "minikube"
  driver              = "kvm2"
  nodes               = 3
  cni                 = "auto"
}

Configure driver=qemu2 + nodes=3 + cni=auto + network=builtin

This doesn't seem to work upstream either, so I'm not concerned with this case:

resource "minikube_cluster" "default" {
  cluster_name        = "minikube"
  driver              = "qemu2"
  nodes               = 3
  cni                 = "auto"
  network             = "builtin"
}

Storage provisioner seems to get enabled but doesn't provision storage in apply

The storage provisioner now seems to get enabled (after the previous issue was fixed), but when a PVC is applied afterwards, the PVC is stuck in a Pending state.

Example tf:

resource "minikube_cluster" "docker" {
  cluster_name = "${var.prefix}minikube"

  driver              = "docker"
  auto_update_drivers = true
  container_runtime   = "docker"

  cpus   = 4
  memory = "16384mb"

  cni = "calico"

  wait = ["all"]

  addons = [
    "dashboard",
    "default-storageclass",
    "storage-provisioner",
    "ingress",
    "ingress-dns",
    "istio",
    "istio-provisioner",
    "logviewer",
    "metrics-server"
  ]
}
resource "kubernetes_namespace" "ns" {
  metadata {
    name = "${var.prefix}backstage"
  }
}
resource "kubernetes_persistent_volume_claim" "pg_pvc" {
  metadata {
    name      = "${var.prefix}postgres"
    namespace = local.namespace
    labels = {
      type = "local"
    }
  }
  spec {
    storage_class_name = "standard"
    access_modes = [
      "ReadWriteOnce"
    ]
    resources {
      requests = {
        storage = "2G"
      }
    }
  }
}

Same as before though, this does work if you add a provisioner at the end and use local-exec with the minikube CLI to enable the storage-provisioner addon, as sketched below.
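
A sketch of that local-exec workaround, assuming the minikube CLI is available on PATH and using a null_resource (requires the hashicorp/null provider):

resource "null_resource" "enable_storage_provisioner" {
  # Workaround: enable the addon through the CLI once the cluster exists
  provisioner "local-exec" {
    command = "minikube addons enable storage-provisioner -p ${minikube_cluster.docker.cluster_name}"
  }

  depends_on = [minikube_cluster.docker]
}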

fix: `docker` driver does not work when `nodes>=2`

This is a continuation of #131 for fixing docker driver multi-node usage. See logs below.


There is still an issue with the docker driver in v0.3.9; both of the following configurations need to work for this issue to be solved:

Configuration using cni=auto:

resource "minikube_cluster" "default" {
  cluster_name        = "dev-local-docker"
  driver              = "docker"
  nodes               = 3
  cni                 = "auto"
}

Configuration without cni=auto (logged below):

resource "minikube_cluster" "default" {
  cluster_name        = "dev-local-docker"
  driver              = "docker"
  nodes               = 3
}

Logs:

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of scott-the-programmer/minikube from the dependency lock file
- Using previously-installed scott-the-programmer/minikube v0.3.9

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # minikube_cluster.default will be created
  + resource "minikube_cluster" "default" {
      + addons                     = [
          + "default-storageclass",
          + "metrics-server",
          + "storage-provisioner",
        ]
      + apiserver_ips              = (known after apply)
      + apiserver_name             = "minikubeCA"
      + apiserver_names            = (known after apply)
      + apiserver_port             = 8443
      + auto_pause_interval        = 1
      + auto_update_drivers        = true
      + base_image                 = "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0"
      + cache_images               = true
      + cert_expiration            = 1576800
      + client_certificate         = (sensitive value)
      + client_key                 = (sensitive value)
      + cluster_ca_certificate     = (sensitive value)
      + cluster_name               = "dev-local-docker"
      + container_runtime          = "docker"
      + cpus                       = 8
      + delete_on_failure          = false
      + disable_driver_mounts      = false
      + disable_metrics            = false
      + disable_optimizations      = false
      + disk_size                  = "32768mb"
      + dns_domain                 = "cluster.local"
      + dns_proxy                  = false
      + download_only              = false
      + driver                     = "docker"
      + dry_run                    = false
      + embed_certs                = false
      + enable_default_cni         = false
      + extra_disks                = 0
      + force                      = false
      + force_systemd              = false
      + host                       = (known after apply)
      + host_dns_resolver          = true
      + host_only_cidr             = "192.168.59.1/24"
      + host_only_nic_type         = "virtio"
      + hyperkit_vsock_ports       = (known after apply)
      + hyperv_use_external_switch = false
      + id                         = (known after apply)
      + insecure_registry          = (known after apply)
      + install_addons             = true
      + interactive                = true
      + iso_url                    = (known after apply)
      + keep_context               = false
      + kvm_gpu                    = false
      + kvm_hidden                 = false
      + kvm_network                = "default"
      + kvm_numa_count             = 1
      + kvm_qemu_uri               = "qemu:///system"
      + memory                     = "8192mb"
      + mount                      = false
      + mount_9p_version           = "9p2000.L"
      + mount_gid                  = "docker"
      + mount_msize                = 262144
      + mount_port                 = 0
      + mount_string               = "/home:/minikube-host"
      + mount_type                 = "9p"
      + mount_uid                  = "docker"
      + namespace                  = "default"
      + nat_nic_type               = "virtio"
      + native_ssh                 = true
      + nfs_share                  = (known after apply)
      + nfs_shares_root            = "/nfsshares"
      + no_kubernetes              = false
      + no_vtx_check               = false
      + nodes                      = 3
      + ports                      = (known after apply)
      + preload                    = true
      + registry_mirror            = (known after apply)
      + service_cluster_ip_range   = "10.96.0.0/12"
      + ssh_port                   = 22
      + ssh_user                   = "root"
      + vm                         = false
      + wait_timeout               = 6
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + client_certificate     = (sensitive value)
  + client_key             = (sensitive value)
  + cluster_ca_certificate = (sensitive value)
  + host                   = (known after apply)
  + id                     = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

minikube_cluster.default: Creating...
minikube_cluster.default: Still creating... [10s elapsed]
minikube_cluster.default: Still creating... [20s elapsed]
minikube_cluster.default: Still creating... [30s elapsed]
minikube_cluster.default: Still creating... [40s elapsed]
minikube_cluster.default: Still creating... [50s elapsed]
minikube_cluster.default: Still creating... [1m0s elapsed]
minikube_cluster.default: Still creating... [1m10s elapsed]
minikube_cluster.default: Still creating... [1m20s elapsed]
minikube_cluster.default: Still creating... [1m30s elapsed]
minikube_cluster.default: Still creating... [1m40s elapsed]
minikube_cluster.default: Still creating... [1m50s elapsed]
minikube_cluster.default: Still creating... [2m0s elapsed]
minikube_cluster.default: Still creating... [2m10s elapsed]
minikube_cluster.default: Still creating... [2m20s elapsed]
minikube_cluster.default: Still creating... [2m30s elapsed]
minikube_cluster.default: Still creating... [2m40s elapsed]
minikube_cluster.default: Still creating... [2m50s elapsed]
minikube_cluster.default: Still creating... [3m0s elapsed]
minikube_cluster.default: Still creating... [3m10s elapsed]
minikube_cluster.default: Still creating... [3m20s elapsed]
minikube_cluster.default: Still creating... [3m30s elapsed]
minikube_cluster.default: Still creating... [3m40s elapsed]
minikube_cluster.default: Still creating... [3m50s elapsed]
minikube_cluster.default: Still creating... [4m0s elapsed]
minikube_cluster.default: Still creating... [4m10s elapsed]
minikube_cluster.default: Still creating... [4m20s elapsed]
minikube_cluster.default: Still creating... [4m30s elapsed]
minikube_cluster.default: Still creating... [4m40s elapsed]
minikube_cluster.default: Still creating... [4m50s elapsed]
╷
│ Error: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 917zc4.932xquy3qc0a9pet --discovery-token-ca-cert-hash sha256:3d1b312f8cca25fc2b6360a0e7a1f38804a4eb1c280a749d9107ab72d0431b63 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=dev-local-docker-m02": Process exited with status 1
│ stdout:
│ [preflight] Running pre-flight checks
│ [preflight] The system verification failed. Printing the output from the verification:
│ KERNEL_VERSION: 6.1.0-16-amd64
│ OS: Linux
│ CGROUPS_CPU: enabled
│ CGROUPS_CPUSET: enabled
│ CGROUPS_DEVICES: enabled
│ CGROUPS_FREEZER: enabled
│ CGROUPS_MEMORY: enabled
│ CGROUPS_PIDS: enabled
│ CGROUPS_HUGETLB: enabled
│ CGROUPS_IO: enabled
│ [preflight] Reading configuration from the cluster...
│ [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
│ [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
│ [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
│ [kubelet-start] Starting the kubelet
│ [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
│ [kubelet-check] Initial timeout of 40s passed.
│ 
│ stderr:
│ W0204 00:38:30.046049    2382 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
│       [WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
│       [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
│       [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.1.0-16-amd64\n", err: exit status 1
│       [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
│       [WARNING Port-10250]: Port 10250 is in use
│       [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
│ error execution phase kubelet-start: error uploading crisocket: Unauthorized
│ To see the stack trace of this error execute with --v=5 or higher
│ 
│ 
│   with minikube_cluster.default,
│   on main.tf line 10, in resource "minikube_cluster" "default":
│   10: resource "minikube_cluster" "default" {
│ 
╵
...

docs: provide kvm2 driver configuration instructions for debian linux

The operating system I use for development is always stable Debian, so I actually know the exact steps to configure libvirt with minikube on Debian. I think it would be a good idea to remove the "living dangerously" section from the readme and just provide the actual steps to configure libvirt to work with minikube. I think the idea of danger might make fewer people use the provider as well. Maybe make a note about bugged minikube states from partial initialization; the solution is rm ~/.minikube to reset minikube's state.

Storage provisioner doesn't get enabled

Trying to run a minikube cluster and enable the storage-provisioner addon with the block below, but the addon doesn't get enabled:


resource "minikube_cluster" "docker" {
  cluster_name = "minikube"

  driver              = "docker"
  auto_update_drivers = true
  container_runtime   = "docker"

  cpus   = 4
  memory = "16384mb"

  cni = "calico"

  wait = ["all"]

  addons = [
    "dashboard",
    "default-storageclass",
    "storage-provisioner",
    "ingress",
    "ingress-dns",
    "istio",
    "istio-provisioner",
    "logviewer",
    "metrics-server"
  ]
}

provider base image does not correspond to the minikube default image

Describe the bug

provider base image does not correspond to the minikube default image

To Reproduce

minikube version: v1.31.2
commit: fd7ecd9c4599bef9f04c0986c4a0187f98a4396
terraform {
  required_version = ">= 1.0"

  required_providers {
    minikube = {
      source  = "scott-the-programmer/minikube"
      version = ">= 0.3.4"
    }
  }
}

provider "minikube" {
  kubernetes_version = "v1.27.3"
}

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "docker"
  cpus         = 4
  memory       = "8192mb"
  cni          = "bridge"
  addons = [
    "ingress",
    "default-storageclass",
    "storage-provisioner"
  ]
}

tf apply ->

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # minikube_cluster.docker will be created
  + resource "minikube_cluster" "docker" {
      + addons                     = [
          + "default-storageclass",
          + "ingress",
          + "storage-provisioner",
...
      + base_image                 = "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631"
...

tf apply again ->

minikube_cluster.docker: Refreshing state... [id=docker]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # minikube_cluster.docker must be replaced
-/+ resource "minikube_cluster" "docker" {
      ~ apiserver_ips              = [] -> (known after apply)
      ~ apiserver_names            = [
          - "minikubeCA",
        ] -> (known after apply)
      ~ base_image                 = "docker.io/kicbase/stable:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631" -> "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631" # forces replacement
      ~ client_certificate         = (sensitive value)
      ~ client_key                 = (sensitive value)
      ~ cluster_ca_certificate     = (sensitive value)
      ~ host                       = "https://192.168.49.2:8443" -> (known after apply)
      ~ hyperkit_vsock_ports       = [] -> (known after apply)
      ~ id                         = "docker" -> (known after apply)
      ~ insecure_registry          = [] -> (known after apply)
      ~ iso_url                    = [
          - "https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso",
        ] -> (known after apply)
      ~ nfs_share                  = [] -> (known after apply)
      ~ ports                      = [] -> (known after apply)
      ~ registry_mirror            = [] -> (known after apply)
        # (58 unchanged attributes hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Expected behavior

No changes compared to the first tf run.
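
Until the provider normalizes the image reference, one possible workaround may be to pin base_image to the value minikube actually records (taken from the diff above) so the refresh no longer reports a change:

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "docker"
  cpus         = 4
  memory       = "8192mb"
  cni          = "bridge"

  # Pin to the reference minikube stores to avoid the perpetual replacement
  base_image = "docker.io/kicbase/stable:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631"

  addons = [
    "ingress",
    "default-storageclass",
    "storage-provisioner"
  ]
}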

Addon ordering causes TF Update

Something I noticed while testing out PVC resources - i.e.

  # minikube_cluster.docker will be updated in-place
  ~ resource "minikube_cluster" "docker" {
      ~ addons                     = [
            # (1 unchanged element hidden)
            "default-storageclass",
          + "storage-provisioner",
            "ingress",
            # (4 unchanged elements hidden)
            "metrics-server",
          - "storage-provisioner",
        ]
        id                         = "minikube"
        # (72 unchanged attributes hidden)
    }

This isn't a blocker by any means, as the update will essentially be a no-op

Provisioning minikube with this terraform provider results in cluster with broken network

I can't seem to get DNS to work inside the cluster if I install minikube using this provider.

main.tf:

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "minikube"
  addons = [
    "default-storageclass",
    "storage-provisioner"
  ]
  mount = true
  mount_string = "/home/amne:/home/amne"
  network = "minikube"
}

Once applied I try to run:

$ kubectl run -it --rm test-nginx-svc --image=nginx  -- bash
If you don't see a command prompt, try pressing enter.
root@test-nginx-svc:/# curl -v https://kubernetes:443/
* Could not resolve host: kubernetes
* Closing connection 0
curl: (6) Could not resolve host: kubernetes
root@test-nginx-svc:/#

As you can see it cannot resolve "kubernetes" but it should:

$ kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  16m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   16m

It looks like there are no endpoints for kube-dns so kubelet will start adding reject rules in iptables for it:

$ kubectl get endpoints -A
NAMESPACE       NAME                       ENDPOINTS                                                        AGE
default         kubernetes                 192.168.49.2:8443                                                11m
kube-system     k8s.io-minikube-hostpath   <none>                                                           11m
kube-system     kube-dns                                                                                    11m

If I manually start minikube:

$ minikube start --mount-string="$HOME:/home/amne" --mount --network=minikube --v=3
😄  minikube v1.30.1 on Ubuntu 22.04 (amd64)
    ▪ MINIKUBE_PROFILE=
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2900MB) ...
❗  Listening to 0.0.0.0 on external docker host 127.0.0.1. Please be advised
🐳  Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner

❗  /usr/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.26.3.
    ▪ Want kubectl v1.26.3? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Then I run a basic nginx image again:

$ kubectl run -it --rm test-nginx-svc --image=nginx  -- bash
If you don't see a command prompt, try pressing enter.
root@test-nginx-svc:/# curl -v https://kubernetes:443/
*   Trying 10.96.0.1:443...
* Connected to kubernetes (10.96.0.1) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
root@test-nginx-svc:/#

As you can see, it can resolve the service name without issues and the endpoints look fine:

$ kubectl get endpoints -A
NAMESPACE     NAME                       ENDPOINTS                                     AGE
default       kubernetes                 192.168.49.2:8443                             9m36s
kube-system   k8s.io-minikube-hostpath   <none>                                        8m50s
kube-system   kube-dns                   10.244.0.2:53,10.244.0.2:53,10.244.0.2:9153   9m22s

Typo in resource_cluster.go may cause failure of configured minikube mounts

I'm attempting to do some local testing with local mounts configured, and I could not get mounting to work when it is configured in resource "minikube_cluster" "docker".

My configuration snippet roughly looks like this:

main.tf

terraform {
  required_providers {
    minikube = {
      source  = "scott-the-programmer/minikube"
      version = "0.3.1"
    }
  }
}

provider "minikube" {
  kubernetes_version = "v1.27.3"
}

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "terraform-provider-minikube-acc-docker"
  addons = [
    "dashboard",
    "default-storageclass",
    "storage-provisioner",
  ]
  cpus = 4
  memory = "7168mb"
  mount = true
  mount_string = "<local_path>:/data"

}

I was able to get mounting to work by allowing terraform to create the minikube resource first and then manually using a minikube mount <local_path>:/data command afterwards.

I also noticed that when running terraform apply again after an initial successful run, without changing any configuration and while the minikube cluster was still running, terraform would see that 'mount' was false (despite my configuration, but apparently agreeing with the reality of no active mount) and try to recreate the resource.

  # minikube_cluster.docker must be replaced
-/+ resource "minikube_cluster" "docker" {
      ~ apiserver_ips              = [] -> (known after apply)
      ~ apiserver_names            = [
          - "minikubeCA",
        ] -> (known after apply)
      ~ client_certificate         = (sensitive value)
      ~ client_key                 = (sensitive value)
      ~ cluster_ca_certificate     = (sensitive value)
      ~ host                       = "https://127.0.0.1:55917" -> (known after apply)
      ~ hyperkit_vsock_ports       = [] -> (known after apply)
      ~ id                         = "terraform-provider-minikube-acc-docker" -> (known after apply)
      ~ insecure_registry          = [] -> (known after apply)
      ~ iso_url                    = [
          - "https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-arm64.iso",
        ] -> (known after apply)
      ~ mount                      = false -> true # forces replacement
      ~ nfs_share                  = [] -> (known after apply)
      ~ ports                      = [] -> (known after apply)
      ~ registry_mirror            = [] -> (known after apply)
        # (57 unchanged attributes hidden)
    }

I believe I found a possible culprit here, where it looks like the mount parameter is actually being set to the hyperv_use_external_switch value instead of the mount value.

`terraform apply` fails to start `minikube_cluster` when driver="kvm2"

Hey Scott!

Thanks for providing and maintaining this awesome package; I think I found a typo in the source code that's causing an issue with starting the minikube cluster resource when using the KVM driver. See here.

os: Debian 11 (bullseye)
minikube: 1.26.1
libvirtd: 7.0.0

resource "minikube_cluster" "default" {
  cluster_name = "test"
  driver = "kvm2"
}
module.minikube-cluster.minikube_cluster.default: Creating...
╷
│ Error: Failed to start host: driver start: ensuring active networks: getting libvirt connection: connecting to libvirt socket: virError(Code=5, Domain=0, Message='no connection driver available for URI 'qemu' does not include a driver name')
│ 
│   with module.minikube-cluster.minikube_cluster.default,
│   on ../../modules/minikube-cluster/cluster.tf line 3, in resource "minikube_cluster" "default":
│    3: resource "minikube_cluster" "default" {
│ 
╵

It's complaining that there's no connection driver for the URI qemu; it seems that the URI is formatted incorrectly.


I regularly use minikube with the kvm driver, and I have no issue starting minikube from the command line:

$ minikube start --driver=kvm2
😄  minikube v1.26.1 on Debian 11.5
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

The URI used in the logs after the successful start:

I1117 13:30:24.834677  248739 driver.go:365] Setting default libvirt URI to qemu:///system

In the source code, the kvm_qemu_uri value is set to qemu by default, while its description reads ///system' The KVM QEMU connection URI. (kvm2 driver only). I think you intended the default value of kvm_qemu_uri to be qemu:///system, which would also explain the stray ///system' at the beginning of the description.
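
As a workaround while the default is wrong, the URI can presumably be overridden on the resource itself; a sketch using the kvm_qemu_uri attribute that already exists in the schema:

resource "minikube_cluster" "default" {
  cluster_name = "test"
  driver       = "kvm2"

  # Override the bad "qemu" default with the full libvirt connection URI
  kvm_qemu_uri = "qemu:///system"
}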

fix: `terraform-provider-minikube` fails to start using `qemu2` driver and `socket_vmnet` network

Using minikube start --driver=qemu2 --network=socket_vmnet works as expected, while terraform-provider-minikube does not start. The error is: Failed to start host: creating host: create: creating: : exec: no command, which refers to a missing binary I think; see here.

The bug is reproduced on macOS 13, arm64 using:

resource "minikube_cluster" "default" {
  cluster_name          = "minikube"
  driver                = "qemu2"
  network               = "socket_vmnet"
}

Would you be able to document what attributes this supports?

This is really great, I think it would be valuable to have on your main doc page for users: https://github.com/scott-the-programmer/terraform-provider-minikube/blob/main/examples/resources/minikube_cluster/resource.tf#L28

I also want to confirm: are the attributes driver-specific? That example uses minikube_cluster.docker.*; are those attributes only supported for the docker driver?

Thanks! I want to hook up the helm provider to the minikube cluster like this.

docs: document how-to use `terragrunt` with `terraform-provider-minikube`

terragrunt is an open source tool created by Gruntwork.io, used as a wrapper around terraform that provides features terraform cannot provide on its own. The goal of the documentation is to show another way to use the minikube, kubernetes, and helm providers together: it doesn't replace terraform usage; it extends it.

This is something that I would like to document for the open source community as I'm not really sure if anyone knows this is possible because I figured it out on my own using terraform-provider-minikube.

There are a lot of advantages:

  • terragrunt has dedicated support for opentofu, which fits with @scott-the-programmer's and Gruntwork.io's goals to bridge the industry licensing issue that's causing the migration (see #120)
  • The terraform project is downloaded using git, which encourages caching and reproducible init / plan / apply sequences through GitOps
  • You can use a common configuration for all terraform-provider-minikube usage, which keeps your HCL 100% DRY while providing 100% control over common settings between different drivers (docker, kvm2, qemu2, ...). This has a lot to do with how the project's structure is designed, which I would like to cover (see the include sketch after this list).
  • You can template the values within the provider block, meaning that kubernetes_version from terraform-provider-minikube can be templated:
locals {
  kubernetes_version = "v1.28.3"
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "minikube" {
  kubernetes_version = "${local.kubernetes_version}"
}
EOF
}
  • You can template the values with the remote_state block in a similar way:
remote_state {
  backend = "local"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    path = "${get_parent_terragrunt_dir()}/${path_relative_to_include()}/terraform.tfstate"
  }
}
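
For completeness, a child stack can then pull in that shared configuration with a plain include block; a sketch, assuming a conventional layout where the root terragrunt.hcl lives in a parent directory:

# e.g. live/docker/terragrunt.hcl (hypothetical path)
include "root" {
  path = find_in_parent_folders()
}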

tf state mismatch when using the "g" suffix for memory

Describe the bug

tf state mismatch when using the "g" suffix for memory

To Reproduce

minikube version: v1.31.2
commit: fd7ecd9c4599bef9f04c0986c4a0187f98a4396
terraform {
  required_version = ">= 1.0"

  required_providers {
    minikube = {
      source  = "scott-the-programmer/minikube"
      version = ">= 0.3.4"
    }
  }
}

provider "minikube" {
  kubernetes_version = "v1.27.3"
}

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "docker"
  cpus         = 4
  memory       = "8g"
  cni          = "bridge"
  addons = [
    "ingress",
    "default-storageclass",
    "storage-provisioner"
  ]
}

tf apply ->

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # minikube_cluster.docker will be created
  + resource "minikube_cluster" "docker" {
      + addons                     = [
          + "default-storageclass",
          + "ingress",
          + "storage-provisioner",
...
      + memory                     = "8g"
...

tf apply again ->

minikube_cluster.docker: Refreshing state... [id=docker]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # minikube_cluster.docker must be replaced
-/+ resource "minikube_cluster" "docker" {
      ~ apiserver_ips              = [] -> (known after apply)
      ~ apiserver_names            = [
          - "minikubeCA",
        ] -> (known after apply)
      ~ client_certificate         = (sensitive value)
      ~ client_key                 = (sensitive value)
      ~ cluster_ca_certificate     = (sensitive value)
      ~ host                       = "https://192.168.49.2:8443" -> (known after apply)
      ~ hyperkit_vsock_ports       = [] -> (known after apply)
      ~ id                         = "docker" -> (known after apply)
      ~ insecure_registry          = [] -> (known after apply)
      ~ iso_url                    = [
          - "https://github.com/kubernetes/minikube/releases/download/v1.31.0/minikube-v1.31.0-amd64.iso",
        ] -> (known after apply)
      ~ memory                     = "8192mb" -> "8g" # forces replacement
      ~ nfs_share                  = [] -> (known after apply)
      ~ ports                      = [] -> (known after apply)
      ~ registry_mirror            = [] -> (known after apply)
        # (58 unchanged attributes hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Expected behavior

No changes compared to the first tf run.
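
Until the provider normalizes memory units, a likely workaround is to express the value in the unit the state records, as the diff above suggests:

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "docker"
  cpus         = 4
  # "8192mb" is equivalent to "8g" but matches what the provider stores,
  # so a second apply no longer forces a replacement
  memory       = "8192mb"
  cni          = "bridge"
  addons = [
    "ingress",
    "default-storageclass",
    "storage-provisioner"
  ]
}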

Enable GPUs for the minikube docker driver

I'd like to enable NVIDIA GPUs for the docker minikube driver.

This should be possible by using the --gpus all flag, according to the official minikube docs:
minikube start --driver docker --container-runtime docker --gpus all

I can't find how to do this using the terraform provider.
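
For illustration only, the requested configuration might look like the following; note that the gpus attribute here is hypothetical and simply mirrors minikube start --gpus all, rather than documenting an existing provider argument:

resource "minikube_cluster" "docker" {
  driver            = "docker"
  container_runtime = "docker"

  # Hypothetical attribute mirroring `minikube start --gpus all`
  gpus = "all"
}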

The coredns container is endlessly restarting

Container logs:

root@minikube:/# docker logs -f c015cf083060
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:50302 - 14823 "HINFO IN 7953148733213368499.4453155829133784578. udp 57 false 512" - - 0 6.003067196s
[ERROR] plugin/errors: 2 7953148733213368499.4453155829133784578. HINFO: read udp 10.244.0.2:36315->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:42987 - 13272 "HINFO IN 7953148733213368499.4453155829133784578. udp 57 false 512" - - 0 6.002698336s
[ERROR] plugin/errors: 2 7953148733213368499.4453155829133784578. HINFO: read udp 10.244.0.2:50808->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:47092 - 59248 "HINFO IN 7953148733213368499.4453155829133784578. udp 57 false 512" - - 0 4.001190974s
[ERROR] plugin/errors: 2 7953148733213368499.4453155829133784578. HINFO: read udp 10.244.0.2:37685->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:45686 - 48752 "HINFO IN 7953148733213368499.4453155829133784578. udp 57 false 512" - - 0 2.001175282s
[ERROR] plugin/errors: 2 7953148733213368499.4453155829133784578. HINFO: read udp 10.244.0.2:40673->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:49354 - 65204 "HINFO IN 7953148733213368499.4453155829133784578. udp 57 false 512" - - 0 2.000775685s
[ERROR] plugin/errors: 2 7953148733213368499.4453155829133784578. HINFO: read udp 10.244.0.2:44471->192.168.49.1:53: i/o timeout
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s

Steps to reproduce:

terraform {
  required_providers {
    minikube = {
      source  = "scott-the-programmer/minikube"
      version = "0.3.7"
    }
  }
}

provider "minikube" {
  kubernetes_version = "v1.28.3"
}

resource "minikube_cluster" "cluster" {
  driver       = "docker"
  cluster_name = "minikube"
  addons = [
    "default-storageclass",
    "storage-provisioner",
    "dashboard",
    "ingress"
  ]
  wait = ["all"]
  host_dns_resolver = false
  memory = "16384mb"
}

What's interesting is that when deploying the cluster using minikube without terraform, everything works as expected.

$ minikube version
minikube version: v1.32.0
commit: 8220a6eb95f0a4d75f7f2d7b14cef975f050512d
$ /usr/local/bin/minikube start --profile minikube2 --driver=docker 
$ docker exec -ti minikube2 docker logs 8229446a5a47
.:53
[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:58269 - 9097 "HINFO IN 8133781860100051641.7404535210535734537. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01729004s

Terraform deployment stops working when enabling additional addons

The issue came up when I tried to enable the minikube addons storage-provisioner and/or csi-hostpath-driver.

I could reproduce the issue with examples/resources/minikube_cluster.

Setup:

  • Using examples/resources/minikube_cluster with following modifications:
    • Disabled (commented out) resource "minikube_cluster" "hyperkit" [...]
    • Renamed cluster: cluster_name = "minikube" ... maybe does not matter, but for completeness
  • Using terraform-provider-minikube version = "0.2.1" (also used 0.2.0)

Tests:

  • Was using addon storage-provisioner (also applies to at least csi-hostpath-driver)
Manually enabling addon
  1. Apply: terraform apply without errors/warnings
  2. Add Addon: minikube addons enable storage-provisioner without errors/warnings
  3. Run apply again: terraform apply

Step 3. fails with:

minikube_cluster.docker: Refreshing state... [id=minikube]
kubernetes_deployment.deployment: Refreshing state... [id=default/nginx-example]
╷
│ Error: Get "http://localhost/apis/apps/v1/namespaces/default/deployments/nginx-example": dial tcp 127.0.0.1:80: connect: connection refused
│ 
│   with kubernetes_deployment.deployment,
│   on resource.tf line 37, in resource "kubernetes_deployment" "deployment":
│   37: resource "kubernetes_deployment" "deployment" {
│ 

minikube status still states everything is ok.

Adding addon storage-provisioner to addons for resource "minikube_cluster" "docker"
  1. Apply: terraform apply without errors/warnings
  2. Run apply again: terraform apply

Step 2. fails with the same connection refused error as above.

For completeness: without the additional addons there are no issues.

Thank you for your great work. It really helps with k8s-learning, -experiments and -testing.

Failed to start host: can't create with that IP, address already in use

What happened

I am trying to create a cluster in minikube with 4 nodes, but unfortunately I always get the error "Failed to start host: can't create with that IP, address already in use". When I set 1 node everything works ok.

 Plan: 1 to add, 0 to change, 0 to destroy.
minikube_cluster.docker: Creating...
minikube_cluster.docker: Still creating... [10s elapsed]
minikube_cluster.docker: Still creating... [20s elapsed]
minikube_cluster.docker: Still creating... [30s elapsed]
minikube_cluster.docker: Still creating... [40s elapsed]
minikube_cluster.docker: Still creating... [50s elapsed]
minikube_cluster.docker: Still creating... [1m0s elapsed]
minikube_cluster.docker: Still creating... [1m10s elapsed]
minikube_cluster.docker: Still creating... [1m20s elapsed]
minikube_cluster.docker: Still creating... [1m30s elapsed]
minikube_cluster.docker: Still creating... [1m40s elapsed]
minikube_cluster.docker: Still creating... [1m50s elapsed]
minikube_cluster.docker: Still creating... [2m0s elapsed]
minikube_cluster.docker: Still creating... [2m10s elapsed]
minikube_cluster.docker: Still creating... [2m20s elapsed]
minikube_cluster.docker: Still creating... [2m30s elapsed]
minikube_cluster.docker: Still creating... [2m40s elapsed]
minikube_cluster.docker: Still creating... [2m50s elapsed]
minikube_cluster.docker: Still creating... [3m0s elapsed]
╷
│ Error: Failed to start host: can't create with that IP, address already in use
│ 
│   with minikube_cluster.docker,
│   on main.tf line 19, in resource "minikube_cluster" "docker":
│   19: resource "minikube_cluster" "docker" {

Software versions

minikube version

minikube version: v1.27.1
commit: fe869b5d4da11ba318eb84a3ac00f336411de7ba

docker version

Client: Docker Engine - Community
 Cloud integration: v1.0.31
 Version:           23.0.5
 API version:       1.42
 Go version:        go1.19.8
 Git commit:        bc4487a
 Built:             Wed Apr 26 16:17:45 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Desktop
 Engine:
  Version:          23.0.5
  API version:      1.42 (minimum version 1.12)
  Go version:       go1.19.8
  Git commit:       94d3ad6
  Built:            Wed Apr 26 16:17:45 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.20
  GitCommit:        2806fc1057397dbaeefbea0e4e17bddfbd388f38
 runc:
  Version:          1.1.5
  GitCommit:        v1.1.5-0-gf19387a
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Terraform version

Terraform v1.5.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/kubernetes v2.23.0
+ provider registry.terraform.io/scott-the-programmer/minikube v0.3.3

How to reproduce

terraform init
terraform plan
terraform apply
terraform {
  required_version = "~> 1.3"
  required_providers {
    minikube = {
      source  = "scott-the-programmer/minikube"
      version = "0.3.3"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23.0"
    }
  }
}

provider "minikube" {
  kubernetes_version = "v1.26.3"
}

resource "minikube_cluster" "docker" {
  driver       = "docker"
  cluster_name = "spot"
  nodes        = 4
  addons = [
    "dashboard",
    "ingress",
    "default-storageclass",
    "storage-provisioner"
  ]
}

provider "kubernetes" {
  host                   = minikube_cluster.docker.host
  client_certificate     = minikube_cluster.docker.client_certificate
  client_key             = minikube_cluster.docker.client_key
  cluster_ca_certificate = minikube_cluster.docker.cluster_ca_certificate
}

Note

Creating the cluster using the minikube CLI works well:
minikube start --nodes 4 -p multinode

minikube start --nodes 4 -p multinode --force
😄  [multinode] minikube v1.27.1 on Ubuntu 20.04 (amd64)
❗  minikube skips various validations when --force is supplied; this may lead to unexpected behavior
✨  Automatically selected the docker driver. Other choices: ssh, none
🛑  The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
💡  If you are running minikube within a VM, consider using --driver=none:
📘    https://minikube.sigs.k8s.io/docs/reference/drivers/none/
📌  Using Docker driver with root privileges
👍  Starting control plane node multinode in cluster multinode
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting worker node multinode-m02 in cluster multinode
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.76.2
🐳  Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    ▪ env NO_PROXY=192.168.76.2
🔎  Verifying Kubernetes components...

👍  Starting worker node multinode-m03 in cluster multinode
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.76.2,192.168.76.3
🐳  Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    ▪ env NO_PROXY=192.168.76.2
    ▪ env NO_PROXY=192.168.76.2,192.168.76.3
🔎  Verifying Kubernetes components...

👍  Starting worker node multinode-m04 in cluster multinode
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.76.2,192.168.76.3,192.168.76.4
🐳  Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    ▪ env NO_PROXY=192.168.76.2
    ▪ env NO_PROXY=192.168.76.2,192.168.76.3
    ▪ env NO_PROXY=192.168.76.2,192.168.76.3,192.168.76.4
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "multinode" cluster and "default" namespace by default

|-----------|-----------|---------|--------------|------|---------|---------|-------|--------|
|  Profile  | VM Driver | Runtime |      IP      | Port | Version | Status  | Nodes | Active |
|-----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| dev       | docker    | docker  | 192.168.58.2 | 8443 | v1.25.2 | Unknown |     1 |        |
| multinode | docker    | docker  | 192.168.76.2 | 8443 | v1.25.2 | Running |     4 |        |
| prod      | docker    | docker  | 192.168.67.2 | 8443 | v1.25.2 | Unknown |     1 |        |
|-----------|-----------|---------|--------------|------|---------|---------|-------|--------|

Does anyone know how to solve this?

Regards.
