coreos / terraform-aws-kubernetes
Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more
License: Apache License 2.0
The master version (https://github.com/coreos/terraform-aws-kubernetes/tree/e50ed698445430db6006d893dafe33f9b83a263a) of the module produced the following:
module.kubernetes.module.workers.aws_autoscaling_group.workers: Creation complete after 59s (ID: tectonic-testcluster-workers)
Error applying plan:
1 error(s) occurred:
* module.kubernetes.module.tectonic.template_dir.tectonic: 1 error(s) occurred:
* template_dir.tectonic: failed to render /home/alho/ws_terraform/kubernetes-tektonik/.terraform/modules/9427d33e7fb697ae705f99f3435b0b36/modules/tectonic/resources/manifests/cluster-config.yaml: 37:24: unknown variable accessed: kube_dns_service_ip
Terraform does not automatically rollback in the face of errors.
My config:
module "kubernetes" {
source = "git::ssh://[email protected]/coreos/terraform-aws-kubernetes.git?ref=master"
tectonic_admin_email = "[email protected]"
tectonic_admin_password = "secret"
tectonic_autoscaling_group_extra_tags = [ { key = "project", value = "slack", propagate_at_launch = true } ]
tectonic_aws_etcd_ec2_type = "t2.medium"
tectonic_aws_worker_ec2_type = "t2.medium"
tectonic_aws_master_ec2_type = "t2.medium"
tectonic_etcd_count = "0"
tectonic_master_count = "1"
tectonic_worker_count = "2"
tectonic_base_domain = "dev.aws.xz.org"
tectonic_cluster_name = "tectonic-testcluster"
tectonic_vanilla_k8s = true
tectonic_aws_external_master_subnet_ids = ["subnet-49af1521","subnet-de7cb2a4"]
tectonic_aws_external_vpc_id = "vpc-xzy"
tectonic_aws_external_worker_subnet_ids = ["subnet-xyz","subnet-xyzz"]
tectonic_aws_extra_tags = [ { key = "project", value = "slack"} ]
tectonic_aws_private_endpoints = true
tectonic_aws_region = "eu-central-1"
tectonic_aws_ssh_key = "xy"
}
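Since this config tracks the module at `ref=master`, one possible workaround (an assumption, not a confirmed fix) is to pin the module source to a known-good commit or tag from before the `kube_dns_service_ip` variable was removed, so the Tectonic manifest templates and the module's variables stay in sync. A sketch, where `<known-good-ref>` is a placeholder you would pick yourself:

```hcl
module "kubernetes" {
  # Pin to a specific commit or tag instead of master; "<known-good-ref>" is a
  # placeholder for a revision whose manifests still match the module variables.
  source = "git::ssh://[email protected]/coreos/terraform-aws-kubernetes.git?ref=<known-good-ref>"

  # ...remaining variables unchanged from the config above...
}
```

After changing the ref, re-run `terraform get -update` (or `terraform init`) so the cached module copy under .terraform/modules is refreshed.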
Error: Error applying plan:
1 error(s) occurred:
* module.kube_certs.output.id: At column 5, line 2: join: argument 1 should be type list, got type string in:
${sha1("
${join(" ",
local_file.apiserver_key.id,
local_file.apiserver_crt.id,
local_file.kube_ca_key.id,
local_file.kube_ca_crt.id,
local_file.kubelet_key.id,
local_file.kubelet_crt.id,
)}
")}
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
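In Terraform 0.10/0.11, `join()` takes exactly two arguments, a separator and a single list; passing the IDs as separate string arguments produces exactly this "argument 1 should be type list, got type string" error. Assuming that is the cause here, a sketch of the corrected output would wrap the IDs with the `list()` function:

```hcl
output "id" {
  # join(separator, list) -- the IDs must be collected into one list,
  # not passed as individual string arguments.
  value = "${sha1(join(" ", list(
    local_file.apiserver_key.id,
    local_file.apiserver_crt.id,
    local_file.kube_ca_key.id,
    local_file.kube_ca_crt.id,
    local_file.kubelet_key.id,
    local_file.kubelet_crt.id
  )))}"
}
```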
Currently running on MASTER:
Sun Dec 3 21:56:54 2017 +0100 fb5c6c7 (HEAD -> master, origin/master, origin/HEAD) bump master to: 0a22c73d39f67ba4bb99106a9e72322a47179736 [Lucas Serven]
terraform-aws-kubernetes/main.tf
Line 233 in fb5c6c7
The deploy kicked off fine, great work! This may be an issue on my end, but I thought I would confirm: using v0.11.1, when I try to destroy, it behaves like the apply, prompting for vars if they are not declared and refreshing state. I used a local tfstate and it seems to reflect the environment. Is it possible I have an issue there, or with some variable declarations? I should be able to destroy this as-is, yes?
I get the following error doing a `terraform init` from Cmder on Windows with Terraform 0.11.6.
λ terraform init
Initializing modules...
- module.container_linux
- module.vpc
- module.etcd
- module.ignition_masters
- module.masters
Error downloading modules: Error loading modules: module masters: Error parsing .terraform\modules\a6d55dc7db89e295b5d38b4ac42cff84\modules\aws\master-asg\variables-ignition.tf: At 1:1: expected: IDENT | STRING | ASSIGN | LBRACE got: PERIOD
As mentioned in #7, using this module and issuing a `terraform get` can take quite a long time to complete. This is due to there being multiple modules found within the https://github.com/coreos/tectonic-installer repository, which is fairly large; Terraform isn't very intelligent here and will download a separate copy of the whole repository in order to fetch each referenced module.
From experience, it took ~15 minutes to complete, and I ended up with this in my .terraform/modules/ directory after a `terraform get`:
$ du -csh .terraform/modules/*
116M .terraform/modules/0265c1acff7568f9479b3a4fa8cc6486
116M .terraform/modules/3aa94b5df05f822eb0850160aed4b4c2
0 .terraform/modules/47e34766793267a2ca0c92b0125f9306
116M .terraform/modules/4b606590b69ee3406e64b8eba3e06b93
116M .terraform/modules/6295ae1cc6a065a11da6a34a40c5d60c
116M .terraform/modules/67d8c51b048aaa7f7a915d2ad1b02350
116M .terraform/modules/6e104386e328625fc55743e4c1d71da2
116M .terraform/modules/6f0d7e10af33bc8da466f81c382c2b53
116M .terraform/modules/7091451e1cb93545cf9c1a48bfb7bde8
116M .terraform/modules/7e1bf06297d7ecd4edff2069216ad52b
116M .terraform/modules/a6b15ed8796db8e41e9bd3a6e64ded13
116M .terraform/modules/b566d7c9296e1b6728775296681fb99b
116M .terraform/modules/c106d093c23885656fc1913c6af0379a
116M .terraform/modules/c3319329e046819c25b064ff2d18bc05
116M .terraform/modules/e21b76f292d0e58c18041364caacd7a6
116M .terraform/modules/e4de431502698a9a78ee238d5736b454
116M .terraform/modules/f27e913576b996172f07b8307378d99e
4.0K .terraform/modules/module-subdir.json
1.9G total
That's ~ 16 copies of the same repository and about 2 GB of data.
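Until Terraform deduplicates module downloads, one workaround (a sketch, assuming you are willing to vendor the repository) is to clone tectonic-installer once and point module sources at relative paths, so `terraform get` copies from the local checkout instead of fetching the whole repository once per module:

```hcl
# Assumes tectonic-installer has been cloned next to this config, e.g.:
#   git clone https://github.com/coreos/tectonic-installer ../tectonic-installer
module "etcd" {
  # A relative path instead of a git:: source -- no network fetch per module.
  source = "../tectonic-installer/modules/aws/etcd"

  # ...module variables as before...
}
```

The trade-off is that you then track upstream updates manually by pulling the local clone, rather than via the `ref` in a git source.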
I have created a self-signed certificate as root CA using `openssl req -newkey rsa:2048 -nodes -keyout ca.key -x509 -days 365 -out ca.crt`. The verification of both the key (using `openssl rsa -in ca.key -check`) and the certificate (using `openssl x509 -text -noout -in ca.crt`) succeeds.
I'm trying to deploy a new Kubernetes cluster using this, with the above key/cert as CA. I'm using tectonic_1.7.9-tectonic.1, which comes with Terraform v0.10.7. My terraform.tfvars is as follows:
// (optional) Extra AWS tags to be applied to created autoscaling group resources.
// This is a list of maps having the keys `key`, `value` and `propagate_at_launch`.
//
// Example: `[ { key = "foo", value = "bar", propagate_at_launch = true } ]`
// tectonic_autoscaling_group_extra_tags = ""
// (optional) Unique name under which the Amazon S3 bucket will be created. The bucket name must start with a lowercase letter and is limited to 63 characters.
// The Tectonic Installer uses the bucket to store tectonic assets and kubeconfig.
// If a name is not provided, the installer will construct the name using `tectonic_cluster_name`, the current AWS region and `tectonic_base_domain`.
// tectonic_aws_assets_s3_bucket_name = ""
// Instance size for the etcd node(s). Example: `t2.medium`. Read the [etcd recommended hardware](https://coreos.com/etcd/docs/latest/op-guide/hardware.html) guide for best performance
tectonic_aws_etcd_ec2_type = "t2.medium"
// (optional) List of additional security group IDs for etcd nodes.
//
// Example: `["sg-51530134", "sg-b253d7cc"]`
// tectonic_aws_etcd_extra_sg_ids = ""
// The amount of provisioned IOPS for the root block device of etcd nodes.
// Ignored if the volume type is not io1.
tectonic_aws_etcd_root_volume_iops = "100"
// The size of the volume in gigabytes for the root block device of etcd nodes.
tectonic_aws_etcd_root_volume_size = "30"
// The type of volume for the root block device of etcd nodes.
tectonic_aws_etcd_root_volume_type = "gp2"
// (optional) List of subnet IDs within an existing VPC to deploy master nodes into.
// Required to use an existing VPC and the list must match the AZ count.
//
// Example: `["subnet-111111", "subnet-222222", "subnet-333333"]`
// tectonic_aws_external_master_subnet_ids = ""
// (optional) If set, the given Route53 zone ID will be used as the internal (private) zone.
// This zone will be used to create etcd DNS records as well as internal API and internal Ingress records.
// If set, no additional private zone will be created.
//
// Example: `"Z1ILINNUJGTAO1"`
// tectonic_aws_external_private_zone = ""
// (optional) ID of an existing VPC to launch nodes into.
// If unset a new VPC is created.
//
// Example: `vpc-123456`
// tectonic_aws_external_vpc_id = ""
// (optional) List of subnet IDs within an existing VPC to deploy worker nodes into.
// Required to use an existing VPC and the list must match the AZ count.
//
// Example: `["subnet-111111", "subnet-222222", "subnet-333333"]`
// tectonic_aws_external_worker_subnet_ids = ""
// (optional) Extra AWS tags to be applied to created resources.
// tectonic_aws_extra_tags = ""
// (optional) This configures master availability zones and their corresponding subnet CIDRs directly.
//
// Example:
// `{ eu-west-1a = "10.0.0.0/20", eu-west-1b = "10.0.16.0/20" }`
// tectonic_aws_master_custom_subnets = ""
// Instance size for the master node(s). Example: `t2.medium`.
tectonic_aws_master_ec2_type = "t2.medium"
// (optional) List of additional security group IDs for master nodes.
//
// Example: `["sg-51530134", "sg-b253d7cc"]`
// tectonic_aws_master_extra_sg_ids = ""
// (optional) Name of IAM role to use for the instance profiles of master nodes.
// The name is also the last part of a role's ARN.
//
// Example:
// * Role ARN = arn:aws:iam::123456789012:role/tectonic-installer
// * Role Name = tectonic-installer
// tectonic_aws_master_iam_role_name = ""
// The amount of provisioned IOPS for the root block device of master nodes.
// Ignored if the volume type is not io1.
tectonic_aws_master_root_volume_iops = "100"
// The size of the volume in gigabytes for the root block device of master nodes.
tectonic_aws_master_root_volume_size = "30"
// The type of volume for the root block device of master nodes.
tectonic_aws_master_root_volume_type = "gp2"
// (optional) If set to true, create private-facing ingress resources (ELB, A-records).
// If set to false, no private-facing ingress resources will be provisioned and all DNS records will be created in the public Route53 zone.
// tectonic_aws_private_endpoints = true
// (optional) If set to true, create public-facing ingress resources (ELB, A-records).
// If set to false, no public-facing ingress resources will be created.
// tectonic_aws_public_endpoints = true
// The target AWS region for the cluster.
tectonic_aws_region = "eu-west-1"
// Name of an SSH key located within the AWS region. Example: coreos-user.
tectonic_aws_ssh_key = "<redacted>"
// Block of IP addresses used by the VPC.
// This should not overlap with any other networks, such as a private datacenter connected via Direct Connect.
tectonic_aws_vpc_cidr_block = "10.0.0.0/16"
// (optional) This configures worker availability zones and their corresponding subnet CIDRs directly.
//
// Example: `{ eu-west-1a = "10.0.64.0/20", eu-west-1b = "10.0.80.0/20" }`
// tectonic_aws_worker_custom_subnets = ""
// Instance size for the worker node(s). Example: `t2.medium`.
tectonic_aws_worker_ec2_type = "t2.medium"
// (optional) List of additional security group IDs for worker nodes.
//
// Example: `["sg-51530134", "sg-b253d7cc"]`
// tectonic_aws_worker_extra_sg_ids = ""
// (optional) Name of IAM role to use for the instance profiles of worker nodes.
// The name is also the last part of a role's ARN.
//
// Example:
// * Role ARN = arn:aws:iam::123456789012:role/tectonic-installer
// * Role Name = tectonic-installer
// tectonic_aws_worker_iam_role_name = ""
// (optional) List of ELBs to attach all worker instances to.
// This is useful for exposing NodePort services via load-balancers managed separately from the cluster.
//
// Example:
// * `["ingress-nginx"]`
// tectonic_aws_worker_load_balancers = ""
// The amount of provisioned IOPS for the root block device of worker nodes.
// Ignored if the volume type is not io1.
tectonic_aws_worker_root_volume_iops = "100"
// The size of the volume in gigabytes for the root block device of worker nodes.
tectonic_aws_worker_root_volume_size = "30"
// The type of volume for the root block device of worker nodes.
tectonic_aws_worker_root_volume_type = "gp2"
// The base DNS domain of the cluster. It must NOT contain a trailing period. Some
// DNS providers will automatically add this if necessary.
//
// Example: `openstack.dev.coreos.systems`.
//
// Note: This field MUST be set manually prior to creating the cluster.
// This applies only to cloud platforms.
//
// [Azure-specific NOTE]
// To use Azure-provided DNS, `tectonic_base_domain` should be set to `""`
// If using DNS records, ensure that `tectonic_base_domain` is set to a properly configured external DNS zone.
// Instructions for configuring delegated domains for Azure DNS can be found here: https://docs.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns
tectonic_base_domain = "<redacted>"
// (optional) The content of the PEM-encoded CA certificate, used to generate Tectonic Console's server certificate.
// If left blank, a CA certificate will be automatically generated.
tectonic_ca_cert = "path/to/ca.crt"
// (optional) The content of the PEM-encoded CA key, used to generate Tectonic Console's server certificate.
// This field is mandatory if `tectonic_ca_cert` is set.
tectonic_ca_key = "path/to/ca.key"
// (optional) The algorithm used to generate tectonic_ca_key.
// The default value is currently recommended.
// This field is mandatory if `tectonic_ca_cert` is set.
tectonic_ca_key_alg = "RSA"
// (optional) This declares the IP range to assign Kubernetes pod IPs in CIDR notation.
// tectonic_cluster_cidr = "10.2.0.0/16"
// The name of the cluster.
// If used in a cloud-environment, this will be prepended to `tectonic_base_domain` resulting in the URL to the Tectonic console.
//
// Note: This field MUST be set manually prior to creating the cluster.
// Warning: Special characters in the name like '.' may cause errors on OpenStack platforms due to resource name constraints.
tectonic_cluster_name = "<redacted>"
// (optional) The Container Linux update channel.
//
// Examples: `stable`, `beta`, `alpha`
// tectonic_container_linux_channel = "stable"
// The Container Linux version to use. Set to `latest` to select the latest available version for the selected update channel.
//
// Examples: `latest`, `1465.6.0`
tectonic_container_linux_version = "latest"
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key algorithm.
// tectonic_ddns_key_algorithm = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key name.
// tectonic_ddns_key_name = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key secret.
// tectonic_ddns_key_secret = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server IP/host to register IP addresses to.
// tectonic_ddns_server = ""
// (optional) DNS prefix used to construct the console and API server endpoints.
// tectonic_dns_name = ""
// (optional) The path of the file containing the CA certificate for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variable `tectonic_etcd_servers` must also be set.
// tectonic_etcd_ca_cert_path = "/dev/null"
// (optional) The path of the file containing the client certificate for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_key_path` must also be set.
// tectonic_etcd_client_cert_path = "/dev/null"
// (optional) The path of the file containing the client key for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_cert_path` must also be set.
// tectonic_etcd_client_key_path = "/dev/null"
// The number of etcd nodes to be created.
// If set to zero, the count of etcd nodes will be determined automatically.
//
// Note: This is not supported on bare metal.
tectonic_etcd_count = "0"
// (optional) List of external etcd v3 servers to connect with (hostnames/IPs only).
// Needs to be set if using an external etcd cluster.
// Note: If this variable is defined, the installer will not create self-signed certs.
// To provide a CA certificate to trust the etcd servers, set "tectonic_etcd_ca_cert_path".
//
// Example: `["etcd1", "etcd2", "etcd3"]`
// tectonic_etcd_servers = ""
// (optional) If set to `true`, all etcd endpoints will be configured to use the "https" scheme.
//
// Note: If `tectonic_experimental` is set to `true` this variable has no effect, because the experimental self-hosted etcd always uses TLS.
// tectonic_etcd_tls_enabled = true
// If set to true, experimental Tectonic assets are being deployed.
tectonic_experimental = false
// The path to the tectonic license file.
// You can download the Tectonic license file from your Account overview page at [1].
//
// [1] https://account.coreos.com/overview
//
// Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.
tectonic_license_path = "path/to/tectonic-license.txt"
// The number of master nodes to be created.
// This applies only to cloud platforms.
tectonic_master_count = "1"
// (optional) Configures the network to be used in Tectonic. One of the following values can be used:
//
// - "flannel": enables overlay networking only. This is implemented by flannel using VXLAN.
//
// - "canal": [ALPHA] enables overlay networking including network policy. Overlay is implemented by flannel using VXLAN. Network policy is implemented by Calico.
//
// - "calico": [ALPHA] enables BGP based networking. Routing and network policy is implemented by Calico. Note this has been tested on baremetal installations only.
// tectonic_networking = "flannel"
// The path to the pull secret file in JSON format.
// This is known to be a "Docker pull secret" as produced by the docker login [1] command.
// A sample JSON content is shown in [2].
// You can download the pull secret from your Account overview page at [3].
//
// [1] https://docs.docker.com/engine/reference/commandline/login/
//
// [2] https://coreos.com/os/docs/latest/registry-authentication.html#manual-registry-auth-setup
//
// [3] https://account.coreos.com/overview
//
// Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.
tectonic_pull_secret_path = "path/to/config.json"
// (optional) This declares the IP range to assign Kubernetes service cluster IPs in CIDR notation.
// The maximum size of this IP range is /12
// tectonic_service_cidr = "10.3.0.0/16"
// If set to true, a vanilla Kubernetes cluster will be deployed, omitting any Tectonic assets.
tectonic_vanilla_k8s = false
// The number of worker nodes to be created.
// This applies only to cloud platforms.
tectonic_worker_count = "2"
But the `terraform apply` command fails with the following errors:
Error applying plan:
5 error(s) occurred:
* module.identity_certs.tls_locally_signed_cert.identity_server: 1 error(s) occurred:
* tls_locally_signed_cert.identity_server: no PEM block found in ca_private_key_pem
* module.kube_certs.tls_locally_signed_cert.apiserver: 1 error(s) occurred:
* tls_locally_signed_cert.apiserver: no PEM block found in ca_private_key_pem
* module.identity_certs.tls_locally_signed_cert.identity_client: 1 error(s) occurred:
* tls_locally_signed_cert.identity_client: no PEM block found in ca_private_key_pem
* module.ingress_certs.tls_locally_signed_cert.ingress: 1 error(s) occurred:
* tls_locally_signed_cert.ingress: no PEM block found in ca_private_key_pem
* module.kube_certs.tls_locally_signed_cert.kubelet: 1 error(s) occurred:
* tls_locally_signed_cert.kubelet: no PEM block found in ca_private_key_pem
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
I also tried with `-refresh=false` without any effect (link).
EDIT: Updated the config to add the `tectonic_ca_key_alg = "RSA"` parameter, which does not fix the issue.
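Note that, per the variable descriptions in the tfvars above, `tectonic_ca_cert` and `tectonic_ca_key` expect the PEM-encoded *content* of the certificate and key, not a file path; a literal string like `"path/to/ca.key"` contains no PEM block, which matches the error. A sketch of the fix using heredoc values in terraform.tfvars (the `...` stands for the actual base64 body of your PEM files):

```hcl
# Paste the PEM *content*, not the path, into the tfvars values.
tectonic_ca_cert = <<EOF
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
EOF

tectonic_ca_key = <<EOF
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
EOF
```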
Using what's in examples/kubernetes.tf by itself, I'm unable to `terraform init`; it fails with the following error:
$ terraform init
Initializing modules...
- module.kubernetes
- module.kubernetes.container_linux
- module.kubernetes.vpc
- module.kubernetes.etcd
- module.kubernetes.ignition_masters
- module.kubernetes.masters
- module.kubernetes.ignition_workers
- module.kubernetes.workers
- module.kubernetes.kube_certs
- module.kubernetes.etcd_certs
- module.kubernetes.ingress_certs
- module.kubernetes.identity_certs
- module.kubernetes.bootkube
- module.kubernetes.tectonic
- module.kubernetes.flannel_vxlan
- module.kubernetes.calico
- module.kubernetes.canal
Error getting plugins: module root:
module kubernetes.root: module container_linux: required variable "version" not set
$ terraform --version
Terraform v0.11.0
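A hedged guess based on the variable list earlier in this document: the `container_linux` module's `version` input appears to be fed from `tectonic_container_linux_version`, so setting it explicitly in the module block may clear the "required variable \"version\" not set" error. A sketch:

```hcl
module "kubernetes" {
  source = "git::https://github.com/coreos/terraform-aws-kubernetes.git"

  # Assumption: this maps to the container_linux module's "version" input.
  # "latest" resolves to the newest release on the selected update channel.
  tectonic_container_linux_version = "latest"

  # ...other required variables from examples/kubernetes.tf...
}
```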
I'm trying to set up a POC cluster for a bit of experimentation, and chose the vanilla method (see config at bottom).
At first, I had the same issue as #6. Running plan and apply again sorted it, but now running `kubectl cluster-info` gives me:
$ kubectl cluster-info
Kubernetes master is running at https://mydomain.com:443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
Running with `dump` just returns:
Unable to connect to the server: unexpected EOF
SSHing to the machine, hyperkube seemed to have restarted a few times. When it became stable, the logs were full of:
E0209 09:46:08.708370 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://mydomain.com:443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: EOF
`docker ps -a` gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9eecc428bfb4 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." About a minute ago Exited (0) About a minute ago quirky_newton
2b204d005d85 quay.io/coreos/hyperkube "/usr/bin/flock /v..." 2 minutes ago Exited (255) About a minute ago k8s_kube-apiserver_bootstrap-kube-apiserver-ip-10-102-39-231.eu-west-1.compute.internal_kube-system_7372638d060e8634f939e7d9638e7fb2_7
22ea417a2d6e quay.io/coreos/hyperkube "./hyperkube contr..." 20 minutes ago Up 20 minutes k8s_kube-controller-manager_bootstrap-kube-controller-manager-ip-10-102-39-231.eu-west-1.compute.internal_kube-system_593e5c19268a732b18cf733be361f7ef_0
adbd68ad12c2 quay.io/coreos/hyperkube "./hyperkube sched..." 20 minutes ago Up 20 minutes k8s_kube-scheduler_bootstrap-kube-scheduler-ip-10-102-39-231.eu-west-1.compute.internal_kube-system_9ed9a738aa21e46d4aa2be533a40fe37_0
67f6b3a5d35b gcr.io/google_containers/pause-amd64:3.0 "/pause" 21 minutes ago Up 21 minutes k8s_POD_bootstrap-kube-controller-manager-ip-10-102-39-231.eu-west-1.compute.internal_kube-system_593e5c19268a732b18cf733be361f7ef_0
a253488550f7 gcr.io/google_containers/pause-amd64:3.0 "/pause" 21 minutes ago Up 21 minutes k8s_POD_bootstrap-kube-scheduler-ip-10-102-39-231.eu-west-1.compute.internal_kube-system_9ed9a738aa21e46d4aa2be533a40fe37_0
0b6d728cbd27 gcr.io/google_containers/pause-amd64:3.0 "/pause" 21 minutes ago Up 21 minutes k8s_POD_bootstrap-kube-apiserver-ip-10-102-39-231.eu-west-1.compute.internal_kube-system_7372638d060e8634f939e7d9638e7fb2_0
fe3f47a56eb4 quay.io/coreos/bootkube:v0.8.1 "/bootkube start -..." 21 minutes ago Exited (1) About a minute ago gallant_noyce
17fbff44936f quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 21 minutes ago Exited (0) 21 minutes ago romantic_babbage
3c0738a79899 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (0) 22 minutes ago silly_bhabha
1998998cbb32 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (0) 23 minutes ago lucid_hopper
1c1014cc46e4 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (1) 23 minutes ago vigilant_swartz
aac978cfe59b quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (1) 23 minutes ago priceless_curran
6cd84c94bbe2 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (1) 23 minutes ago dazzling_kowalevski
c334f6942bdf quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (1) 23 minutes ago sharp_ride
57351683101c quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (1) 23 minutes ago peaceful_bassi
2067d4c77259 quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/bin/bash -c '\n ..." 23 minutes ago Exited (1) 23 minutes ago adoring_wing
ff330a2822ef quay.io/coreos/awscli:025a357f05242fdad6a81e8a6b520098aa65a600 "/detect-master.sh" 23 minutes ago Exited (0) 23 minutes ago hardcore_spence
Getting the logs of the 2b204 container:
core@ip-10-102-39-231 ~ $ docker logs 2b204d005d85
I0209 09:50:03.083580 5 server.go:114] Version: v1.8.4+coreos.0
I0209 09:50:03.084027 5 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider.
I0209 09:50:03.084119 5 aws.go:847] Building AWS cloudprovider
I0209 09:50:03.084176 5 aws.go:810] Zone not specified in configuration file; querying AWS metadata service
I0209 09:50:03.312930 5 tags.go:76] AWS cloud filtering on ClusterID: myclusterid
I0209 09:50:03.319499 5 aws.go:847] Building AWS cloudprovider
I0209 09:50:03.319593 5 aws.go:810] Zone not specified in configuration file; querying AWS metadata service
I0209 09:50:03.395295 5 tags.go:76] AWS cloud filtering on ClusterID: myclusterid
I0209 09:50:04.006413 5 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0209 09:50:04.007071 5 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
W0209 09:50:04.007977 5 admission.go:66] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
I0209 09:50:04.009754 5 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
[the same "ignoring ServerName for user-provided CA" warning repeats ~50 more times]
W0209 09:50:04.086404 5 genericapiserver.go:311] Skipping API batch/v2alpha1 because it has no resources.
W0209 09:50:04.098126 5 genericapiserver.go:311] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/02/09 09:50:04 log.go:33: [restful/swagger] listing is available at https://{public-ip}/swaggerapi
[restful] 2018/02/09 09:50:04 log.go:33: [restful/swagger] https://{public-ip}/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/02/09 09:50:04 log.go:33: [restful/swagger] listing is available at https://{public-ip}/swaggerapi
[restful] 2018/02/09 09:50:04 log.go:33: [restful/swagger] https://{public-ip}/swaggerui/ is mapped to folder /swagger-ui/
I0209 09:50:04.839076 5 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
I0209 09:50:07.354052 5 serve.go:85] Serving securely on 0.0.0.0:443
I0209 09:50:07.354245 5 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0209 09:50:07.354268 5 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0209 09:50:07.354290 5 controller.go:84] Starting OpenAPI AggregationController
I0209 09:50:07.354414 5 crd_finalizer.go:242] Starting CRDFinalizer
I0209 09:50:07.354436 5 available_controller.go:192] Starting AvailableConditionController
I0209 09:50:07.354446 5 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0209 09:50:07.354479 5 crdregistration_controller.go:112] Starting crd-autoregister controller
I0209 09:50:07.354489 5 controller_utils.go:1041] Waiting for caches to sync for crd-autoregister controller
I0209 09:50:07.354502 5 customresource_discovery_controller.go:152] Starting DiscoveryController
I0209 09:50:07.354518 5 naming_controller.go:277] Starting NamingConditionController
I0209 09:50:33.470964 5 trace.go:76] Trace[1055602432]: "Create /api/v1/namespaces/kube-system/pods" (started: 2018-02-09 09:50:23.470487469 +0000 UTC) (total time: 10.00043381s):
Trace[1055602432]: [10.00043381s] [10.000316949s] END
I0209 09:50:38.355775 5 trace.go:76] Trace[2073516846]: "Create /api/v1/namespaces" (started: 2018-02-09 09:50:08.355292443 +0000 UTC) (total time: 30.000458182s):
Trace[2073516846]: [30.000458182s] [30.000379983s] END
E0209 09:50:38.356286 5 client_ca_hook.go:78] Timeout: request did not complete within allowed duration
I0209 09:50:45.212831 5 trace.go:76] Trace[916872897]: "Create /api/v1/nodes" (started: 2018-02-09 09:50:15.212253831 +0000 UTC) (total time: 30.000553967s):
Trace[916872897]: [30.000553967s] [30.000416454s] END
I0209 09:50:50.964274 5 trace.go:76] Trace[1282210152]: "Create /api/v1/nodes" (started: 2018-02-09 09:50:20.963753931 +0000 UTC) (total time: 30.000496712s):
Trace[1282210152]: [30.000496712s] [30.000346264s] END
I0209 09:50:51.218273 5 trace.go:76] Trace[2066235820]: "Create /api/v1/nodes" (started: 2018-02-09 09:50:21.217643067 +0000 UTC) (total time: 30.00059804s):
Trace[2066235820]: [30.00059804s] [30.000444768s] END
I0209 09:50:51.406412 5 trace.go:76] Trace[2013168133]: "Create /api/v1/nodes" (started: 2018-02-09 09:50:21.405856313 +0000 UTC) (total time: 30.000531953s):
Trace[2013168133]: [30.000531953s] [30.000399377s] END
E0209 09:51:07.370854 5 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0209 09:51:07.371962 5 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:61: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0209 09:51:07.372021 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io)
E0209 09:51:07.372134 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *api.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas)
E0209 09:51:07.373788 5 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0209 09:51:07.373796 5 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:61: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
E0209 09:51:07.391639 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io)
E0209 09:51:07.391741 5 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
E0209 09:51:07.391794 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *api.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0209 09:51:07.391835 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io)
E0209 09:51:07.392139 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *api.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts)
E0209 09:51:07.392196 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
E0209 09:51:07.392865 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *api.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)
E0209 09:51:07.393203 5 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io)
E0209 09:51:08.355816 5 storage_rbac.go:172] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
I0209 09:51:08.357182 5 trace.go:76] Trace[1216105872]: "Create /api/v1/namespaces" (started: 2018-02-09 09:50:38.356848784 +0000 UTC) (total time: 30.000315483s):
Trace[1216105872]: [30.000315483s] [30.000273123s] END
E0209 09:51:08.357487 5 client_ca_hook.go:78] Timeout: request did not complete within allowed duration
F0209 09:51:08.357509 5 hooks.go:133] PostStartHook "ca-registration" failed: unable to initialize client CA configmap: timed out waiting for the condition
core@ip-10-102-39-231 ~ $
My config looks like this:
module "kubernetes" {
source = "coreos/kubernetes/aws"
tectonic_aws_assets_s3_bucket_name = "${var.s3_asset_bucket}"
tectonic_aws_etcd_ec2_type = "t2.small"
tectonic_aws_etcd_root_volume_iops = "100"
tectonic_aws_etcd_root_volume_size = "30"
tectonic_aws_etcd_root_volume_type = "gp2"
tectonic_aws_external_private_zone = "${data.aws_route53_zone.private_zone.zone_id}"
tectonic_aws_master_ec2_type = "t2.medium"
tectonic_aws_master_root_volume_iops = "100"
tectonic_aws_master_root_volume_size = "30"
tectonic_aws_master_root_volume_type = "gp2"
tectonic_aws_private_endpoints = true
tectonic_aws_profile = "${var.aws_profile}"
tectonic_aws_public_endpoints = true
tectonic_aws_region = "${var.primary_region}"
tectonic_aws_ssh_key = "${var.keypair_name}"
tectonic_aws_vpc_cidr_block = "${data.external.cidr.result.value}"
tectonic_aws_worker_ec2_type = "t2.medium"
tectonic_aws_worker_root_volume_iops = "100"
tectonic_aws_worker_root_volume_size = "30"
tectonic_aws_worker_root_volume_type = "gp2"
tectonic_base_domain = "${var.domain}"
tectonic_cluster_name = "${var.cluster_name}"
tectonic_container_linux_channel = "stable"
tectonic_container_linux_version = "latest"
tectonic_etcd_count = "0"
tectonic_etcd_tls_enabled = true
tectonic_license_path = ""
tectonic_master_count = "1"
tectonic_networking = "calico"
tectonic_pull_secret_path = ""
tectonic_tls_validity_period = "26280"
tectonic_vanilla_k8s = true
tectonic_worker_count = "3"
tectonic_admin_email = "${var.admin_email}"
tectonic_admin_password = "${data.external.admin_password.result.value}"
}
Any idea what might have gone wrong? Any help appreciated.
NB: logs sanitised.
Error: Error refreshing state: 1 error(s) occurred:
* module.kubernetes.module.kube_certs.output.id: At column 5, line 2: join: argument 1 should be type list, got type string in:
${sha1("
${join(" ",
local_file.apiserver_key.id,
local_file.apiserver_crt.id,
local_file.kube_ca_key.id,
local_file.kube_ca_crt.id,
local_file.kubelet_key.id,
local_file.kubelet_crt.id,
)}
")}
I fixed it this way:
output "id" {
value = "${sha1(join(" ", var.ca_list))}"
}
variable "ca_list" {
type = "list"
default = [
"local_file.apiserver_key.id",
"local_file.apiserver_crt.id",
"local_file.kube_ca_key.id",
"local_file.kube_ca_crt.id",
"local_file.kubelet_key.id",
"local_file.kubelet_crt.id",
]
}
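Note that the posted workaround hashes the quoted strings in `ca_list` literally, not the actual resource IDs. On Terraform 0.10/0.11 the IDs can be kept inline by wrapping the arguments in `list()` — a sketch of the intended form, not the upstream patch:

```hcl
output "id" {
  # join() requires a list argument; list() builds one from the
  # individual IDs, so sha1() hashes the real resource IDs rather
  # than quoted resource addresses.
  value = "${sha1(join(" ", list(
    local_file.apiserver_key.id,
    local_file.apiserver_crt.id,
    local_file.kube_ca_key.id,
    local_file.kube_ca_crt.id,
    local_file.kubelet_key.id,
    local_file.kubelet_crt.id
  )))}"
}
```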
I'm trying to test this out in an empty AWS account. Since I had to create and supply the Route53 zone name, I figured I could do something like this:
resource "aws_route53_zone" "tectonic" {
name = "tectonic.example.com"
}
module "kubernetes" {
source = "coreos/kubernetes/aws"
tectonic_base_domain = "${aws_route53_zone.tectonic.name}"
tectonic_cluster_name = "..."
tectonic_admin_email = "..."
tectonic_admin_password_hash = "..."
tectonic_aws_ssh_key = "..."
tectonic_license_path = "${path.module}/tectonic-license.txt"
tectonic_pull_secret_path = "${path.module}/config.json"
}
However this results in the following error:
$ terraform plan
Error: module.kubernetes.module.etcd_certs.tls_cert_request.etcd_peer: dns_names: should be a list
Error: module.kubernetes.module.etcd_certs.tls_cert_request.etcd_server: dns_names: should be a list
If I change my manifest to the below, it works, but there's now no guarantee that the zone will exist before the module needs it:
resource "aws_route53_zone" "tectonic" {
name = "tectonic.example.com"
}
module "kubernetes" {
source = "coreos/kubernetes/aws"
tectonic_base_domain = "tectonic.example.com"
tectonic_cluster_name = "..."
tectonic_admin_email = "..."
tectonic_admin_password_hash = "..."
tectonic_aws_ssh_key = "..."
tectonic_license_path = "${path.module}/tectonic-license.txt"
tectonic_pull_secret_path = "${path.module}/config.json"
}
If I add an output that just returns the value of ${aws_route53_zone.tectonic.name}, it returns a flat string exactly as it should, so I'm not sure why it's exploding the way it is.
This is with Terraform 0.10.8 and version 1.7.5-tectonic.1 of the module.
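One workaround sketch (assuming the module's `tectonic_aws_external_private_zone` input is acceptable here): keep `tectonic_base_domain` as a literal string and create the implicit dependency through the zone ID instead, so Terraform still orders the zone before the module:

```hcl
resource "aws_route53_zone" "tectonic" {
  name = "tectonic.example.com"
}

module "kubernetes" {
  source = "coreos/kubernetes/aws"

  # A literal string avoids passing a computed value into the cert modules.
  tectonic_base_domain = "tectonic.example.com"

  # Referencing the zone's ID makes the module depend on the zone resource,
  # so the zone is guaranteed to exist before the module needs it.
  tectonic_aws_external_private_zone = "${aws_route53_zone.tectonic.zone_id}"

  # ... remaining required arguments as in the config above ...
}
```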
Hi,
I've noticed that sometimes when I try to bring up a cluster, I get the following errors on the first plan/apply run. It looks like some kind of eventual-consistency issue, since running plan/apply again after these errors brings the cluster up successfully. Has anyone else seen this?
Error applying plan:
4 error(s) occurred:
I get this error when trying to init:
- module "container_linux": missing required argument "version"
The fix is:
source = "github.com/coreos/tectonic-installer//modules/container_linux?ref=0a22c73d39f67ba4bb99106a9e72322a47179736"
release_channel = "${var.tectonic_container_linux_channel}"
release_version = "${var.tectonic_container_linux_version}"
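Putting that fix together, the pinned module block would look like this (a sketch assembled from the lines above; the ref hash is the one quoted there):

```hcl
module "container_linux" {
  source          = "github.com/coreos/tectonic-installer//modules/container_linux?ref=0a22c73d39f67ba4bb99106a9e72322a47179736"
  release_channel = "${var.tectonic_container_linux_channel}"
  release_version = "${var.tectonic_container_linux_version}"
}
```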
I'm trying to customise the setup to separate masters and workers into different subnets (public and private), with workers communicating through a NAT gateway, using the Terraform script below:
provider "aws" {
region = "${var.aws_region}"
}
resource "aws_eip" "nat" {
count = 1
vpc = true
}
resource "aws_default_security_group" "default" {
vpc_id = "${module.vpc.vpc_id}"
ingress {
from_port = 8
to_port = 0
protocol = "icmp"
cidr_blocks = [
"0.0.0.0/0"]
}
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "${var.tectonic_cluster_name}"
cidr = "${var.vpc_cidr}"
azs = [
"us-west-1a"]
public_subnets = [
"10.10.11.0/24"]
private_subnets = [
"10.10.1.0/24"]
database_subnets = [
"10.10.21.0/24"]
elasticache_subnets = [
"10.10.31.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
reuse_nat_ips = true
external_nat_ip_ids = [
"${aws_eip.nat.*.id}"]
enable_vpn_gateway = false
create_database_subnet_group = true
tags = "${var.tags}"
private_subnet_tags = {
"kubernetes.io/cluster/${var.tectonic_cluster_name}" = "shared"
Owner = "rohit"
Environment = "${var.tectonic_cluster_name}"
Name = "${var.tectonic_cluster_name}"
}
database_subnet_tags = {
Owner = "rohit"
Environment = "${var.tectonic_cluster_name}"
Name = "${var.tectonic_cluster_name}"
}
elasticache_subnet_tags = {
Owner = "rohit"
Environment = "${var.tectonic_cluster_name}"
Name = "${var.tectonic_cluster_name}"
}
}
module "kubernetes" {
source = "coreos/kubernetes/aws"
tectonic_aws_assets_s3_bucket_name = "tectonic-cf"
tectonic_aws_region = "${var.aws_region}"
tectonic_aws_ssh_key = "itops"
tectonic_aws_vpc_cidr_block = "${var.vpc_cidr}"
tectonic_aws_public_endpoints = true
tectonic_base_domain = "${var.tectonic_base_domain}"
tectonic_cluster_name = "${var.tectonic_cluster_name}"
tectonic_container_linux_version = "latest"
tectonic_license_path = "/Users/rverma/dev/tectonic/tectonic-license.txt"
tectonic_pull_secret_path = "/Users/rverma/dev/tectonic/config.json"
tectonic_networking = "flannel"
tectonic_tls_validity_period = "26280"
tectonic_vanilla_k8s = false
tectonic_admin_email = "${var.tectonic_admin_email}"
tectonic_admin_password = "${var.tectonic_admin_password}"
tectonic_aws_external_vpc_id = "${module.vpc.vpc_id}"
tectonic_aws_external_private_zone = "***"
// tectonic_ca_cert = ""
// tectonic_ca_key = ""
// tectonic_ca_key_alg = "RSA"
tectonic_etcd_count = "0"
tectonic_aws_etcd_ec2_type = "${var.master_instance_type}"
tectonic_aws_etcd_root_volume_iops = "100"
tectonic_aws_etcd_root_volume_size = "30"
tectonic_aws_etcd_root_volume_type = "gp2"
tectonic_master_count = "1"
tectonic_aws_master_ec2_type = "${var.master_instance_type}"
tectonic_aws_external_master_subnet_ids = "${module.vpc.public_subnets}"
tectonic_aws_master_root_volume_iops = "100"
tectonic_aws_master_root_volume_size = "30"
tectonic_aws_master_root_volume_type = "gp2"
tectonic_worker_count = "${var.min_worker_count}"
tectonic_aws_external_worker_subnet_ids = "${module.vpc.private_subnets}"
tectonic_aws_worker_ec2_type = "${var.worker_instance_type}"
tectonic_aws_worker_root_volume_iops = "100"
tectonic_aws_worker_root_volume_size = "30"
tectonic_aws_worker_root_volume_type = "gp2"
}
I get warnings such as:
Warning: output "etcd_sg_id": must use splat syntax to access aws_security_group.etcd attribute "id", because it has "count" set; use aws_security_group.etcd.*.id to obtain a list of the attributes across all instances
Warning: output "aws_api_external_dns_name": must use splat syntax to access aws_elb.api_external attribute "dns_name", because it has "count" set; use aws_elb.api_external.*.dns_name to obtain a list of the attributes across all instances
Warning: output "aws_elb_api_external_zone_id": must use splat syntax to access aws_elb.api_external attribute "zone_id", because it has "count" set; use aws_elb.api_external.*.zone_id to obtain a list of the attributes across all instances
Warning: output "aws_api_internal_dns_name": must use splat syntax to access aws_elb.api_internal attribute "dns_name", because it has "count" set; use aws_elb.api_internal.*.dns_name to obtain a list of the attributes across all instances
Warning: output "aws_elb_api_internal_zone_id": must use splat syntax to access aws_elb.api_internal attribute "zone_id", because it has "count" set; use aws_elb.api_internal.*.zone_id to obtain a list of the attributes across all instances
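The splat warnings refer to outputs in the upstream modules; the pattern they suggest looks like this (a sketch of the general fix, not the actual upstream file):

```hcl
output "etcd_sg_id" {
  # aws_security_group.etcd has `count` set, so splat syntax returns a
  # list of IDs; element() over concat() with a padding element picks
  # the first instance's ID without failing when count is 0.
  value = "${element(concat(aws_security_group.etcd.*.id, list("")), 0)}"
}
```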
And errors such as:
module.kubernetes.module.vpc.data.aws_subnet.external_worker: data.aws_subnet.external_worker: value of 'count' cannot be computed
module.kubernetes.module.vpc.data.aws_subnet.external_master: data.aws_subnet.external_master: value of 'count' cannot be computed
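The "value of 'count' cannot be computed" errors arise because the module counts the externally supplied subnet IDs, which are unknown until module.vpc has been created. A common workaround (a general Terraform technique, not module-specific guidance) is a targeted apply so the VPC outputs are known before the kubernetes module is planned:

```
# First create only the VPC so its subnet IDs become known state...
terraform apply -target=module.vpc

# ...then plan/apply the rest of the configuration normally.
terraform apply
```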
module "kubernetes" {
source = "coreos/kubernetes/aws"
tectonic_admin_email = "[email protected]"
tectonic_admin_password = "xxxx"
tectonic_aws_etcd_ec2_type = "t2.medium"
tectonic_aws_worker_ec2_type = "t2.medium"
tectonic_aws_master_ec2_type = "t2.medium"
// The number of etcd nodes to be created.
// If set to zero, the count of etcd nodes will be determined automatically.
tectonic_etcd_count = "0"
tectonic_master_count = "1"
tectonic_worker_count = "2"
...
}
I see the following on terraform init or terraform get:
terraform get
Get: https://api.github.com/repos/coreos/terraform-aws-kubernetes/tarball/1.7.5-tectonic.1-rc.1?archive=tar.gz
Get: git::https://github.com/coreos/tectonic-installer.git?ref=1.7.5-tectonic.1-rc.1
Get: git::https://github.com/coreos/tectonic-installer.git?ref=1.7.5-tectonic.1-rc.1
Get: git::https://github.com/coreos/tectonic-installer.git?ref=1.7.5-tectonic.1-rc.1
Get: git::https://github.com/coreos/tectonic-installer.git?ref=1.7.5-tectonic.1-rc.1
Get: git::https://github.com/coreos/tectonic-installer.git?ref=1.7.5-tectonic.1-rc.1
Error loading modules: error downloading 'https://github.com/coreos/tectonic-installer.git?ref=1.7.5-tectonic.1-rc.1': /usr/bin/git exited with 128: Cloning into '.terraform/modules/8fd23db79a9f2409cd095466e7a3b7a9'...
fatal: BUG: initial ref transaction called with existing refs
Version:
Terraform v0.10.7
Any ideas?
e.g.:
contexts:
- context:
cluster: lab
user: kubelet
Should probably be:
contexts:
- name: lab
context:
cluster: lab
user: kubelet
If you do a config merge without setting a name, kubectl will auto-insert a name value of "".
Which file do I modify to get this working: the file in dir example/kubernetes, or the variable.tf in the top-level directory? The documentation was not clear. This will be a stock build without Tectonic features.
I'm trying out the module from the new Terraform registry. My configuration is below, with sensitive parts redacted:
module "kubernetes" {
source = "coreos/kubernetes/aws"
tectonic_admin_email = "[email protected]"
tectonic_admin_password_hash = "<redacted>"
tectonic_aws_ssh_key = "rubikloud-master"
tectonic_base_domain = "rubikloudcorp.com"
tectonic_cluster_name = "k8test"
tectonic_vanilla_k8s = true
tectonic_aws_private_endpoints = false
tectonic_aws_external_private_zone = "<redacted>"
tectonic_autoscaling_group_extra_tags = [ .. some tags .. ]
tectonic_aws_extra_tags {
.. some tags ..
}
}
The terraform apply fails after creating a few dozen resources with the following errors:
Error applying plan:
10 error(s) occurred:
* module.kubernetes.module.kube_certs.tls_private_key.kube_ca: 1 error(s) occurred:
* tls_private_key.kube_ca: unexpected EOF
* module.kubernetes.module.identity_certs.tls_cert_request.identity_server: 1 error(s) occurred:
* tls_cert_request.identity_server: unexpected EOF
* module.kubernetes.module.etcd_certs.tls_private_key.etcd_client: 1 error(s) occurred:
* tls_private_key.etcd_client: unexpected EOF
* module.kubernetes.module.etcd_certs.tls_private_key.etcd_peer: 1 error(s) occurred:
* tls_private_key.etcd_peer: unexpected EOF
* module.kubernetes.module.ingress_certs.tls_cert_request.ingress: 1 error(s) occurred:
* tls_cert_request.ingress: unexpected EOF
* module.kubernetes.module.etcd_certs.tls_cert_request.etcd_server: 1 error(s) occurred:
* tls_cert_request.etcd_server: unexpected EOF
* module.kubernetes.module.etcd_certs.tls_self_signed_cert.etcd_ca: unexpected EOF
* module.kubernetes.module.identity_certs.tls_cert_request.identity_client: connection is shut down
* module.kubernetes.module.kube_certs.tls_cert_request.apiserver: connection is shut down
* module.kubernetes.module.kube_certs.tls_cert_request.kubelet: connection is shut down
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
I tried setting tectonic_etcd_tls_enabled = false, but the same thing occurred.
It feels like I'm supposed to be providing some sort of cert for etcd, but the documentation seems to imply that this cert should be created for me by the module itself. I can't find much else to help diagnose.
Let me know if there's any additional information I can provide. Thanks!
I'm trying to frame up a Kubernetes demo for testing.
Although I couldn't find documentation on how to create a module, I looked at the example and, despite the note not to edit the file because make overwrites it, used it as the base of my Terraform config.
When I run terraform init, it goes through the process of loading all the supporting modules from GitHub, then throws some errors:
MacBook-Air-2:kuber cjefferies$ terraform init
Initializing modules...
- module.kubernetes
- module.kubernetes.container_linux
- module.kubernetes.vpc
- module.kubernetes.etcd
- module.kubernetes.ignition_masters
- module.kubernetes.masters
- module.kubernetes.ignition_workers
- module.kubernetes.workers
- module.kubernetes.dns
- module.kubernetes.kube_certs
- module.kubernetes.etcd_certs
- module.kubernetes.ingress_certs
- module.kubernetes.identity_certs
- module.kubernetes.bootkube
- module.kubernetes.tectonic
- module.kubernetes.flannel_vxlan
- module.kubernetes.calico
- module.kubernetes.canal
Warning: output "etcd_sg_id": must use splat syntax to access aws_security_group.etcd attribute "id", because it has "count" set; use aws_security_group.etcd.*.id to obtain a list of the attributes across all instances
Warning: output "aws_api_external_dns_name": must use splat syntax to access aws_elb.api_external attribute "dns_name", because it has "count" set; use aws_elb.api_external.*.dns_name to obtain a list of the attributes across all instances
Warning: output "aws_elb_api_external_zone_id": must use splat syntax to access aws_elb.api_external attribute "zone_id", because it has "count" set; use aws_elb.api_external.*.zone_id to obtain a list of the attributes across all instances
Warning: output "aws_api_internal_dns_name": must use splat syntax to access aws_elb.api_internal attribute "dns_name", because it has "count" set; use aws_elb.api_internal.*.dns_name to obtain a list of the attributes across all instances
Warning: output "aws_elb_api_internal_zone_id": must use splat syntax to access aws_elb.api_internal attribute "zone_id", because it has "count" set; use aws_elb.api_internal.*.zone_id to obtain a list of the attributes across all instances
Error: module "kubernetes": missing required argument "tectonic_admin_email"
Error: module "kubernetes": missing required argument "tectonic_admin_password"
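The two init errors just mean those variables have no defaults in the module; adding them to the module block clears the check. A sketch (the values below are placeholders, not real credentials):

```hcl
module "kubernetes" {
  source = "../../modules/terraform-aws-kubernetes"

  # Required: the module defines no defaults for these, so
  # `terraform init` fails without them.
  tectonic_admin_email    = "admin@example.com"
  tectonic_admin_password = "change-me"

  # ... rest of the arguments as in main.tf below ...
}
```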
Here is my main.tf:
module "kubernetes" {
source = "../../modules/terraform-aws-kubernetes"
// (optional) Extra AWS tags to be applied to created autoscaling group resources.
// This is a list of maps having the keys `key`, `value` and `propagate_at_launch`.
//
// Example: `[ { key = "foo", value = "bar", propagate_at_launch = true } ]`
// tectonic_autoscaling_group_extra_tags = ""
// (optional) Unique name under which the Amazon S3 bucket will be created. Bucket name must start with a lower case name and is limited to 63 characters.
// The Tectonic Installer uses the bucket to store tectonic assets and kubeconfig.
// If name is not provided the installer will construct the name using "tectonic_cluster_name", current AWS region and "tectonic_base_domain"
// tectonic_aws_assets_s3_bucket_name = ""
// Instance size for the etcd node(s). Example: `t2.medium`. Read the [etcd recommended hardware](https://coreos.com/etcd/docs/latest/op-guide/hardware.html) guide for best performance
tectonic_aws_etcd_ec2_type = "t2.medium"
// (optional) List of additional security group IDs for etcd nodes.
//
// Example: `["sg-51530134", "sg-b253d7cc"]`
// tectonic_aws_etcd_extra_sg_ids = ""
// The amount of provisioned IOPS for the root block device of etcd nodes.
// Ignored if the volume type is not io1.
tectonic_aws_etcd_root_volume_iops = "100"
// The size of the volume in gigabytes for the root block device of etcd nodes.
tectonic_aws_etcd_root_volume_size = "30"
// The type of volume for the root block device of etcd nodes.
tectonic_aws_etcd_root_volume_type = "gp2"
// (optional) List of subnet IDs within an existing VPC to deploy master nodes into.
// Required to use an existing VPC and the list must match the AZ count.
//
// Example: `["subnet-111111", "subnet-222222", "subnet-333333"]`
// tectonic_aws_external_master_subnet_ids = ""
// (optional) If set, the given Route53 zone ID will be used as the internal (private) zone.
// This zone will be used to create etcd DNS records as well as internal API and internal Ingress records.
// If set, no additional private zone will be created.
//
// Example: `"Z1ILINNUJGTAO1"`
// tectonic_aws_external_private_zone = ""
// (optional) ID of an existing VPC to launch nodes into.
// If unset a new VPC is created.
//
// Example: `vpc-123456`
// tectonic_aws_external_vpc_id = ""
// (optional) List of subnet IDs within an existing VPC to deploy worker nodes into.
// Required to use an existing VPC and the list must match the AZ count.
//
// Example: `["subnet-111111", "subnet-222222", "subnet-333333"]`
// tectonic_aws_external_worker_subnet_ids = ""
// (optional) Extra AWS tags to be applied to created resources.
// tectonic_aws_extra_tags = ""
// (optional) This configures master availability zones and their corresponding subnet CIDRs directly.
//
// Example:
// `{ eu-west-1a = "10.0.0.0/20", eu-west-1b = "10.0.16.0/20" }`
// tectonic_aws_master_custom_subnets = ""
// Instance size for the master node(s). Example: `t2.medium`.
tectonic_aws_master_ec2_type = "t2.medium"
// (optional) List of additional security group IDs for master nodes.
//
// Example: `["sg-51530134", "sg-b253d7cc"]`
// tectonic_aws_master_extra_sg_ids = ""
// (optional) Name of IAM role to use for the instance profiles of master nodes.
// The name is also the last part of a role's ARN.
//
// Example:
// * Role ARN = arn:aws:iam::123456789012:role/tectonic-installer
// * Role Name = tectonic-installer
// tectonic_aws_master_iam_role_name = ""
// The amount of provisioned IOPS for the root block device of master nodes.
// Ignored if the volume type is not io1.
tectonic_aws_master_root_volume_iops = "100"
// The size of the volume in gigabytes for the root block device of master nodes.
tectonic_aws_master_root_volume_size = "30"
// The type of volume for the root block device of master nodes.
tectonic_aws_master_root_volume_type = "gp2"
// (optional) If set to true, create private-facing ingress resources (ELB, A-records).
// If set to false, no private-facing ingress resources will be provisioned and all DNS records will be created in the public Route53 zone.
// tectonic_aws_private_endpoints = true
// (optional) This declares the AWS credentials profile to use.
tectonic_aws_profile = "acct_dev"
// (optional) If set to true, create public-facing ingress resources (ELB, A-records).
// If set to false, no public-facing ingress resources will be created.
// tectonic_aws_public_endpoints = true
// The target AWS region for the cluster.
tectonic_aws_region = "us-west-1"
// Name of an SSH key located within the AWS region. Example: coreos-user.
tectonic_aws_ssh_key = "myPemKey"
// Block of IP addresses used by the VPC.
// This should not overlap with any other networks, such as a private datacenter connected via Direct Connect.
tectonic_aws_vpc_cidr_block = "10.0.0.0/16"
// (optional) This configures worker availability zones and their corresponding subnet CIDRs directly.
//
// Example: `{ eu-west-1a = "10.0.64.0/20", eu-west-1b = "10.0.80.0/20" }`
// tectonic_aws_worker_custom_subnets = ""
// Instance size for the worker node(s). Example: `t2.medium`.
tectonic_aws_worker_ec2_type = "t2.medium"
// (optional) List of additional security group IDs for worker nodes.
//
// Example: `["sg-51530134", "sg-b253d7cc"]`
// tectonic_aws_worker_extra_sg_ids = ""
// (optional) Name of IAM role to use for the instance profiles of worker nodes.
// The name is also the last part of a role's ARN.
//
// Example:
// * Role ARN = arn:aws:iam::123456789012:role/tectonic-installer
// * Role Name = tectonic-installer
// tectonic_aws_worker_iam_role_name = ""
// (optional) List of ELBs to attach all worker instances to.
// This is useful for exposing NodePort services via load-balancers managed separately from the cluster.
//
// Example:
// * `["ingress-nginx"]`
// tectonic_aws_worker_load_balancers = ""
// The amount of provisioned IOPS for the root block device of worker nodes.
// Ignored if the volume type is not io1.
tectonic_aws_worker_root_volume_iops = "100"
// The size of the volume in gigabytes for the root block device of worker nodes.
tectonic_aws_worker_root_volume_size = "30"
// The type of volume for the root block device of worker nodes.
tectonic_aws_worker_root_volume_type = "gp2"
// The base DNS domain of the cluster. It must NOT contain a trailing period. Some
// DNS providers will automatically add this if necessary.
//
// Example: `openstack.dev.coreos.systems`.
//
// Note: This field MUST be set manually prior to creating the cluster.
// This applies only to cloud platforms.
//
// [Azure-specific NOTE]
// To use Azure-provided DNS, `tectonic_base_domain` should be set to `""`
// If using DNS records, ensure that `tectonic_base_domain` is set to a properly configured external DNS zone.
// Instructions for configuring delegated domains for Azure DNS can be found here: https://docs.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns
tectonic_base_domain = ""
// (optional) The content of the PEM-encoded CA certificate, used to generate Tectonic Console's server certificate.
// If left blank, a CA certificate will be automatically generated.
// tectonic_ca_cert = ""
// (optional) The content of the PEM-encoded CA key, used to generate Tectonic Console's server certificate.
// This field is mandatory if `tectonic_ca_cert` is set.
// tectonic_ca_key = ""
// (optional) The algorithm used to generate tectonic_ca_key.
// The default value is currently recommended.
// This field is mandatory if `tectonic_ca_cert` is set.
// tectonic_ca_key_alg = "RSA"
// (optional) This declares the IP range to assign Kubernetes pod IPs in CIDR notation.
// tectonic_cluster_cidr = "10.2.0.0/16"
// The name of the cluster.
// If used in a cloud-environment, this will be prepended to `tectonic_base_domain` resulting in the URL to the Tectonic console.
//
// Note: This field MUST be set manually prior to creating the cluster.
// Warning: Special characters in the name like '.' may cause errors on OpenStack platforms due to resource name constraints.
tectonic_cluster_name = ""
// (optional) The Container Linux update channel.
//
// Examples: `stable`, `beta`, `alpha`
// tectonic_container_linux_channel = "stable"
// The Container Linux version to use. Set to `latest` to select the latest available version for the selected update channel.
//
// Examples: `latest`, `1465.6.0`
tectonic_container_linux_version = "latest"
// (optional) A list of PEM encoded CA files that will be installed in /etc/ssl/certs on etcd, master, and worker nodes.
// tectonic_custom_ca_pem_list = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key algorithm.
// tectonic_ddns_key_algorithm = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key name.
// tectonic_ddns_key_name = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key secret.
// tectonic_ddns_key_secret = ""
// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server IP/host to register IP addresses to.
// tectonic_ddns_server = ""
// (optional) DNS prefix used to construct the console and API server endpoints.
// tectonic_dns_name = ""
// (optional) The size in MB of the PersistentVolume used for handling etcd backups.
// tectonic_etcd_backup_size = "512"
// (optional) The name of an existing Kubernetes StorageClass that will be used for handling etcd backups.
// tectonic_etcd_backup_storage_class = ""
// (optional) The path of the file containing the CA certificate for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variable `tectonic_etcd_servers` must also be set.
// tectonic_etcd_ca_cert_path = "/dev/null"
// (optional) The path of the file containing the client certificate for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_key_path` must also be set.
// tectonic_etcd_client_cert_path = "/dev/null"
// (optional) The path of the file containing the client key for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_cert_path` must also be set.
// tectonic_etcd_client_key_path = "/dev/null"
// The number of etcd nodes to be created.
// If set to zero, the count of etcd nodes will be determined automatically.
//
// Note: This is not supported on bare metal.
tectonic_etcd_count = "0"
// (optional) List of external etcd v3 servers to connect with (hostnames/IPs only).
// Needs to be set if using an external etcd cluster.
// Note: If this variable is defined, the installer will not create self-signed certs.
// To provide a CA certificate to trust the etcd servers, set "tectonic_etcd_ca_cert_path".
//
// Example: `["etcd1", "etcd2", "etcd3"]`
// tectonic_etcd_servers = ""
// (optional) If set to `true`, all etcd endpoints will be configured to use the "https" scheme.
//
// Note: If `tectonic_experimental` is set to `true` this variable has no effect, because the experimental self-hosted etcd always uses TLS.
// tectonic_etcd_tls_enabled = true
// The path to the Tectonic license file.
// You can download the Tectonic license file from your Account overview page at [1].
//
// [1] https://account.coreos.com/overview
//
// Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.
tectonic_license_path = ""
// The number of master nodes to be created.
// This applies only to cloud platforms.
tectonic_master_count = "1"
// (optional) Configures the network to be used in Tectonic. One of the following values can be used:
//
// - "flannel": enables overlay networking only. This is implemented by flannel using VXLAN.
//
// - "canal": [ALPHA] enables overlay networking including network policy. Overlay is implemented by flannel using VXLAN. Network policy is implemented by Calico.
//
// - "calico": [ALPHA] enables BGP-based networking. Routing and network policy are implemented by Calico. Note this has been tested on bare-metal installations only.
// tectonic_networking = "flannel"
// The path to the pull secret file in JSON format.
// This is known as a "Docker pull secret", as produced by the docker login [1] command.
// A sample JSON content is shown in [2].
// You can download the pull secret from your Account overview page at [3].
//
// [1] https://docs.docker.com/engine/reference/commandline/login/
//
// [2] https://coreos.com/os/docs/latest/registry-authentication.html#manual-registry-auth-setup
//
// [3] https://account.coreos.com/overview
//
// Note: This field MUST be set manually prior to creating the cluster unless `tectonic_vanilla_k8s` is set to `true`.
tectonic_pull_secret_path = ""
// (optional) This declares the IP range to assign Kubernetes service cluster IPs in CIDR notation.
// The maximum size of this IP range is /12.
// tectonic_service_cidr = "10.3.0.0/16"
// Validity period of the self-signed certificates (in hours).
// Default is 3 years.
// This setting is ignored if user provided certificates are used.
tectonic_tls_validity_period = "26280"
// If set to true, a vanilla Kubernetes cluster will be deployed, omitting any Tectonic assets.
tectonic_vanilla_k8s = true
// The number of worker nodes to be created.
// This applies only to cloud platforms.
tectonic_worker_count = "3"
}
I wish there were a little more explanation on getting a first setup running. Perhaps an example.tf
that provides the minimum workable variables to configure for an initial install.
Thanks,
Chris.
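For what it's worth, here is a minimal sketch of such an example.tf, using only variables that appear in the reference above. All values (cluster name, domain, region) are placeholders, and `tectonic_vanilla_k8s = true` is set so that no license or pull secret path is required:

```hcl
module "kubernetes" {
  source = "git::ssh://git@github.com/coreos/terraform-aws-kubernetes.git?ref=master"

  # Mandatory fields per the notes above; values are placeholders.
  tectonic_cluster_name = "testcluster"
  tectonic_base_domain  = "dev.example.org"

  # Deploy vanilla Kubernetes, omitting Tectonic assets,
  # so tectonic_license_path and tectonic_pull_secret_path can stay unset.
  tectonic_vanilla_k8s = true

  tectonic_aws_region = "eu-central-1"

  # Node counts; an etcd count of "0" lets the module decide automatically.
  tectonic_etcd_count   = "0"
  tectonic_master_count = "1"
  tectonic_worker_count = "2"
}
```

Everything else appears to have a workable default.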
Even though I haven't succeeded in setting up a running cluster yet, it looks like there are only predefined node roles: masters, etcd, and workers each get their own exclusive nodes.
Here is an example config that also points to that:
tectonic_aws_etcd_ec2_type = "t2.medium"
tectonic_aws_worker_ec2_type = "t2.medium"
tectonic_aws_master_ec2_type = "t2.medium"
tectonic_etcd_count = "0"
tectonic_master_count = "1"
tectonic_worker_count = "2"
I can understand that this may be the preferable setup for larger clusters, but for smaller teams it might be too much; one could save nodes and cost by assigning the master, etcd, and worker roles to the same nodes.
Do you disagree, and are there no plans to allow this kind of setup? Or would it be possible to include a more flexible cluster configuration?