kubernauts / tk8
CLI to deploy Kubernetes with RKE, EKS or Kubeadm and deploy additional addons
License: Apache License 2.0
Problem: As described here, the OSX version of sed
behaves a little differently from the Linux variant, which can cause certain abnormalities in the code. Therefore we need to replace the usage of sed
with a Go-native solution.
Example solutions:
https://gist.github.com/tdegrunt/045f6b3377f3f7ffa408
https://gist.github.com/dallarosa/b58b0e3425761e0a7cf6
https://socketloop.com/tutorials/golang-read-a-text-file-and-replace-certain-words
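The linked examples all boil down to reading the file, replacing the text, and writing it back. A minimal stdlib-only sketch (the file name, search terms, and helper names below are illustrative, not the actual tk8 code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// replaceInText is a portable stand-in for `sed s/from/to/g`.
func replaceInText(text, from, to string) string {
	return strings.ReplaceAll(text, from, to)
}

// replaceInFile rewrites path in place, replacing every occurrence
// of from with to.
func replaceInFile(path, from, to string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	return os.WriteFile(path, []byte(replaceInText(string(data), from, to)), 0644)
}

func main() {
	// Hypothetical usage: switch the OS in a generated config.
	if err := replaceInFile("config.yaml", "coreos", "centos"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Unlike shelling out to sed, this behaves identically on Linux and OSX.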
Describe the bug
When executing the process described here, TK8 Cattle AWS Provisioner with Terraform Rancher Provider, on Ubuntu 18.04 and CentOS Linux 7.6 (both amd64), at the end of the process the output of the command tk8 cluster install cattle-aws says everything was deployed, yet no resources are created on AWS or in Rancher.
Additional context
TK8-v0.7.1
Terraform-v0.12.2
config.yaml file:
config.txt
tk8 cluster install cattle-aws command output:
tk8_cluster_install_cattle-aws-output.txt
pip install -r kubespray/requirements.txt
fails in the tk8 container image:
Command "/usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-3xxUgv/bcrypt/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-Z8WEoI/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-3xxUgv/bcrypt/
which leads to:
/tk8 # tk8 cluster aws -i
Found Ansible at /usr/bin/ansible
ansible 2.5.5
config file = /tk8/ansible.cfg
configured module search path = ['/tk8/library']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.4 (default, May 1 2018, 11:38:09) [GCC 6.4.0]
Configuration folder already exists
coreos
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/tk8/kubespray/roles/vault/handlers/main.yml': line 44, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: unseal vault
^ here
Deploy AKS:
$ tk8 cluster aks install
Destroy AKS:
$ tk8 cluster aks destroy
Scale AKS:
$ tk8 cluster aks scale
Upgrade AKS:
$ tk8 cluster aks upgrade
Add REST API for TK8 Web, extend TK8 Web Interface
Prerequisites:
Terraform scripts are provided in a separate repo
config.yaml extended with an aks section
Issue: When no config file is found, the CLI exits with a panic; instead it should exit by logging a Fatal
message
[ec2-user@ip-10-0-0-137 bin]$ ./tk8 cluster aws -c
Found terraform at /usr/bin/terraform
Terraform v0.11.7
panic: fatal error config file: Config File "config" Not Found in "[/home/ec2-user /home/ec2-user/go/bin /tk8]"
goroutine 1 [running]:
github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd.glob..func2(0xae3560, 0xc420043040, 0x0, 0x1)
/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd/aws.go:248 +0x3fb5
github.com/kubernauts/tk8/vendor/github.com/spf13/cobra.(*Command).execute(0xae3560, 0xc420043030, 0x1, 0x1, 0xae3560, 0xc420043030)
/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/kubernauts/tk8/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xae3ee0, 0xae4140, 0xc4200e0700, 0xae4030)
/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/spf13/cobra/command.go:852 +0x334
github.com/kubernauts/tk8/vendor/github.com/spf13/cobra.(*Command).Execute(0xae3ee0, 0x0, 0xc420117f48)
/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd.Execute()
/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd/root.go:43 +0x31
main.main()
/home/ec2-user/go/src/github.com/kubernauts/tk8/main.go:20 +0x20
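The desired behavior can be sketched with a stdlib-only config lookup that returns an error and lets main log a fatal message instead of panicking (the function name and search paths below are illustrative, not viper's actual API):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// findConfig searches paths for name and returns an error instead of
// panicking when the file is missing.
func findConfig(name string, paths []string) (string, error) {
	for _, p := range paths {
		full := filepath.Join(p, name)
		if _, err := os.Stat(full); err == nil {
			return full, nil
		}
	}
	return "", fmt.Errorf("config file %q not found in %v", name, paths)
}

func main() {
	// Demo with a throwaway directory so the example is self-contained.
	dir, err := os.MkdirTemp("", "tk8")
	if err != nil {
		log.Fatalln(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "config.yaml"), []byte("provisioner: aws\n"), 0644); err != nil {
		log.Fatalln(err)
	}
	path, err := findConfig("config.yaml", []string{dir})
	if err != nil {
		log.Fatalln(err) // clean exit with status 1, no goroutine dump
	}
	fmt.Println("using config:", path)
}
```

log.Fatalln exits with status 1 after printing one line, which is far friendlier than the stack trace above.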
$ tk8 cluster install openshift
Use Dave Kerr's implementation:
https://github.com/dwmkerr/terraform-aws-openshift
Enhance the implementation:
Make the number of nodes configurable
Make admin pwd configurable
Add users with project admin and cluster-admin roles
Add helm support
Investigate OKD 3.11 support
Investigate if Keycloak integration works
Is your feature request related to a problem? Please describe.
I'm always frustrated when [...] my rigid corporate environment has an existing VPC with security controls which they want me to use for hosting rancher.
Describe the solution you'd like
Please let me specify a VPC to install rancher into rather than creating a new VPC.
Describe alternatives you've considered
Jumping off a bridge. Becoming a chicken farmer. Going into politics.
Additional context
Describe the bug
When tk8 is run in a directory which does not have config.yaml, it tries to auto-generate it, saying:
No default config was provided. Generating one for you.
The auto-generated config.yaml is only usable for the aws provisioner and not for others.
To Reproduce
Steps to reproduce the behavior:
Run the tk8 binary in an empty folder (no config.yaml present): tk8 cluster install cattle-aws
Expected behavior
We have two options in this case:
1. Instead of auto-generating config.yaml, tell the user to create the config.yaml before running the command, since we cannot generate multiple configs for multiple provisioners.
2. Implement the code for mapping between different config.yaml's for different provisioners.
Kazimierz Sroka wrote to me:
I'm trying to set up kubernetes cluster using https://github.com/kubernauts/tk8 at v0.4.3.
I configured everything properly, and after running tk8 cluster openstack --create terraform completes successfully, but I cannot ssh into the bastion.
I am sure I am using the correct key (there's only one on the OpenStack, and when I create a new VM manually I can ssh properly). I am getting Permission denied when trying with root, centos, ubuntu or whatever I put in cluster.tfvars.
Could you help me debug this or point to someone who can?
I noticed that openstack folder was removed from master branch, why is that?
The tk8 installation for AWS uses CoreOS by default, which is somewhat hard-coded.
Changing the os from coreos to centos in config.yaml requires running the playbook manually with:
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=centos -e bootstrap_os=centos -b --become-user=root --flush-cache
The installation with CoreOS somehow doesn't work anymore with the latest kubespray (or our own extended version for OpenStack); after the installation some pods stay in ContainerCreating state:
$ k get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-527gv 0/1 ContainerCreating 0 12m
kube-system calico-node-84nvz 0/1 ContainerCreating 0 12m
and this is apparently due to:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m default-scheduler Successfully assigned kube-dns-7bd4d5fbb6-cths7 to ip-10-250-208-170.eu-central-1.compute.internal
Warning FailedMount 1m (x11 over 7m) kubelet, ip-10-250-208-170.eu-central-1.compute.internal MountVolume.SetUp failed for volume "kube-dns-config" : configmaps "kube-dns" is forbidden: User "system:node:kubernetes-k8s1-worker1" cannot get configmaps in the namespace "kube-system": no path found to object
Warning FailedMount 1m (x11 over 7m) kubelet, ip-10-250-208-170.eu-central-1.compute.internal MountVolume.SetUp failed for volume "kube-dns-token-mvppm" : secrets "kube-dns-token-mvppm" is forbidden: User "system:node:kubernetes-k8s1-worker1" cannot get secrets in the namespace "kube-system": no path found to object
Warning FailedMount 1m (x3 over 5m) kubelet, ip-10-250-208-170.eu-central-1.compute.internal Unable to mount volumes for pod "kube-dns-7bd4d5fbb6-cths7_kube-system(d8672a57-7b12-11e8-826a-02336ebc6866)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"kube-dns-7bd4d5fbb6-cths7". list of unmounted volumes=[kube-dns-config kube-dns-token-mvppm]. list of unattached volumes=[kube-dns-config kube-dns-token-mvppm]
We shall change the default OS type to CentOS for now, since we're using CentOS on OpenStack as well :-)
Describe the bug
If the user installs an addon which is not local, the command fetches the code but does not install the add-on on the cluster. On the second run of the command it works.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
with the first command, the addon get cloned and installed in one step
As suggested by Arush:
Instead of depending on the user to provide their private SSH key, we should add an option in tk8 to create an SSH key, add that to the cloud provider (I am mainly talking about AWS here, but we can look into OpenStack as well), and then use that key for our purposes, just as Flux does it.
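The key-generation half can be done with the Go standard library alone; a minimal sketch (the output file name and key size are illustrative). Uploading the public half to AWS would then go through the EC2 ImportKeyPair API via aws-sdk-go, which is omitted here:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// generateKeyPEM creates a new RSA private key and returns it PEM-encoded.
func generateKeyPEM(bits int) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, bits)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}), nil
}

func main() {
	pemBytes, err := generateKeyPEM(2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// 0600 so ssh accepts the key file.
	if err := os.WriteFile("tk8_id_rsa.pem", pemBytes, 0600); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote tk8_id_rsa.pem")
}
```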
The AWS IAM Authenticator is not needed for TK8 K8s deployment, please remove the link from the prerequisites:
Add version command to tk8.
For example,
$ tk8 version
0.0.1
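A dependency-free sketch of such a command; in the real CLI this would presumably be a cobra subcommand, but the pattern is the same (the ldflags injection shown in the comment is the usual way to stamp the version at build time):

```go
package main

import (
	"fmt"
	"os"
)

// version would be injected at build time, e.g.
//   go build -ldflags "-X main.version=0.0.1"
var version = "0.0.1"

func versionString() string { return version }

func main() {
	if len(os.Args) > 1 && os.Args[1] == "version" {
		fmt.Println(versionString())
		return
	}
	fmt.Println("usage: tk8 version")
}
```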
We need to add support for goreleaser for the next release, as it not only eases the release process but also provides various other features, such as built-in CI support for all the major CI platforms and various release formats including, but not limited to, binary, deb and rpm.
Is your feature request related to a problem? Please describe.
Provide users a way to set up the kube-prometheus stack via the TK8 CLI.
Describe the solution you'd like
Create a TK8 addon which will set up all the necessary components of the kube-prometheus stack.
tk8 cluster install rke doesn't work (e.g. as mentioned on this post):
https://blog.kubernauts.io/using-rke-as-a-cluster-provisioner-with-tk8-543c13a5ec35
Is your feature request related to a problem? Please describe.
*ctl is a common naming pattern for CLIs; tk8ctl would be a better name for the tk8 CLI tool :-)
Describe the solution you'd like
Rename tk8 to tk8ctl by the next release.
As an add-on it would be desirable to be able to install your own registry.
Harbor could be an interesting solution here.
harbor-git
CNCF to Host Harbor in the Sandbox
Is your feature request related to a problem? Please describe.
No
Describe the solution you'd like
GitHub has a feature to clearly define code owners, i.e. which file in the repo is owned by whom.
Describe alternatives you've considered
none
Additional context
More Info can be found at: https://blog.github.com/2017-07-06-introducing-code-owners/
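Code owners are declared in a CODEOWNERS file at the repo root (or in .github/). A hypothetical fragment for this repo (the team and user names are placeholders, not actual maintainers):

```
# Hypothetical CODEOWNERS; last matching pattern wins
*        @kubernauts/tk8-maintainers
/cmd/    @maintainer-a
/docs/   @maintainer-b
```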
Please provide vault-operator support as an add-on:
Via CLI:
$ tk8 addon install vault-operator
Via TK8 Web (REST API)
tk8 cluster eks --create
should install an eks cluster (implementation is already done by infinitydon)
(pls. provide darwin and linux binaries)
wget https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws
sudo cp heptio-authenticator-aws /usr/local/bin/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxx
tk8 cluster eks --create
export KUBECONFIG=kubeconfig
k get nodes
Flags in general are used to report deviations from the standard. For example, you could tell the CLI where to look for a config or which OS to use.
However, the main procedures should not be controlled with flags.
For example, the command 'tk8 cluster aws -c -i -d' is a complete contradiction in itself, and its meaning is not visible at first glance.
Here, CLI arguments should be used for the main procedures, with flags as additional control. For example: 'tk8 cluster aws -f /myconfig.yml create'
Hi,
I have been trying to install an EKS cluster on AWS, using the TK8 repository and following the EKS installation content. But when I execute the command below, I get a Terraform error.
○ → pwd
/opt/kubernetes
2019-05-02 23:09:20 ⌚ cloudyanke in /opt/kubernetes
Cloning into 'eks'...
[tk8 ASCII art banner]
Found kubectl at /usr/bin/kubectl
2019/05/02 23:05:33 Client Version: v1.13.4
2019/05/02 23:05:33 Terraform binary exists in the installation folder, terraform version:
2019/05/02 23:05:33 Terraform v0.11.13
2019/05/02 23:05:33 starting terraform init
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
2019/05/02 23:05:33 starting terraform apply
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration
would mark everything for destruction, which is normally not what is desired.
If you would like to destroy everything, please run 'terraform destroy' which
does not require any configuration files.
2019/05/02 23:05:33 Exporting kubeconfig file to the installation folder
2019/05/02 23:05:33 To use the kubeconfig file, do the following:
2019/05/02 23:05:33 export KUBECONFIG=~/inventory/TK8EKS/provisioner/kubeconfig
2019/05/02 23:05:33 Exporting Worker nodes config-map to the installation folder
2019/05/02 23:05:33 Creating Worker Nodes
error: no objects passed to apply
2019/05/02 23:05:33 Worker nodes are coming up one by one, it will take some time depending on the number of worker nodes you specified
○ → ls
aws-iam-authenticator config.yaml inventory provisioner terraform tk8.pem
When I investigated the common.go file, I found that the terraform init command is run from the wrong path, which is why Terraform could not initialize the configuration files.
2019-05-02 23:08:19 ⌚ cloudyanke in /opt/kubernetes/inventory/TK8EKS/provisioner/eks
○ → terraform init
Initializing modules...
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
When creating the default config, it should prompt for AWS credentials when no other credential source (i.e. IAM instance profile or ~/.aws/credentials file) is available.
We have the following external packages imported in the code:
import (
...
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
Now everyone who wishes to build the code has to fetch these packages individually, which could become a challenge as the project size and the number of imported packages increase. Therefore we should use a package manager (recommended: dep) to take care of this problem, so that users just have to issue a single command, i.e. dep ensure, to get all the external packages, and they will be all set to build the code.
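With dep, the pinned dependencies would live in a Gopkg.toml at the repo root; a hypothetical fragment (the version numbers are illustrative, not the versions tk8 actually pins):

```toml
# Hypothetical Gopkg.toml for dep
[[constraint]]
  name = "github.com/spf13/cobra"
  version = "0.0.3"

[[constraint]]
  name = "github.com/spf13/viper"
  version = "1.0.2"
```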
Please check with kube-bench the security grade of tk8-made clusters:
https://github.com/aquasecurity/kube-bench
$SUBJECT
[ec2-user@ip-172-31-... ~]$ sudo yum update
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main | 2.1 kB 00:00:00
amzn-updates | 2.5 kB 00:00:00
No packages marked for update
[ec2-user@ip-172-31-... ~]$ sudo yum install gnupg gnupg1 gnupg2
Loaded plugins: priorities, update-motd, upgrade-helper
Package gnupg-1.4.19-1.28.amzn1.x86_64 already installed and latest version
Package gnupg-1.4.19-1.28.amzn1.x86_64 already installed and latest version
Package gnupg2-2.0.28-1.30.amzn1.x86_64 already installed and latest version
Nothing to do
[ec2-user@ip-172-31-... ~]$ sudo docker build -t tk8 ./tk8/
Sending build context to Docker daemon 20.15MB
Step 1/4 : FROM ubuntu
---> 452a96d81c30
Step 2/4 : ARG TERRVERSION=0.11.5
---> Using cache
---> b9546c0ac7c2
Step 3/4 : COPY ./tk8 /usr/local/bin/tk8
---> Using cache
---> 568cdde3aa48
Step 4/4 : RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367 && apt-get update && apt-get install -y python-pip zip wget git && pip install ansible netaddr && wget https://releases.hashicorp.com/terraform/${TERRVERSION}/terraform_${TERRVERSION}_linux_amd64.zip && unzip terraform_${TERRVERSION}_linux_amd64.zip -d /usr/local/bin/ && rm terraform_${TERRVERSION}_linux_amd64.zip && mkdir /tk8 && chmod +x /usr/local/bin/tk8
---> Running in f4a7ea87a2ec
E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
The command '/bin/sh -c apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367 && apt-get update && apt-get install -y python-pip zip wget git && pip install ansible netaddr && wget https://releases.hashicorp.com/terraform/${TERRVERSION}/terraform_${TERRVERSION}_linux_amd64.zip && unzip terraform_${TERRVERSION}_linux_amd64.zip -d /usr/local/bin/ && rm terraform_${TERRVERSION}_linux_amd64.zip && mkdir /tk8 && chmod +x /usr/local/bin/tk8' returned a non-zero code: 255
[ec2-user@ip-172-31-... ~]$
Are there any plans to support bringing up a cluster on bare metal / Vagrant-based environments?
As we found out at yesterday's meetup in Cologne, the baremetal command is currently not available in tk8.
Is your feature request related to a problem? Please describe.
No
Describe the solution you'd like
An interactive prompt in the CLI. The user can select the provisioner, and the config.yaml for that provisioner can be applied. The user does not need to manually specify the provisioner name in the command.
Describe alternatives you've considered
NA
Additional context
NA
There are some issues with calico, so we need to make it configurable which kind of network plugin to use. With flannel we currently have no issues.
TODO:
Describe the bug
Error: UPGRADE FAILED: Internal error occurred: failed calling admission webhook "issuers.admission.certmanager.k8s.io": the server is currently unable to handle the request
To Reproduce
tk8 cluster install aws
tk8 addon install rancher
Expected behavior
Rancher should Just Work.
Additional Information
I don't even understand how cert-manager is in play here since it's not in the main.yaml.
A quick Google search suggests that the cert-manager CRDs might have to be installed first, which is absent from the main.yaml, but I'm not sure if this is a rancher problem or a tk8 problem: cert-manager/cert-manager#1149
Currently --config is configured as a global flag:
Global Flags:
--config string uses the config.yaml
Ideally, config.yaml is only required while working with the cluster aws command and not in any other case. Therefore the flag shouldn't be a global flag, but instead should be specific to the aws or cluster command.
Describe the bug
Currently, there is not much difference between the Install and Upgrade functionality. Install with a modified configuration does an Upgrade as well.
To Reproduce
Steps to reproduce the behavior:
1. Run tk8 cluster install rke/aws.
2. Modify config.yaml and run tk8 cluster install rke/aws again.
3. This performs an Upgrade and not an Install.
Expected behavior
Install should only set up a cluster and infra if there is no current cluster with the same name.
Upgrade should only work if there is an existing cluster.
contents to follow...
Instead of having all the application logic in the cmd folder, we should have a separate package containing the business logic, for better code structure. E.g.:
move addon.go to internal/addon/addon.go
move aws.go to internal/cluster/aws.go
move bare.go to internal/cluster/baremetal.go
move openstack.go to internal/cluster/openstack.go
Please investigate how to add Kafka Confluent & Strimzi support as an add-on with tk8.
Currently, with Rancher support, we can run Kafka on tk8-made k8s clusters with Confluent from the Rancher Catalog, but it doesn't work as it should (?).
Please provide a manual install guide first, and automate it with tk8 and/or helm later.
The getting started link on the SUMMARY.md gives 404:
https://github.com/kubernauts/tk8/blob/master/docs/en/SUMMARY.md
Click on getting started:
https://github.com/kubernauts/tk8/blob/master/docs/en/README.md
tk8 addon --rancher should install this manifest:
https://raw.githubusercontent.com/arashkaffamanesh/tk8/master/rancher/rancher.yaml
And we need to provide a how-to about exposing the Rancher server via ingress / loadbalancer, and write a blog post on Medium.
And the help should be extended:
Flags:
-m, --heapster Deploy Heapster
-h, --help help for addon
-l, --ltaas Deploy Load Testing As A Service
-p, --prom Deploy prometheus
-r, --rancher Deploy Rancher Server
Is this project still active?
It should be simple to add add-ons to the project. The aim is for third-party developers to be able to write add-ons and use them with, or make them available through, TK8. Ideally, these add-ons can be installed and used simply by specifying a git repository.
It should also be possible for an add-on to extend the later REST API with its own endpoints. This should be taken into account in the conceptual design.
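One way to frame such a contract is a small Go interface that every add-on implements; everything below (interface, method names, repo URL) is a hypothetical design sketch, not existing tk8 code:

```go
package main

import "fmt"

// Addon is a hypothetical plugin contract for third-party add-ons.
// An add-on lives in its own git repository; tk8 would clone it and
// invoke Install against the target cluster.
type Addon interface {
	Name() string
	Install(kubeconfig string) error
}

// gitAddon is a sketch of an add-on fetched from a git repository.
type gitAddon struct {
	repo string
}

func (g gitAddon) Name() string { return g.repo }

func (g gitAddon) Install(kubeconfig string) error {
	// Real code would clone g.repo and apply its manifests
	// against the cluster described by kubeconfig.
	fmt.Printf("installing %s with %s\n", g.repo, kubeconfig)
	return nil
}

func main() {
	var a Addon = gitAddon{repo: "github.com/example/tk8-addon-demo"}
	if err := a.Install("~/.kube/config"); err != nil {
		fmt.Println("install failed:", err)
	}
}
```

A REST endpoint extension could be a second, optional interface that add-ons implement only when they expose an API.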
An autocomplete feature should be added, through which we can generate an autocompletion shell script, at least for zsh and bash.
@MuellerMH Can we also add the number of bastion hosts in the config as an option?
As it really doesn't make sense to me to have 2 bastion hosts.
We're using our own extended version of kubespray, mainly for OpenStack support. The OpenStack support is now available in upstream kubespray, so please adapt init.go to pull from upstream.
@infinitydon
As we discussed in the meetup, here is my bug ticket:
I tried to create a cluster on aws with this config:
...
aws_ssh_keypair: "[email protected]"
...
Error message in tk8 cli:
Error: Error applying plan:
6 error(s) occurred:
* aws_instance.k8s-master: 1 error(s) occurred:
* aws_instance.k8s-master: Error launching source instance: InvalidKeyPair.NotFound: The key pair 'namedomaincom.pem' does not exist
status code: 400, request id: e215e873-d547-4a85-80a1-a6f0d0897db6
* aws_instance.bastion-server[0]: 1 error(s) occurred:
My key location and name:
sudo docker run -it --name tk8 -v ~/.ssh/:/root/.ssh/ -v "$(pwd)":/tk8 kubernautslabs/tk8 sh
ls /root/.ssh/
[email protected]
We have our keys named like this in AWS:
"[email protected]"