
tk8's Introduction


TK8: A multi-cloud, multi-cluster Kubernetes platform installation and integration tool

TK8 is a command line tool written in Go. It fully automates the installation of Kubernetes in any environment. With TK8, you are able to centrally manage different Kubernetes clusters with different configurations. In addition, TK8, with its simple add-on integration, offers the possibility to quickly, cleanly and easily distribute extensions to the different Kubernetes clusters.

These include a JMeter cluster for load testing, Prometheus for monitoring, Jaeger, Linkerd or Zipkin for tracing, the Ambassador API Gateway with Envoy for ingress and load balancing, Istio as a service mesh solution, and Jenkins X for CI/CD integration. In addition, the add-on system also supports the management of Helm packages.

Table of contents

The documentation as well as a detailed table of contents can be found here.

Installation

The TK8 CLI requires some dependencies to perform its tasks. For now you need to install these yourself, but we are working on a setup script that will do this for you.

Terraform

Terraform is required to automatically set up the infrastructure in the desired environment. Terraform Installation

Ansible

Ansible is required to run the automated installation routines in the newly created environment. Ansible Installation

Kubectl

Kubectl is needed by the CLI to roll out the add-ons and by you to access your clusters. Kubectl Installation

Python and pip

The automated routines use Python scripts, which rely on pip to install their dependencies. Python Installation pip Installation
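
A quick way to check that these prerequisites are available on your PATH (each command should print a version):

terraform version
ansible --version
kubectl version --client
python --version
pip --version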

AWS IAM Authenticator

If you want to install an EKS cluster with TK8, the AWS IAM Authenticator needs to be available on your system and must be executable (chmod +x <path-to-binary>). It is preferred to place the binary in a $PATH location, e.g. /usr/local/bin. It is included in the EKS provisioner package of the TK8 CLI or can be found at the given link.
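
For example, on Linux you could fetch the binary and place it on your PATH like this (the URL points to an older release and is only an example; newer releases may live elsewhere):

wget https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws
sudo mv heptio-authenticator-aws /usr/local/bin/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator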

Usage

The different target platforms are described separately and in detail in the documentation. Here we would like to give you just one example, using AWS.

You can get the binary in one of the following ways:

  • Download the executable file for your operating system from the release section.
  • Use go get -u github.com/kubernauts/tk8 to let Go fetch the repository along with its dependencies and build the executable for you.
  • Build your own version using the go build command.
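
As a rough sketch of the build-from-source option (assuming a working Go toolchain and a GOPATH-style setup, which this project's vendor/ layout suggests):

go get -d github.com/kubernauts/tk8
cd $GOPATH/src/github.com/kubernauts/tk8
go build -o tk8 .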

Create a separate folder and store the executable binary there. A configuration file is also required; an example is available as config.yaml.example. Add the necessary parameters for your cluster along with the AWS API credentials. Alternatively, export the AWS API credentials as environment variables, because parts of the CLI (the EKS cluster) need them there:

export AWS_SECRET_ACCESS_KEY=xxx
export AWS_ACCESS_KEY_ID=xxx

Then execute the CLI with the command: tk8 cluster install aws

With this command, the TK8 CLI creates all of the required resources in AWS and installs Kubernetes on them.

If you no longer need the cluster, you can use the command: tk8 cluster destroy aws to automatically remove all of the resources.

Add-Ons

You might want to check out our numerous add-ons for TK8:

(Please note that the READMEs for the bottom two (Vault Operator, Rancher) are not ready yet. We will provide them shortly so you can explore these add-ons too!)

Stay tuned, as there is more to come from our lovely community and ourselves! You can also develop your own add-ons; just check the section below.

Contributing

For provisioning the add-ons we have separate documentation and examples showing how you can build your own extensions and integrate them into the TK8 project. You can also reach us on Slack.

For platform providers we have separate documentation dedicated to integrating a platform into TK8. There you will find detailed instructions and examples of how TK8 executes your integration, or you can reach us on Slack.

To join the community and participate in the discussions going around, you can create an issue or get in touch with us in Slack.

Join us on Kubernauts Slack Channel

Credits

The founder and initiator of this project is Arash Kaffamanesh, founder and CEO of Cloudssky GmbH and Kubernauts GmbH.

The project is supported by cloud computing experts from Cloudssky GmbH and Kubernauts GmbH: Christopher Adigun, Arush Salil, Manuel Müller, Nikita, and Anoop.

A big thanks goes to the contributors of Kubespray whose great work we use as a basis for the setup and installation of Kubernetes in the AWS Cloud.

Furthermore, we would like to thank the contributors of kubeadm, which is now part not only of the Kubespray project but also of TK8.

Also, a big thank you to Wesley Charles Blake, whose work forms the basis of our EKS integration.

License

TK8: Apache License
EKS integration: MIT License

tk8's People

Contributors

anoopl, arashkaffamanesh, arush-sal, gitbook-bot, infinitydon, ishantanu, mollotoff, muellermh, nikita-mk, surajnarwade, zirconias

tk8's Issues

Use a dependency manager to handle external packages

We have the following external packages imported in the code:

import (
    ...
    "github.com/spf13/cobra"
    "github.com/spf13/viper"
)

Now everyone who wishes to build the code has to fetch these packages individually, which could become a challenge as the project size and the number of imported packages increase. Therefore we should use a dependency manager (godep is recommended) to take care of this problem, so that users only have to issue a single command, i.e. dep ensure, to get all the external packages and be ready to build the code.
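
As a rough sketch of that workflow with dep (the exact steps are an assumption; see the dep documentation):

go get -u github.com/golang/dep/cmd/dep
# inside the tk8 repository root
dep init      # generate Gopkg.toml and Gopkg.lock from the current imports
dep ensure    # fetch all external packages into vendor/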

Make kube-network-plugin configurable

There are some issues with Calico, so we need to make it configurable which network plugin to use. With Flannel we currently have no issues.

TODO:

  • make inventory/sample/group_vars/k8s-cluster.yml configurable
  • set flannel as default
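
A minimal sketch of what the inventory change could look like, assuming Kubespray's kube_network_plugin variable in the file above is the relevant switch:

# switch the network plugin to flannel in the sample group_vars
sed -i 's/^kube_network_plugin:.*/kube_network_plugin: flannel/' \
  inventory/sample/group_vars/k8s-cluster.yml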

Remove/Modify auto-generation of config.yaml in empty directory

Describe the bug
When tk8 is run in a directory which does not have config.yaml, it tries to auto generate it by saying:

No default config was provided. Generating one for you.

The auto-generated config.yaml is only usable for the aws provisioner and not for the others.

To Reproduce
Steps to reproduce the behavior:

  1. Download tk8 binary in empty folder.
  2. Setup cattle-aws cluster without config.yaml: tk8 cluster install cattle-aws

Expected behavior
We have two options in this case:

  1. Instead of auto-generating config.yaml, we should tell the user to create the config.yaml before running the command since we cannot generate multiple configs for multiple provisioners.

  2. Implement the code for mapping between different config.yaml's for different provisioners.

EKS INSTALL Terraform issue

Hi,
I am trying to install an EKS cluster on AWS, following the EKS installation instructions from the TK8 repository. But when I execute the command below, I get a Terraform error.

○ → pwd
/opt/kubernetes

2019-05-02 23:09:20 ⌚ cloudyanke in /opt/kubernetes

Cloning into 'eks'...


(ASCII-art TK8 banner output omitted)

Found kubectl at /usr/bin/kubectl
2019/05/02 23:05:33 Client Version: v1.13.4

2019/05/02 23:05:33 Terraform binary exists in the installation folder, terraform version:
2019/05/02 23:05:33 Terraform v0.11.13

2019/05/02 23:05:33 starting terraform init
Terraform initialized in an empty directory!

The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
2019/05/02 23:05:33 starting terraform apply

Error: No configuration files found!

Apply requires configuration to be present. Applying without a configuration
would mark everything for destruction, which is normally not what is desired.
If you would like to destroy everything, please run 'terraform destroy' which
does not require any configuration files.

2019/05/02 23:05:33 Exporting kubeconfig file to the installation folder
2019/05/02 23:05:33 To use the kubeconfig file, do the following:
2019/05/02 23:05:33 export KUBECONFIG=~/inventory/TK8EKS/provisioner/kubeconfig
2019/05/02 23:05:33 Exporting Worker nodes config-map to the installation folder
2019/05/02 23:05:33 Creating Worker Nodes
error: no objects passed to apply
2019/05/02 23:05:33 Worker nodes are coming up one by one, it will take some time depending on the number of worker nodes you specified

○ → ls
aws-iam-authenticator config.yaml inventory provisioner terraform tk8.pem

When I investigated the common.go file, I found that the terraform init command is run from the wrong path, which is why Terraform reports that it could not find any configuration files to initialize.

2019-05-02 23:08:19 ⌚ cloudyanke in /opt/kubernetes/inventory/TK8EKS/provisioner/eks
○ → terraform init
Initializing modules...

  • module.eks
    Getting source "./modules/eks"

Initializing provider plugins...

  • Checking for available provider plugins on https://releases.hashicorp.com...
  • Downloading plugin for provider "aws" (2.8.0)...
  • Downloading plugin for provider "http" (1.1.1)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

  • provider.aws: version = "~> 2.8"
  • provider.http: version = "~> 1.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Deploy EKS with TK8 (tk8 cluster eks --create)

tk8 cluster eks --create
should install an eks cluster (implementation is already done by infinitydon)
(pls. provide darwin and linux binaries)

wget https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws
sudo cp heptio-authenticator-aws /usr/local/bin/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxx
tk8 cluster eks --create
export KUBECONFIG=kubeconfig
k get nodes

Provide OKD 3.10 support

$ tk8 cluster install openshift

Use Dave Kerr's implementation:

https://github.com/dwmkerr/terraform-aws-openshift

Enhance the implementation:

  • Make the number of nodes configurable
  • Make the admin password configurable
  • Add users with project admin and cluster-admin roles
  • Add Helm support
  • Investigate OKD 3.11 support
  • Investigate if Keycloak integration works

Deploy Azure AKS with TK8

Deploy AKS:
$ tk8 cluster aks install

Destroy AKS:
$ tk8 cluster aks destroy

Scale AKS:
$ tk8 cluster aks scale

Upgrade AKS:
$ tk8 cluster aks upgrade

Add REST API for TK8 Web, extend TK8 Web Interface

Prerequisites:

  • Terraform scripts are provided in a separate repo
  • config.yaml extended with an aks section

Add Kafka Confluent support as an add-on with tk8

Please investigate how to add Kafka Confluent & Strimzi support as an add-on with tk8.

Currently, with Rancher support, we can run Kafka on tk8-made k8s clusters with Confluent from the Rancher Catalog, but it doesn't work as it should (?).

Please provide a manual install guide first and automate it with tk8 and/or Helm later.

cert-manager seems to be broken somehow

Describe the bug

Error: UPGRADE FAILED: Internal error occurred: failed calling admission webhook "issuers.admission.certmanager.k8s.io": the server is currently unable to handle the request

To Reproduce

tk8 cluster install aws
tk8 addon install rancher

Expected behavior

Rancher should Just Work.

Additional Information

I don't even understand how cert-manager is in play here since it's not in the main.yaml.

A quick google implies to me that the cert-manager CRDs might have to be installed first which is absent from the main.yaml but I'm not sure if this is a rancher problem or a tk8 problem: cert-manager/cert-manager#1149

SSH Key is not parsed correctly if you use "@" in the filename itself

@infinitydon
As we discussed in the meetup, here is my bug ticket:
I tried to create a cluster on aws with this config:

...
   aws_ssh_keypair: "[email protected]"
...

Error message in tk8 cli:

Error: Error applying plan:

6 error(s) occurred:

* aws_instance.k8s-master: 1 error(s) occurred:

* aws_instance.k8s-master: Error launching source instance: InvalidKeyPair.NotFound: The key pair 'namedomaincom.pem' does not exist
	status code: 400, request id: e215e873-d547-4a85-80a1-a6f0d0897db6
* aws_instance.bastion-server[0]: 1 error(s) occurred:

My key location and name:

sudo docker run -it --name tk8 -v ~/.ssh/:/root/.ssh/ -v "$(pwd)":/tk8 kubernautslabs/tk8 sh
ls /root/.ssh/
[email protected]

We have our keys named like this in AWS:
"[email protected]"

Provide kube-prometheus support as an TK8 add-on

Is your feature request related to a problem? Please describe.
Provide users a way to set up kube-prometheus stack via TK8 CLI.

Describe the solution you'd like
Create a TK8 addon which will set up all the necessary components involved in kube-prometheus stack.

Add terraform state validation for Install and Upgrade operations in TK8

Describe the bug
Currently, there is not much difference between Install and Upgrade functionality. Install with modified configuration does an Upgrade as well.

To Reproduce
Steps to reproduce the behavior:

  1. Create a cluster with tk8 cluster install rke/aws.
  2. Modify config.yaml and run tk8 cluster install rke/aws.
  3. The above operation should be only an Upgrade and not Install.

Expected behavior
Install should only set up a cluster and infra if there is no current cluster with the same name.

Upgrade should only work if there is an existing cluster.
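
A minimal sketch of such a pre-flight check, assuming the provisioner's Terraform state lives in the current working directory (the path is an assumption):

# a non-empty state list suggests an existing cluster
if [ -n "$(terraform state list 2>/dev/null)" ]; then
  echo "Existing Terraform state found: treat this run as an Upgrade"
else
  echo "No Terraform state found: safe to Install"
fi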

AWS installation with CoreOS doesn't work

The tk8 installation for AWS uses CoreOS by default, which is somehow hard coded.
Changing the os from coreos to centos in config.yaml requires running the playbook manually with:

ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=centos -e bootstrap_os=centos -b --become-user=root --flush-cache

The installation with CoreOS somehow doesn't work anymore with the latest Kubespray (or our own extended version for OpenStack); after the installation, some pods remain in the ContainerCreating state:

$  k get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   calico-node-527gv              0/1       ContainerCreating   0                 12m
kube-system   calico-node-84nvz              0/1       ContainerCreating   0                 12m

and this is apparently due to:

Events:
  Type     Reason       Age               From                                                      Message
  ----     ------       ----              ----                                                      -------
  Normal   Scheduled    7m                default-scheduler                                         Successfully assigned kube-dns-7bd4d5fbb6-cths7 to ip-10-250-208-170.eu-central-1.compute.internal
  Warning  FailedMount  1m (x11 over 7m)  kubelet, ip-10-250-208-170.eu-central-1.compute.internal  MountVolume.SetUp failed for volume "kube-dns-config" : configmaps "kube-dns" is forbidden: User "system:node:kubernetes-k8s1-worker1" cannot get configmaps in the namespace "kube-system": no path found to object
  Warning  FailedMount  1m (x11 over 7m)  kubelet, ip-10-250-208-170.eu-central-1.compute.internal  MountVolume.SetUp failed for volume "kube-dns-token-mvppm" : secrets "kube-dns-token-mvppm" is forbidden: User "system:node:kubernetes-k8s1-worker1" cannot get secrets in the namespace "kube-system": no path found to object
  Warning  FailedMount  1m (x3 over 5m)   kubelet, ip-10-250-208-170.eu-central-1.compute.internal  Unable to mount volumes for pod "kube-dns-7bd4d5fbb6-cths7_kube-system(d8672a57-7b12-11e8-826a-02336ebc6866)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"kube-dns-7bd4d5fbb6-cths7". list of unmounted volumes=[kube-dns-config kube-dns-token-mvppm]. list of unattached volumes=[kube-dns-config kube-dns-token-mvppm]

We shall change the default OS type to CentOS for now, since we're using CentOS on OpenStack as well :-)

Add-On install bug

Describe the bug
If the user installs an add-on that is not available locally, the command fetches the code but does not install the add-on on the cluster. On the second run of the command it works.

To Reproduce
Steps to reproduce the behavior:

  1. delete the addons folder if present
  2. tk8 addon install rancher
  3. The command clones the repository but doesn't install it
  4. Run tk8 addon install rancher again
  5. The add-on gets installed

Expected behavior
With the first command, the add-on should be cloned and installed in one step.

Desktop (please complete the following information):

  • OS: mac
  • Version latest

Move the application logic out of cmd folder to a different package folder

Instead of having all the application logic in the cmd folder we should have a separate package to contain the business logic for better code structuring. E.g:

  • move addon.go to internal/addon/addon.go
  • move aws.go to internal/cluster/aws.go
  • move bare.go to internal/cluster/baremetal.go
  • move openstack.go to internal/cluster/openstack.go

Pull kubespray from upstream

We're using our own extended version of Kubespray, mainly for OpenStack support. OpenStack support is now available in upstream Kubespray, so please adapt init.go to pull from upstream.

Install Rancher Server with TK8 as addon and provide additional how-to

tk8 addon --rancher
should install this manifest:
https://raw.githubusercontent.com/arashkaffamanesh/tk8/master/rancher/rancher.yaml

We also need to provide a how-to about exposing the Rancher server via ingress/load balancer and write a blog post on Medium.

and the help should be extended:

Flags:
-m, --heapster Deploy Heapster
-h, --help help for addon
-l, --ltaas Deploy Load Testing As A Service
-p, --prom Deploy prometheus
-r, --rancher Deploy Rancher Server

Create an SSH key and add to the cloud provider

As suggested by Arush:
Instead of depending on the user to provide their private SSH key, we should add an option in tk8 to create an SSH key, add it to the cloud provider (I am mainly talking about AWS here, but we can look into OpenStack as well) and then use that key for our purposes, just as Flux does it.
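
A minimal sketch of how that could work for AWS (key name and path are hypothetical; the fileb:// prefix assumes AWS CLI v2):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/tk8-key -N ""
aws ec2 import-key-pair \
  --key-name tk8-key \
  --public-key-material fileb://~/.ssh/tk8-key.pub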

Remove `--config` as a global flag

Currently --config is configured as a global flag:

Global Flags:
      --config string   uses the config.yaml

Ideally, config.yaml is only required when working with the cluster aws command and not in any other case. Therefore the flag shouldn't be a global flag but should instead be specific to the aws or cluster command.

installing on amazon linux - gnupg not found

[ec2-user@ip-172-31-... ~]$ sudo yum update
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                                     | 2.1 kB  00:00:00     
amzn-updates                                                                  | 2.5 kB  00:00:00     
No packages marked for update
[ec2-user@ip-172-31-... ~]$ sudo yum install gnupg gnupg1 gnupg2
Loaded plugins: priorities, update-motd, upgrade-helper
Package gnupg-1.4.19-1.28.amzn1.x86_64 already installed and latest version
Package gnupg-1.4.19-1.28.amzn1.x86_64 already installed and latest version
Package gnupg2-2.0.28-1.30.amzn1.x86_64 already installed and latest version
Nothing to do
[ec2-user@ip-172-31-... ~]$ sudo docker build -t tk8 ./tk8/
Sending build context to Docker daemon  20.15MB
Step 1/4 : FROM ubuntu
 ---> 452a96d81c30
Step 2/4 : ARG TERRVERSION=0.11.5
 ---> Using cache
 ---> b9546c0ac7c2
Step 3/4 : COPY ./tk8 /usr/local/bin/tk8
 ---> Using cache
 ---> 568cdde3aa48
Step 4/4 : RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367 && apt-get update && apt-get install -y python-pip zip wget git && pip install ansible netaddr && wget https://releases.hashicorp.com/terraform/${TERRVERSION}/terraform_${TERRVERSION}_linux_amd64.zip && unzip terraform_${TERRVERSION}_linux_amd64.zip -d /usr/local/bin/ && rm terraform_${TERRVERSION}_linux_amd64.zip && mkdir /tk8 && chmod +x /usr/local/bin/tk8
 ---> Running in f4a7ea87a2ec
E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
The command '/bin/sh -c apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367 && apt-get update && apt-get install -y python-pip zip wget git && pip install ansible netaddr && wget https://releases.hashicorp.com/terraform/${TERRVERSION}/terraform_${TERRVERSION}_linux_amd64.zip && unzip terraform_${TERRVERSION}_linux_amd64.zip -d /usr/local/bin/ && rm terraform_${TERRVERSION}_linux_amd64.zip && mkdir /tk8 && chmod +x /usr/local/bin/tk8' returned a non-zero code: 255
[ec2-user@ip-172-31-... ~]$
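
A possible workaround, as a sketch: make sure gnupg is installed inside the image before apt-key is called, for example by prepending the following to the RUN instruction in the Dockerfile (the package choice is an assumption):

apt-get update && apt-get install -y gnupg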

Provide Pumba CE tool as an addon via TK8 CLI

Is your feature request related to a problem? Please describe.
Provide a Chaos Engineering addon via TK8 CLI (Pumba)

Describe the solution you'd like
This TK8 addon should set up Pumba and initiate chaos in the Kubernetes cluster.

Allow the use of existing VPCs

Is your feature request related to a problem? Please describe.
I'm always frustrated when [...] my rigid corporate environment has an existing VPC with security controls which they want me to use for hosting rancher.

Describe the solution you'd like
Please let me specify a VPC to install rancher into rather than creating a new VPC.

Describe alternatives you've considered
Jumping off a bridge. Becoming a chicken farmer. Going into politics.

Additional context

Flag control contradiction

Flags in general are used to report deviations from the standard. For example, you could tell the CLI where to look for a config or which OS to use.

However, the main procedures should not be controlled with flags.

For example, the command 'tk8 cluster aws -c -i -d' would be a complete contradiction in itself, and this is not obvious at first glance.

Here, CLI arguments should be used for the main actions, with flags only as additional control. For example: 'tk8 cluster aws -f /myconfig.yml create'

Don't panic when config not found

Issue: When no config file is found, the CLI exits with a panic; instead it should exit by logging a Fatal message.

[ec2-user@ip-10-0-0-137 bin]$ ./tk8 cluster aws -c
Found terraform at /usr/bin/terraform
Terraform v0.11.7

panic: fatal error config file: Config File "config" Not Found in "[/home/ec2-user /home/ec2-user/go/bin /tk8]"

goroutine 1 [running]:
github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd.glob..func2(0xae3560, 0xc420043040, 0x0, 0x1)
	/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd/aws.go:248 +0x3fb5
github.com/kubernauts/tk8/vendor/github.com/spf13/cobra.(*Command).execute(0xae3560, 0xc420043030, 0x1, 0x1, 0xae3560, 0xc420043030)
	/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/kubernauts/tk8/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xae3ee0, 0xae4140, 0xc4200e0700, 0xae4030)
	/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/spf13/cobra/command.go:852 +0x334
github.com/kubernauts/tk8/vendor/github.com/spf13/cobra.(*Command).Execute(0xae3ee0, 0x0, 0xc420117f48)
	/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd.Execute()
	/home/ec2-user/go/src/github.com/kubernauts/tk8/vendor/github.com/kubernauts/tk8/cmd/root.go:43 +0x31
main.main()
	/home/ec2-user/go/src/github.com/kubernauts/tk8/main.go:20 +0x20

Creating a general add-on implementation

Add-ons should be able to be added to the project in a simple form. The aim should be for third party developers to be able to write add-ons and use or make them available with TK8. Ideally, these add-ons can be installed and used simply by specifying a git repository.

It should also be possible for an add-on to extend the later REST API with its own endpoints; this should be taken into account in the conceptual design.

rename tk8 to tk8ctl

Is your feature request related to a problem? Please describe.
*ctl is a common naming pattern for CLI tools; tk8ctl would be a better name for the tk8 CLI :-)

Describe the solution you'd like
Rename tk8 to tk8ctl by the next release.

TK8 and OpenStack

Kazimierz Sroka wrote to me:

I'm trying to set up kubernetes cluster using https://github.com/kubernauts/tk8 at v0.4.3.

I configured everything properly, and after running tk8 cluster openstack --create Terraform completes successfully, but I cannot SSH into the bastion.
I am sure I am using the correct key (there's only one on the OpenStack, and when I create a new VM manually I can SSH properly). I am getting Permission denied when trying with root, centos, ubuntu or whatever I put in cluster.tfvars.

Could you help me debug this or point to someone who can?

I noticed that the openstack folder was removed from the master branch. Why is that?

TK8 Cattle AWS Provisioner with Terraform Rancher Provider on Linux

Describe the bug
When executing the process described here (TK8 Cattle AWS Provisioner with Terraform Rancher Provider) on Ubuntu 18.04 and CentOS Linux 7.6, both amd64, the output of the tk8 cluster install cattle-aws command says at the end that everything was deployed, yet no resources are created on AWS or in Rancher.

Additional context
TK8-v0.7.1
Terraform-v0.12.2

config.yaml file:
config.txt

tk8 cluster install cattle-aws command output:
tk8_cluster_install_cattle-aws-output.txt

Add release support through goreleaser

We need to add support for goreleaser for the next release, as it not only eases the release process but also provides various other features, such as built-in CI integration supporting all the major CI platforms, and various release formats including, but not limited to, binary, deb, and rpm.
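
A rough sketch of what the release workflow could look like once goreleaser is configured (the exact configuration is still up to us):

goreleaser init                          # scaffold a .goreleaser.yml
goreleaser release --snapshot --rm-dist  # local dry run, builds all configured targets
goreleaser release                       # real release, typically run from CI on a tagged commit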

Add interactive mode for selecting Tk8 provisioners while creating cluster

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
An interactive prompt in the CLI: the user selects a provisioner, and the config.yaml for that provisioner is applied. The user does not need to manually specify the provisioner name in the command.

Describe alternatives you've considered
NA
Additional context
NA

pip install -r kubespray/requirements.txt fails in the container

pip install -r kubespray/requirements.txt

fails in the tk8 container image:

Command "/usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-3xxUgv/bcrypt/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-Z8WEoI/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-3xxUgv/bcrypt/

which leads to:

/tk8 # tk8 cluster aws -i
Found Ansible at /usr/bin/ansible
ansible 2.5.5
  config file = /tk8/ansible.cfg
  configured module search path = ['/tk8/library']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.4 (default, May  1 2018, 11:38:09) [GCC 6.4.0]
Configuration folder already exists
coreos
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/tk8/kubespray/roles/vault/handlers/main.yml': line 44, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: unseal vault
  ^ here
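
A common remedy for the bcrypt build failure, assuming a Debian/Ubuntu base image like the Dockerfile shown earlier in this document (package names are an assumption; an Alpine-based image would need the equivalent apk packages instead), is to install a compiler and the required headers before running pip:

apt-get update && apt-get install -y build-essential python-dev libffi-dev libssl-dev
pip install -r kubespray/requirements.txt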
