KuberDock: platform to run and sell dockerized applications

KuberDock logo

KuberDock is a platform that allows users to run applications using Docker container images and to create SaaS/PaaS offerings based on these applications.

KuberDock hides the complexity of the underlying technologies from end users and admins, letting them focus on creating and using Predefined Applications and/or any dockerized app.


Features

  • Extremely simple UI/UX for both end users and admins
  • Rich API to run YAML-based declarative Predefined Applications
  • Real-time, centralized Elasticsearch-based logs for all containers
  • Complete network isolation of users from each other
  • Continuous resource-usage monitoring for Pods and the cluster nodes themselves
  • Easy SSH/SCP/SFTP access into containers
  • Complete persistent storage support with several backends: Ceph, LocalStorage, ZFS, ZFS-on-EBS
  • Ability to set resource limits per container (CPU, memory, persistent storage, transient storage)
  • Ability to expose Pods to the Internet with:
    • floating or fixed public IPs
    • a shared IP (http/https traffic)
    • ELB on AWS
    • cPanel proxy and similar
  • Automatic SSL certificate generation with Let's Encrypt
  • AWS support
  • Simple cluster upgrades with a single command
  • Ability to control overselling parameters for CPU and memory
  • Backups for the KuberDock cluster (master and nodes)
  • Backups for users' pods and persistent volumes
  • Improved security with SELinux

Built-in integrations with:

  • WHMCS billing system
  • Various control panels like cPanel, Plesk, DirectAdmin
  • DNS management systems like CloudFlare, cPanel, AWS Route 53

Under the hood


How the project is run

KuberDock is now free open-source software and currently has no commercial support from CloudLinux.

However, CloudLinux will keep hosting the RPM repositories for a minimal reasonable time, until the project has completely moved to GitHub with all its dependencies and CI.

Deploy production cluster

Note: You may use this software in production only at your own risk

To install the KuberDock package that is already in the CloudLinux stable repositories, follow the "Master installation guide" or "Install KuberDock at Amazon Web Services" from the docs folder.

To install a custom build, use deploy.sh from the same commit as your KuberDock package, and run it in the folder containing the RPM package. The deploy script will pick up that package instead of any found in the repositories. For example:

[root@your-kd-master-host] ls
deploy.sh
kuberdock-1.5.2-1.el7.noarch.rpm
[root@your-kd-master-host] bash ./deploy.sh --some-needed-options

Note: This process might be simplified and reworked in the future to remove any dependencies on CloudLinux repos and to build things in place or download them from elsewhere automatically.

Contributing to KuberDock

If you are going to hack on KuberDock, you should know a few things:

  1. This is an Awesome Idea! :)
  2. We are open to discussions and PRs. Feel free to open GitHub issues.
  3. You can contact some of the contributors for help. See CONTRIBUTORS.md

Deploy cluster for development

KuberDock ships with scripts that automatically provision a KD cluster and perform its preliminary configuration.

Note: The RPM package repositories are still hosted by CloudLinux, but this support will eventually be discontinued, so appropriate PRs are welcome ;)

Release branches (like 1.5.2) are intended to be production ready and should receive only bug fixes.

Master branch should be stable enough to use for development and testing purposes but may contain some new features with bugs.

Development branch is experimental and could be unstable.

See also the versioning policy.

Requirements:

A KuberDock development cluster can be created in VMs with Vagrant, using either VirtualBox or OpenNebula.

Note: If you need to work on more than one cluster at a time, make a separate clone of the repo, because Vagrant does not support switching clusters in place. Alternatively, destroy the previous cluster (vagrant destroy -f) and create a new one.

If you are going to use OpenNebula, make sure you have configured a password-less SSH key in it.

Also, for OpenNebula clusters it is recommended (but not required) to use our Docker-wrapped version of Vagrant, because recent Vagrant releases often break backward compatibility in various places. However, this approach has its own limitations:

  • No VirtualBox support. The only cross-platform way to implement it would be reverse SSH (http://stackoverflow.com/a/19364263/923620), which we have not implemented yet.
  • vagrant global-status will not show all clusters, since each Vagrant instance is isolated in its own container.

In case of the Docker-wrapped Vagrant you need:

  • docker 1.11 (or later) running
  • export PATH=dev-utils:$PATH

In case of native Vagrant (either for VirtualBox or OpenNebula) you will need:

  • Vagrant 1.8.4+
  • vagrant-gatling-rsync plugin (installation is required, but usage is optional: use it only if you are not satisfied with the performance of Vagrant's built-in "rsync-auto")
    • vagrant plugin install vagrant-gatling-rsync
  • rsync 2.6.9 or later (check with rsync --version)
  • Ansible 2.0.2.0 or later (check with ansible --version)
  • python2-netaddr and python2-passlib (Python 2!) for Ansible filters
  • OpenNebula plugin (optional; needed only if you provision into OpenNebula)
    • vagrant plugin install opennebula-provider --plugin-version 1.1.2
  • VirtualBox (optional; needed only if you provision into VirtualBox)
    • vboxmanage --version (v5.0.18r106667 or later)
Developer Flow (KD_INSTALL_TYPE=dev):
    git clone https://github.com/cloudlinux/kuberdock-platform
    cd AppCloud
    cp dev-utils/dev-env/kd_cluster_settings.sample ~/my_cluster_settings
    # Edit my_cluster_settings:
    # - Set KD_INSTALL_TYPE=dev.
    # - Set KD_NEBULA_TEMPLATE to one of the templates predefined for this purpose
    # Customize other settings if needed.
    # Import settings
    source ~/my_cluster_settings
    # Build cluster (run from AppCloud/ dir)
    vagrant up --provider=opennebula && vagrant provision
    # (for VirtualBox it's just "vagrant up")
    # Done
    # Find KD IP in deploy output, access it with creds admin/admin
    # For ssh use:
    vagrant ssh kd_master
    vagrant ssh kd_node1
    ...
    # Done

What it does:

  • provisions a few VMs in your VirtualBox or OpenNebula
  • builds the "kuberdock.rpm", "kdctl.rpm", and "kcli.rpm" RPMs inside the master
  • runs deploy.sh
  • resets the password to "admin"
  • runs the WSGI app in a tmux session (tmux at to attach); to restart the tmux session use run_wsgi_in_tmux.sh
  • adds nodes
  • turns off the billing dependency
  • creates test users
  • creates an IPPool
  • adds all Predefined Apps from GitHub to the KD cluster
  • sets up everything needed to run unit tests inside the master
  • some more dev-specific tune-ups

Continuous code syncing to OpenNebula:

  • Way 1: vagrant rsync-auto. May be less performant than Way 2, but it works and is more stable. It is also built in, so no plugins are required.
  • Way 2: vagrant gatling-rsync-auto. May be slow for interactive development (about 6 seconds per sync). Pros: in-place code editing. Cons: asynchronous.
  • Way 3: sshfs. Pros: blocking, so it suits interactive coding. Cons: not in-place; you edit code in a mount point, not in the original repo.

Unittests

The best way is to run them in Docker:

    # with tox (make sure it is installed first)
    (venv)AppCloud tox -e unit
    # or directly:
    (venv)AppCloud bash ./_run_tests_in_docker.sh

Front-end stuff

See this README.md.

Integration tests

Integration tests were tightly coupled to CloudLinux infrastructure, so after the move to GitHub and the rapid code changes they no longer work out of the box. Reworking them is the number one TODO. However, it is not easy and requires substantial infrastructure (an OpenNebula cloud, a Ceph cluster, some AWS if needed, etc.) and/or rework.

The tests were developed to use OpenNebula as the VM provider, but some of them (and that is good news) can be run locally with a VirtualBox-based cluster.

From a high-level perspective they do the following:

  1. Create one or more pipelines (each pipeline is a KuberDock cluster consisting of a few VMs)
  2. Deploy KuberDock in each pipeline with some configuration (e.g. Ceph as persistent storage, or LocalStorage, etc.)
  3. Provision test users, Pods, etc.
  4. Run the tests
  5. Tear down the cluster (or leave it as is if tests failed)
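The five steps above can be sketched in miniature. This is an illustrative sketch only; run_pipeline and the other names here are hypothetical stand-ins, not the project's real test API:

```python
# Illustrative sketch of the pipeline flow; all names are hypothetical
# stand-ins, not KuberDock's real test harness.

def run_pipeline(name, storage_backend, tests):
    steps = [
        "create cluster '%s'" % name,                  # 1. a pipeline is a KD cluster of VMs
        "deploy KuberDock with %s" % storage_backend,  # 2. configuration, e.g. Ceph or LocalStorage
        "provision test users and Pods",               # 3.
    ]
    failed = [t.__name__ for t in tests if not t()]    # 4. run the tests
    # 5. tear down on success; keep the cluster alive for debugging on failure
    steps.append("teardown" if not failed else "left running for debugging")
    return steps, failed

def test_ok():
    return True

steps, failed = run_pipeline("main", "LocalStorage", [test_ok])
print(steps[-1])  # "teardown", since every test passed
```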

A typical workflow on a local cluster looks like this:

    (venv)AppCloud source ./your-kd_cluster_settings
    (venv)AppCloud source ./your-kuberdock-ci-env
    # run all tests and pipelines, or a comma-separated list of them
    # if BUILD_CLUSTER=0, the current cluster will be used, but make sure
    # it has the appropriate configuration (incl. number of nodes, rHosts, kube types, etc.)
    (venv)AppCloud BUILD_CLUSTER=1 python run_integration_tests.py --pipelines main --all-tests --live-log

Pipelines can be started in parallel.
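Since each pipeline builds its own isolated cluster, pipelines are natural candidates for concurrent execution. A minimal sketch with Python's standard thread pool (only "main" appears in the example above; the other pipeline names, and build_and_test itself, are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def build_and_test(pipeline):
    # Hypothetical stand-in for a full cluster build + test run of one pipeline.
    return "%s: finished" % pipeline

# "main" is from the run_integration_tests.py example; the rest are made up.
pipelines = ["main", "ceph", "aws"]
with ThreadPoolExecutor(max_workers=len(pipelines)) as pool:
    # pool.map preserves input order in its results
    results = list(pool.map(build_and_test, pipelines))
print(results)
```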

TODO

To make the tests work, we need to fix at least these files to use correct values from configuration files and/or environment variables instead of hardcoded ones:

dev-utils/dev-env/ansible/roles/master/defaults/main.yml
dev-utils/dev-env/ansible/roles/common/vars/main.yml
dev-utils/dev-env/ansible/roles/node/tasks/main.yml
dev-utils/dev-env/ansible/roles/rhost/tasks/routes.yml
dev-utils/dev-env/ansible/roles/whmcs/tasks/main.yml
dev-utils/dev-env/ansible/roles/whmcs/vars/main.yml
dev-utils/dev-env/Vagrantfile
tests_integration/assets/cpanel_credentials.json
dev-utils/dev-env/ansible/roles/plesk/defaults/main.yml
dev-utils/dev-env/ansible/roles/whmcs/defaults/main.yml
dev-utils/dev-env/ansible/roles/common/tasks/ceph.yml
kuberdock-ci-env
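One low-effort shape for the refactor this TODO describes is to wrap every hardcoded value in an environment-variable lookup with a default. A sketch only; the setting name KD_CEPH_MONITORS and its default value are made-up examples, not real project settings:

```python
import os

def setting(name, default):
    """Prefer an environment variable over a hardcoded default."""
    return os.environ.get(name, default)

# Made-up setting name and default, purely for illustration.
os.environ.pop("KD_CEPH_MONITORS", None)  # ensure the default is used in this demo
ceph_monitors = setting("KD_CEPH_MONITORS", "10.0.0.1:6789")
print(ceph_monitors)  # "10.0.0.1:6789"
```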

Licensing

KuberDock's own code is licensed under the GPL License, Version 2.0 (see LICENSE for the full license text), but some parts and dependencies may use their own licenses, and for these components we include their licenses in this repo as well.

  • Kubernetes AWS deploy scripts - Apache 2.0
  • pyasn - BSD
  • fonts - Apache 2.0
  • mocha-phantomjs - MIT
  • paramiko-expect - MIT

kuberdock-platform's People

Contributors

aborilov, aleks-v-k, bliss, cloudlinuxadmin, demianyk, histrio, jeffmatson, max-lobur, mystic-mirage, rura, sedpro, ssergiienko, sysradium, telepenin, tyzhnenko, wncm


kuberdock-platform's Issues

Native CentOS installation

Isn't there a way to install KuberDock-Platform on a native Linux OS outside of AWS? We would also appreciate a dedicated gitter.im channel for KuberDock, for discussion and community engagement.

Billiard and Celery installation

Hello guys, my name is Ilias. I have spent a long time trying to fix this error with no success. I reinstalled my VPS many times because I could not make KuberDock work, and I even installed the conflicting packages on their own, but still no success. Is there any possible solution to fix this?
Regards

Error: Package: 1:python-celery-3.1.19-1.el7.cloudlinux.noarch (kube)
Requires: python-billiard < 1:3.4
Installing: 1:python2-billiard-3.5.0.5-2.el7.x86_64 (epel)
python-billiard = 1:3.5.0.5-2.el7

How to install nodes manually

Hello

I installed the master server and tried to add nodes, but the node status has been stuck at "pending" for two days.

When I tried to delete the node, I got:

Failed to clean Local storage volumes on the node. You have to clean it manually if needed: Failed to clean storage on node: /usr/bin/python2: No module named node_storage_manage

If it's possible, can I install the nodes manually?

Installation fails due to conflicting urllib3 version

Hi, users are unable to run kuberdock-platform due to a dependency conflict with the urllib3 package. As shown in the following full dependency graph of kuberdock-platform, kuberdock-platform requires urllib3==1.10.2, while requests==2.22.0 requires urllib3>=1.21.1,<1.26.

According to pip's "first found wins" installation strategy, urllib3==1.10.2 is the version actually installed. However, urllib3==1.10.2 does not satisfy urllib3>=1.21.1,<1.26.
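The conflict is easy to see by comparing version tuples. This is a simplified illustration; real code should use a proper version library such as packaging.version rather than bare tuples:

```python
# Simplified illustration of the conflict; real code should use
# packaging.version for correct version comparison.
pinned = (1, 10, 2)                    # urllib3==1.10.2, pinned by kuberdock-platform
lower, upper = (1, 21, 1), (1, 26, 0)  # requests needs urllib3>=1.21.1,<1.26
satisfies = lower <= pinned < upper
print(satisfies)  # False: the pinned version violates requests' requirement
```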

Dependency tree:

kuberdock-platform(version range:)
| +-alembic(version range:==0.7.6)
| +-amqp(version range:==1.4.9)
| +-anyjson(version range:==0.3.3)
| +-argparse(version range:==1.2.1)
| +-billiard(version range:==3.3.0.18)
| +-bitmath(version range:==1.2.34)
| +-blinker(version range:==1.3)
| +-boto(version range:==2.38)
| +-celery(version range:==3.1.15)
| +-cerberus(version range:==0.9.1)
| +-click(version range:>=6.3)
| +-cloudflare(version range:==1.1.5)
| +-ecdsa(version range:==0.11)
| +-elasticsearch(version range:>=1.0)
| +-fabric(version range:==1.10.2)
| +-flask(version range:==0.10.1)
| | +-Werkzeug(version range:>=0.7)
| | +-Jinja2(version range:>=2.4)
| | +-itsdangerous(version range:>=0.21)
| +-flask-assets(version range:==0.10)
| | +-Flask(version range:>=0.8)
| | | +-Werkzeug(version range:>=0.15)
| | | +-Jinja2(version range:>=2.10.1)
| | | +-itsdangerous(version range:>=0.24)
| | | +-click(version range:>=5.1)
| | +-webassets(version range:>=0.10)
| +-flask-httpauth(version range:==2.3.0)
| +-flask-influxdb(version range:==0.1)
| +-flask-login(version range:==0.2.11)
| | +-Flask(version range:)
| | | +-Werkzeug(version range:>=0.15)
| | | +-Jinja2(version range:>=2.10.1)
| | | +-itsdangerous(version range:>=0.24)
| | | +-click(version range:>=5.1)
| +-flask-mail(version range:==0.9.1)
| | +-Flask(version range:)
| | | +-Werkzeug(version range:>=0.15)
| | | +-Jinja2(version range:>=2.10.1)
| | | +-itsdangerous(version range:>=0.24)
| | | +-click(version range:>=5.1)
| | +-blinker(version range:)
| +-flask-migrate(version range:==1.4.0)
| | +-Flask(version range:>=0.9)
| | | +-Werkzeug(version range:>=0.15)
| | | +-Jinja2(version range:>=2.10.1)
| | | +-itsdangerous(version range:>=0.24)
| | | +-click(version range:>=5.1)
| | +-Flask-SQLAlchemy(version range:>=1.0)
| | +-alembic(version range:>=0.6)
| | | +-SQLAlchemy(version range:>=0.9.0)
| | | +-Mako(version range:)
| | | +-python-editor(version range:>=0.3)
| | | +-python-dateutil(version range:)
| | | +-SQLAlchemy(version range:>=0.6.0)
| | | +-SQLAlchemy(version range:>=0.7.3)
| | | +-SQLAlchemy(version range:>=0.7.6)
| | +-Flask-Script(version range:>=0.6)
| | | +-Flask(version range:)
| | | | +-Werkzeug(version range:>=0.15)
| | | | +-Jinja2(version range:>=2.10.1)
| | | | +-itsdangerous(version range:>=0.24)
| | | | +-click(version range:>=5.1)
| +-flask-script(version range:==2.0.5)
| | +-Flask(version range:)
| | | +-Werkzeug(version range:>=0.15)
| | | +-Jinja2(version range:>=2.10.1)
| | | +-itsdangerous(version range:>=0.24)
| | | +-click(version range:>=5.1)
| +-flask-sqlalchemy(version range:==2.0)
| | +-Flask(version range:>=0.10)
| | | +-Werkzeug(version range:>=0.15)
| | | +-Jinja2(version range:>=2.10.1)
| | | +-itsdangerous(version range:>=0.24)
| | | +-click(version range:>=5.1)
| | +-SQLAlchemy(version range:)
| +-gevent(version range:==1.0.2)
| +-greenlet(version range:==0.4.7)
| +-influxdb(version range:==0.1.13)
| | +-requests(version range:>=1.0.3)
| | | +-chardet(version range:>=3.0.2,<3.1.0)
| | | +-idna(version range:>=2.5,<2.9)
| | | +-urllib3(version range:>=1.21.1,<1.26)
| | | +-certifi(version range:>=2017.4.17)
| +-ipaddress(version range:==1.0.7)
| +-itsdangerous(version range:==0.24)
| +-jinja2(version range:==2.7.3)
| | +-markupsafe(version range:)
| +-kombu(version range:==3.0.35)
| +-markupsafe(version range:==0.23)
| +-nose(version range:==1.3.4)
| +-paramiko(version range:==1.15.2)
| +-psycogreen(version range:==1.0)
| +-psycopg2(version range:==2.5.4)
| +-pyasn1(version range:==0.1.9)
| +-pycrypto(version range:==2.6.1)
| +-pyopenssl(version range:==16.2.0)
| +-python-dateutil(version range:==2.4.2)
| +-python-etcd(version range:==0.4.3)
| | +-urllib3(version range:>=1.7.1)
| | +-dnspython3(version range:)
| +-python-nginx(version range:)
| +-pytz(version range:==2014.7)
| +-pyyaml(version range:)
| +-raven(version range:>=5.12)
| +-redis(version range:==2.10.3)
| +-requests(version range:==2.4.3)
| +-simple-rbac(version range:==0.1.1)
| +-simplejson(version range:==3.8.2)
| +-sqlalchemy(version range:==0.9.7)
| +-sse(version range:==1.2)
| +-unidecode(version range:==0.04.17)
| +-urllib3(version range:==1.10.2)
| +-webassets(version range:==0.10.1)
| +-websocket-client(version range:)
| +-werkzeug(version range:==0.9.6)
| +-wsgiref(version range:==0.1.2)

Thanks for your help.
Best,
Neolith
