kubevirt-ansible's Introduction

Warning: This repository is obsolete. The recommendation is to use kubernetes.core for virtual machine management.

KubeVirt Ansible

Tools to provision resources, deploy clusters and install KubeVirt.

Overview

KubeVirt Ansible consists of a set of Ansible playbooks that deploy a fully functional virtual machine management add-on for Kubernetes: KubeVirt. Optionally, a Kubernetes or OpenShift cluster can also be configured.

Contents

  • automation/: CI scripts to verify the functionality of playbooks.
  • playbooks/: Ansible playbooks to provision resources, deploy a cluster and install KubeVirt for various scenarios.
  • roles/: Roles to use in playbooks.
  • vars/: Variables to use in playbooks.
  • inventory: A template for the cluster and nodes configuration.
  • requirements.yml: A list of required Ansible-Galaxy roles to use in playbooks.
  • stdci.yaml: A configuration file for CI system.

Usage

Deploy

To deploy KubeVirt on an existing OpenShift cluster, run the commands below. For more information on clusters and other deployment scenarios, see the playbooks instructions.

oc login -u <admin_user> -p <admin_password>

ansible-playbook -i localhost playbooks/kubevirt.yml -e@vars/all.yml

Note: Check default variables in vars/all.yml and update them if needed.

E2E Testing

  1. Ensure it is possible to log in to the cluster:
oc login
  2. Compile the tests from the tests directory inside the Docker container and copy them to the kubevirt-ansible/_out directory:
make build-tests
  3. Run all the e2e tests with the ~/.kube/config file:
make test

If you'd like to run only specific tests, you can leverage the ginkgo command-line options as follows (to run a specified suite):

FUNC_TEST_ARGS='-ginkgo.focus=sanity_test -ginkgo.regexScansFilePath' make test

or you can pass it to tests via:

./_out/tests/<name>.test -kubeconfig=your_kubeconfig -tag=kubevirt_images_tag -prefix=kubevirt -test.timeout 60m

Note: To test the PVC's storage.import.endpoint with other images, use the STREAM_IMAGE_URL environment variable:

export STREAM_IMAGE_URL=<the_image_url>

Questions? Help? Ideas?

Stop by the #kubevirt chat channel on freenode IRC.

Contributing

Please see the contributing guidelines for information regarding the contribution process.

Automation & Testing

Please check the CI automation guidelines for information on playbooks verification.

kubevirt-ansible's Issues

Why does the playbook need to remove the firewalld and NetworkManager packages?

I was preparing a Kubernetes cluster and deploying KubeVirt on my Fedora 27 workstation when my system crashed, because the playbook removes NetworkManager and all its dependencies. I know it's better to use a server to run the playbooks, but it would be better to be able to run them on a user's workstation, especially for those who are new to the project.

Role 'node' was not found when deploying a kube cluster

# ansible-playbook -i inventory1 playbooks/cluster/kubernetes/config.yml 
ERROR! the role '/root/git/kubevirt-ansible/playbooks/cluster/kubernetes/roles/node' was not found in /root/git/kubevirt-ansible/playbooks/cluster/kubernetes/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/root/git/kubevirt-ansible/playbooks/cluster/kubernetes

#105 should fix the issue.

Add an extra 50G disk to each lago VM

In order to support a CNS deployment, the hosts need to have an extra, unused disk that can be given to gluster. Can we update the automation so that VMs are created with this added disk? The exact size is negotiable, but I'd rather have too much (since it is thin provisioned) than see random failures when we run out of space.

Accelerate CI

  • Cache OpenShift containers on the Jenkins slaves.
  • Run the test suite in parallel.
  • Don't run a full system update; update just the required packages.
  • Install only the needed dependencies; for example, when running install-kubevirt-release.yml there is no need to install andrewrothstein.go-dev from Ansible Galaxy.
  • Add a profiler to Ansible.
  • Use std-ci yum repos mirrors #82.
  • Don't run tests if only the docs changed #81.

Update Readme, Contributing and other doc pages

After the repo reorganization (issue #32), the current repo documentation can be organized as follows:

/kubevirt-ansible
     README.md  (1)
     CONTRIBUTING.md (2)
     /playbooks
           README.md (3)

(1) main Readme.md may have the following content:

  • Title and welcome
  • Overview - short description of the repo content
  • Contents - repo structure description
  • Usage - a link to the deployment readme file for details.
  • Contributing - a link to the contributing pages
  • QUESTIONS? HELP? IDEAS? with a reference to a Freenode and Google Group
  • Extra Links

See [1] as an example
[1] https://github.com/RedHatQE/rhui3-automation/blob/master/README.md

(2) Contributing.md

As it is in PR #74 plus

  • Reference to the main Readme for the description of the repo structure and playbooks organization
  • Examples of Functional Tests or simply a link to a simple functional test or two with descriptive comments
  • Instructions on how to run tests locally. These could instead go into the playbooks Readme.md (3) on deployment and usage, which would be a better place for them, and Contributing could then link to it.

(3) Readme.md in playbooks

Most of PR #71, plus more details which will follow from the repo reorganization, the writing of more playbooks, and the introduction of tests.

Use a top level variables file

The goal of a top level variables file is to provide the user with a single file that lists the top level variables of kubevirt-ansible. The variables are mostly playbook level variables that will have a significant impact on how playbooks will run. Some variables listed here can also be variables that users are likely to change or variables we want a user to be aware of.

### Cluster ###
cluster: openshift
namespace: kube-system

### KubeVirt ###
manifest_version: release
docker_tag: latest

### Storage ###
enable_cinder_ceph: false
enable_gluster: false

kubectl dependency when installing KubeVirt on an existing cluster

After a successful installation of an OpenShift cluster (3.7), following the Readme, I ran into:

>> ansible-playbook -i localhost playbooks/kubevirt.yml
<...>

TASK [kubevirt : Create kube-system namespace] *********************************************************
Wednesday 28 February 2018  09:57:19 +0100 (0:00:00.254)       0:00:00.361 **** 
 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}.
Found: ns.stdout != "{{ namespace }}"

fatal: [localhost]: FAILED! => {
    "changed": true,
    "cmd": "kubectl create namespace kube-system",
    "delta": "0:00:00.001448",
    "end": "2018-02-28 09:57:19.575954",
    "rc": 127,
    "start": "2018-02-28 09:57:19.574506"
}

STDERR:

/bin/sh: kubectl: command not found


MSG:

non-zero return code

Should I have kubectl installed? If so, it should be documented, but the best option is to install it from the playbook. After installing origin-clients via dnf, the playbook proceeds normally.
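One way to make the dependency explicit (a sketch, not the repository's actual code; task names are illustrative) is a pre-flight check that fails with a clear message instead of a raw "command not found":

```yaml
# Sketch: fail early with a helpful message when kubectl is missing.
- name: Verify kubectl is available
  hosts: localhost
  tasks:
    - name: Check for kubectl on PATH
      command: which kubectl
      register: kubectl_check
      changed_when: false
      failed_when: false

    - name: Fail with a helpful message if kubectl is missing
      fail:
        msg: "kubectl not found; install it (e.g. the origin-clients package) before running this playbook."
      when: kubectl_check.rc != 0
```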

Link for docker-storage-setup-defaults is broken

It currently points to https://github.com/kubevirt/kubevirt-ansible/blob/master/docker-storage-setup-defaults; I think it should link to https://github.com/openshift/openshift-ansible-contrib/blob/master/roles/docker-storage-setup/defaults/main.yaml instead.

'kubevirt_template_dir' is undefined in playbooks/kubevirt.yml

Following Readme instructions:

>> ansible-playbook -i localhost playbooks/kubevirt.yml
<...>

TASK [kubevirt : Check for kubevirt.yaml template in {{ kubevirt_template_dir }}] *****************************************************************************************************************************
Wednesday 28 February 2018  14:59:04 +0100 (0:00:00.885)       0:00:04.578 **** 
fatal: [localhost]: FAILED! => {}

MSG:

The task includes an option with an undefined variable. The error was: 'kubevirt_template_dir' is undefined

The error appears to have been in '/home/igulina/git_projects/kubevirt-ansible/roles/kubevirt/tasks/provision.yaml': line 28, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: Check for kubevirt.yaml template in {{ kubevirt_template_dir }}
  ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes.  Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"

exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
exception: 'kubevirt_template_dir' is undefined

deploy-kubernetes.yml fails on ansible >=2.4.3

>> ansible-playbook -i inventory deploy-kubernetes.yml
<...>
TASK [/home/igulina/git_projects/kubevirt-ansible/kubernetes/roles/node : deploy host as kubernetes node] *********************************************************************************************************
Friday 23 February 2018  12:32:44 +0100 (0:00:08.917)       0:07:25.616 ******* 
fatal: [XYZ]: FAILED! => {
    "changed": true,
    "cmd": [
        "kubeadm",
        "join",
        "--token",
        "abcdef.1234567890123456",
        "XYZ:6443",
        "--skip-preflight-checks"
    ],
    "delta": "0:00:01.197376",
    "end": "2018-02-23 11:32:52.838288",
    "rc": 3,
    "start": "2018-02-23 11:32:51.640912"
}

STDOUT:

[preflight] Running pre-flight checks.


STDERR:

Flag --skip-preflight-checks has been deprecated, it is now equivalent to --ignore-preflight-errors=all
	[WARNING Hostname]: hostname "igulina-master" could not be reached
	[WARNING Hostname]: hostname "igulina-master" lookup igulina-master on XYZ: no such host
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING FileExisting-crictl]: crictl not found in system path
discovery: Invalid value: "": using token-based discovery without DiscoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue


MSG:

non-zero return code

	to retry, use: --limit @/home/igulina/git_projects/kubevirt-ansible/deploy-kubernetes.retry

PLAY RECAP ********************************************************************************************************************************************************************************************************
XYZ                : ok=23   changed=15   unreachable=0    failed=1   

>> ansible --version
ansible 2.4.3.0
  config file = /home/igulina/git_projects/kubevirt-ansible/ansible.cfg
  configured module search path = ['/home/igulina/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.3 (default, Oct  9 2017, 12:07:10) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)]

The deploy playbook doesn't populate KubeVirt pods

After running the deployment playbook, I would expect to see KubeVirt's pods populated, ready for virtualization.

on behalf of @cynepco3hahue

@lukas-bednar this part is currently problematic because we need to configure the security context for the libvirt and virt-handler daemon sets; the patch that should fix it is kubevirt/kubevirt#418.
In addition, OpenShift does not support CustomResourceDefinition; with this patch we will get rid of it:
kubevirt/kubevirt#355

copied from https://github.com/cynepco3hahue/kubevirt-ansible/issues/6

Stop using hosts:all

In several locations some plays are run against hosts: all. This presents a problem when more than just the k8s/openshift cluster is managed in the inventory, e.g. when there are ceph or gluster nodes in place or when the setup is being run on a local hypervisor and the hypervisor is in the inventory as "hypervisor" and should not be altered like the k8s cluster nodes.
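A minimal sketch of the fix, assuming the inventory defines dedicated cluster groups (the group names here are illustrative, not the repository's actual inventory layout): target an explicit group pattern instead of all, so hosts such as a local hypervisor or gluster nodes are never touched:

```yaml
# Sketch: run only against cluster nodes, not every inventory host.
# "masters" and "nodes" are assumed group names; adjust to the inventory.
- hosts: masters:nodes
  roles:
    - kubevirt
```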

Configure oc admin user for testing failed because of a missing certificate

TASK [Configure oc admin user for testing] *******************************************************************************************************
Friday 02 March 2018  18:42:33 +0800 (0:00:00.793)       0:08:26.062 ********** 
fatal: [dhcp-14-107.nay.redhat.com]: FAILED! => {
    "changed": true, 
    "cmd": "user_name=\"test_admin\"\n oc login -u system:admin\n oc get user \"$user_name\" || oc create user \"$user_name\"\n oc adm policy add-cluster-role-to-user cluster-admin \"$user_name\"", 
    "delta": "0:00:00.715857", 
    "end": "2018-03-02 18:42:33.996095", 
    "rc": 1, 
    "start": "2018-03-02 18:42:33.280238"
}

STDERR:

error: The server uses a certificate signed by unknown authority. You may need to use the --certificate-authority flag to provide the path to a certificate file for the certificate authority, or --insecure-skip-tls-verify to bypass the certificate check and use insecure connections.
Unable to connect to the server: x509: certificate signed by unknown authority
Unable to connect to the server: x509: certificate signed by unknown authority
Unable to connect to the server: x509: certificate signed by unknown authority


MSG:

non-zero return code

Adding --config /etc/origin/master/admin.kubeconfig to every command, or copying /etc/origin/master/admin.kubeconfig to /root/.kube/config, fixes the problem:

oc login -u system:admin --config /etc/origin/master/admin.kubeconfig
        oc get user "$user_name" --config /etc/origin/master/admin.kubeconfig || oc create user "$user_name" --config /etc/origin/master/admin.kubeconfig
        oc adm policy add-cluster-role-to-user cluster-admin "$user_name" --config /etc/origin/master/admin.kubeconfig

Playbook deploy-openshift.yml doesn't work with ansible >= 2.4.1

When trying to run the playbook with Ansible 2.4.1 and above (I tried 2.4.1 and 2.5.0), it fails. The last version that worked for me was ansible-2.4.0; it also works with 2.3.1.

[lbednar@lbednar kubevirt-ansible]$ ansible-playbook -i inventory.my -e "openshift_ansible_dir=openshift-ansible" deploy-openshift.yml 
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in 'deploy-openshift.yml': line 13, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

        name: "{{ openshift_ansible_dir }}/roles/openshift_facts"
    - name: Load openshift facts
      ^ here

Update README to reflect latest supported platform version

Currently README states that supported platforms are:
CentOS Linux release 7.3.1611 (Core), OpenShift 3.7 and Ansible 2.3.1

We should probably create a new testing matrix table and include the currently supported versions
and the ones which are in progress.

e.g:

Supported        In Development
CentOS 7.4       N/A
OpenShift 3.7    OpenShift 3.9
Ansible 2.3.1    Ansible 2.4.2

OpenShift cluster deployment fails with 'users "test_admin" not found'

Installing OpenShift 3.7 cluster fails with "Couldn't find test_admin user".

>>  ansible-playbook -i inventory \
    -e "openshift_ansible_dir=openshift-ansible/ \
    openshift_playbook_path=playbooks/byo/config.yml \
    openshift_ver=3.7" playbooks/cluster/openshift/config.yml

<...>

TASK [Configure oc admin user for testing] *************************************************************
Tuesday 27 February 2018  20:55:36 +0100 (0:00:10.566)       1:44:28.987 ****** 
changed: [10.8.241.23] => {
    "changed": true,
    "cmd": "user_name=\"test_admin\"\n oc login -u system:admin\n oc get user \"$user_name\" || oc create user \"$user_name\"\n oc adm policy add-cluster-role-to-user cluster-admin \"$user_name\"",
    "delta": "0:00:03.117986",
    "end": "2018-02-27 19:55:54.181482",
    "rc": 0,
    "start": "2018-02-27 19:55:51.063496"
}

STDOUT:

Logged into "https://172.16.216.21:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-service-catalog
    kube-system
    logging
    openshift
    openshift-ansible-service-broker
    openshift-infra
    openshift-node

Using project "default".
user "test_admin" created
cluster role "cluster-admin" added: "test_admin"


STDERR:

Error from server (NotFound): users "test_admin" not found


PLAY RECAP *********************************************************************************************
10.8.241.23                : ok=633  changed=125  unreachable=0    failed=0   
localhost                  : ok=15   changed=0    unreachable=0    failed=0   


INSTALLER STATUS ***************************************************************************************
Initialization             : Complete
Health Check               : Complete
etcd Install               : Complete
NFS Install                : Complete
Master Install             : Complete
Master Additional Install  : Complete
Node Install               : Complete
Hosted Install             : Complete
Service Catalog Install    : Complete

Tuesday 27 February 2018  20:55:54 +0100 (0:00:18.364)       1:44:47.351 ****** 
=============================================================================== 
openshift_master : Update journald setup ------------------------------------------------------ 105.99s
openshift_master_certificates : Check status of master certificates ---------------------------- 80.80s
openshift_storage_nfs : remove exports from /etc/exports --------------------------------------- 70.62s
openshift_storage_nfs : Ensure export directories exist ---------------------------------------- 70.59s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 69.45s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 66.85s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 63.54s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 62.90s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 62.62s
tuned : Ensure files are populated from templates ---------------------------------------------- 58.77s
openshift_node_certificates : Check status of node certificates -------------------------------- 52.92s
Ensure openshift-ansible installer package deps are installed ---------------------------------- 47.25s
openshift_hosted : Create default projects ----------------------------------------------------- 41.72s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ---------------- 36.10s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ---------------- 36.06s
openshift_master : Add iptables allow rules ---------------------------------------------------- 35.40s
openshift_node : Add iptables allow rules ------------------------------------------------------ 35.21s
openshift_master : Create the ha systemd unit files -------------------------------------------- 29.49s
Run health checks (install) - EL --------------------------------------------------------------- 27.59s
etcd : file ------------------------------------------------------------------------------------ 27.15s

Come up with testing process

We need to think how to test our Ansible playbooks.

There were suggestions to go with Vagrant, but feel free to suggest other approaches.

Ensure consistency of nodeSelector over the whole deployment process

In order to deploy KubeVirt pods on the master node, we need to set nodeSelector in the manifests.
Right now the playbook takes the FQDN of the master node and uses it as the selector.

On the other hand, openshift-ansible allows you to change the name of a node in the inventory file, and if this happens, the assumption above breaks.

So we need to make sure that we have the proper label assigned to the node, or take the openshift_host variable into consideration.

In addition, I would change this line in the inventory file:
https://github.com/kubevirt-incubator/kubevirt-ansible/blob/34e790e827f130909b652c9ef76130f08e237113/inventory#L22

It is required for the master node to be schedulable; without that, the KubeVirt pods will not be able to run.
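A hedged sketch of the label-based approach: select on a stable node label rather than the master's FQDN. The label key below is the standard kubeadm-era master role label; a custom label assigned by the playbook would work equally well:

```yaml
# Sketch: a manifest fragment selecting the master by role label
# instead of by FQDN, so inventory renames don't break scheduling.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
```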

copied from https://github.com/cynepco3hahue/kubevirt-ansible/issues/8

python2 yum module is needed when deploying OpenShift on Fedora 27

OpenShift needs Python 3 support, so ansible_python_interpreter=/usr/bin/python3 has to be added to the host variables. However, running the playbook hits the error below on Fedora 27:

TASK [Install openshift_facts requirements] ******************************************************************************************************
Saturday 03 March 2018  13:41:10 +0800 (0:00:12.931)       0:00:20.304 ******** 
failed: [dhcp-xxxx] (item=['python-yaml', 'python-ipaddress', 'wget', 'git', 'net-tools', 'bind-utils', 'iptables-services', 'bridge-utils', 'bash-completion', 'kexec-tools', 'sos', 'psacct', 'docker']) => {
    "changed": false,
    "item": [
        "python-yaml",
        "python-ipaddress",
        "wget",
        "git",
        "net-tools",
        "bind-utils",
        "iptables-services",
        "bridge-utils",
        "bash-completion",
        "kexec-tools",
        "sos",
        "psacct",
        "docker"
    ]
}

MSG:

python2 yum module is needed for this  module

Using package instead of yum fixes the problem:

# git diff
diff --git a/playbooks/cluster/openshift/config.yml b/playbooks/cluster/openshift/config.yml
index 49c04f0..5fd78f9 100644
--- a/playbooks/cluster/openshift/config.yml
+++ b/playbooks/cluster/openshift/config.yml
@@ -33,9 +33,10 @@
       yum:
         name: "{{ epel_release_rpm_url }}"
         state: present
+      when: ansible_distribution in ["CentOS","RedHat"]
 
     - name: Install openshift_facts requirements
-      yum:
+      package:
         name: "{{ item }}"
       with_items:
         - python-yaml

Could not find openshift-ansible playbooks

Following the README to deploy OpenShift produced the error below: the playbook looks for openshift-ansible under the directory kubevirt-ansible/playbooks/cluster/openshift, but openshift-ansible was cloned into kubevirt-ansible.

# ansible-playbook -i inventory \
>     -e "openshift_ansible_dir=openshift-ansible/ \
>     openshift_playbook_path=playbooks/byo/config.yml \
>     openshift_ver=3.7" playbooks/cluster/openshift/config.yml
 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

ERROR! Unable to retrieve file contents
Could not find or access '/root/kubevirt-ansible/playbooks/cluster/openshift/openshift-ansible/playbooks/prerequisites.yml'

deploy-kubernetes: cannot deploy on CentOS 7.4

The Kubernetes repository doesn't provide the required docker package for CentOS 7.4:

https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

TASK [/home/lbednar/work/kubevirt-org/kubevirt-ansible/kubernetes/roles/prerequisites : install all kubernetes packages] **********************************************************************
failed: [vm-69-21.qa.lab.tlv.redhat.com] (item=[u'jq', u'sshpass', u'bind-utils', u'net-tools', u'docker', u'kubeadm', u'kubelet', u'kubectl', u'kubernetes-cni']) => {"changed": false, "failed": true, "item": ["jq", "sshpass", "bind-utils", "net-tools", "docker", "kubeadm", "kubelet", "kubectl", "kubernetes-cni"], "msg": "No package matching 'docker' found available, installed or updated", "rc": 126, "results": ["No package matching 'docker' found available, installed or updated"]}
failed: [vm-69-15.qa.lab.tlv.redhat.com] (item=[u'jq', u'sshpass', u'bind-utils', u'net-tools', u'docker', u'kubeadm', u'kubelet', u'kubectl', u'kubernetes-cni']) => {"changed": false, "failed": true, "item": ["jq", "sshpass", "bind-utils", "net-tools", "docker", "kubeadm", "kubelet", "kubectl", "kubernetes-cni"], "msg": "No package matching 'docker' found available, installed or updated", "rc": 126, "results": ["No package matching 'docker' found available, installed or updated"]}
failed: [vm-69-1.qa.lab.tlv.redhat.com] (item=[u'jq', u'sshpass', u'bind-utils', u'net-tools', u'docker', u'kubeadm', u'kubelet', u'kubectl', u'kubernetes-cni']) => {"changed": false, "failed": true, "item": ["jq", "sshpass", "bind-utils", "net-tools", "docker", "kubeadm", "kubelet", "kubectl", "kubernetes-cni"], "msg": "No package matching 'docker' found available, installed or updated", "rc": 126, "results": ["No package matching 'docker' found available, installed or updated"]}

CI tests should ignore documentation changes

CI should ignore commits containing [ci skip] or [skip ci] in their title or description. These markers can be used for documentation and minor changes which don't require a CI check.
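The check could be as simple as a grep over the commit message (a sketch; how the hook is wired in and where the message comes from depend on the CI system):

```shell
#!/bin/sh
# Sketch: decide whether to skip CI based on a commit message.
# In a real hook the message would come from `git log -1 --pretty=%B`.
should_skip_ci() {
    printf '%s' "$1" | grep -qE '\[(ci skip|skip ci)\]'
}

if should_skip_ci "docs: fix typo [skip ci]"; then
    echo "skipping CI"
else
    echo "running CI"
fi
```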

We have 2 variables dirs in the root of the repo

We have:

  • vars
  • variables

I think that we should have only one of them, and I prefer the name vars since it's part of the ansible jargon.

In addition, we have all.yaml and global_vars.yml; do you think that we should keep both of them?
I think that global_vars.yml should contain miscellaneous vars, while all.yaml should contain vars related to KubeVirt's deployment and should be renamed to something more meaningful.

Fix project name in STDCI

The STDCI functionality for this project is associated with the older project name; since the project was moved, the configuration needs to be updated for CI to work.

Improve user experience of install-kubevirt-release playbook

Right now the playbook runs inside of mock as part of the lago environment; I would like it to also be able to run outside of mock, under a regular user.

  • Extend readme with guide how to install KubeVirt release on existing cluster, step by step
  • Playbook itself shouldn't require root user to execute playbook
  • Add minimal inventory file with necessary parameters to install KubeVirt release on existing cluster
  • openshift/roles/kubevirt role requires kubeconfig file, and in addition there are credentials required - I think it should be converged
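A minimal inventory along the lines of the third bullet might look like this (host names and group layout are illustrative assumptions, not the repository's actual format):

```ini
# Sketch: minimal inventory for installing a KubeVirt release
# on an existing cluster. Replace the example hosts with real ones.
[masters]
master.example.com

[nodes]
node1.example.com
node2.example.com
```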

Failure to log in to the server on playbooks/kubevirt.yml

playbooks/kubevirt.yml returns login failure.


>> ansible-playbook -i localhost playbooks/kubevirt.yml
<...>
TASK [kubevirt : Create kube-system namespace] *********************************************************
Wednesday 28 February 2018  10:09:05 +0100 (0:00:00.345)       0:00:00.453 **** 
 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}.
Found: ns.stdout != "{{ namespace }}"

fatal: [localhost]: FAILED! => {
    "changed": true,
    "cmd": "kubectl create namespace kube-system",
    "delta": "0:00:00.097699",
    "end": "2018-02-28 10:09:05.327740",
    "rc": 1,
    "start": "2018-02-28 10:09:05.230041"
}

STDERR:

error: You must be logged in to the server (Unauthorized)

Use std-ci yum repos mirrors

  • It will protect the job from failure when the CentOS repos are not available.
  • It will accelerate the job (the mirrors and the Jenkins slaves are in the same data center).

Dynamic inventory file for supporting conditional CNS deployment

As part of adding a CNS storage flavor, I would like to add some things to the inventory file to conditionally deploy CNS. If cluster == openshift and storage_role == storage_cns, I'd want to make the following changes to the inventory. What is the best way to achieve this?

[OSEv3:children]
...
glusterfs
[OSEv3:vars]
...
# Namespace for CNS pods (will be created)
openshift_storage_glusterfs_namespace=app-storage
# Automatically create a StorageClass referencing this CNS cluster
openshift_storage_glusterfs_storageclass=true
# glusterblock functionality is not supported outside of Logging/Metrics
openshift_storage_glusterfs_block_deploy=false
# Disable any other default StorageClass
openshift_storageclass_default=false
[glusterfs]
<master> glusterfs_devices='[ "/dev/vdd" ]'
<node0> glusterfs_devices='[ "/dev/vdd" ]'
<node1> glusterfs_devices='[ "/dev/vdd" ]'
...
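One possible approach, sketched under the assumption that cluster and storage_role are defined as playbook-level variables (the role name below is hypothetical): keep the glusterfs hosts in their own group and gate the play on the two variables rather than templating the inventory itself:

```yaml
# Sketch: apply CNS configuration only when both conditions hold.
- hosts: glusterfs
  tasks:
    - name: Include CNS storage configuration
      include_role:
        name: storage-glusterfs   # hypothetical role name
      when: cluster == "openshift" and storage_role == "storage_cns"
```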

Reorganization proposal

Hello,
I would like to help reorganize the playbooks and roles in this project, but before I commit work it would be great to come to a consensus on what it should look like. Here are my suggestions:

  • Create top-level roles, playbooks and inventory directories. Move associated files to those locations.
  • Modify ansible.cfg to support path changes
  • Remove paths to roles in playbooks
  • Optionally, support compilation of the kubevirt project (not a requirement).
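The second bullet could be sketched as an ansible.cfg fragment (paths are illustrative and assume the proposed top-level layout):

```ini
# Sketch: point Ansible at the proposed top-level directories,
# so playbooks no longer need to hard-code role paths.
[defaults]
roles_path = roles
inventory = inventory
```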
