ansibleplaybookbundle / kubevirt-apb
APB for managing KubeVirt deployments
License: Apache License 2.0
Hi there,
So far the latest kubevirt-apb version is v0.4.1-alpha.2. I tried to provision it with the glusterfs storage option checked on OCP v3.10.0-0.47.0 + OpenStack, but no kubevirt storageclass was generated; it seems the storage part was never run. If I deploy kubevirt.yml with ansible-playbook directly, this problem does not happen: http://pastebin.test.redhat.com/598551
[root@cnv-executor-qwang-apb-master1 ~]# oc get all
NAME READY STATUS RESTARTS AGE
pod/virt-api-59f4bdd6-5z8gg 1/1 Running 0 45s
pod/virt-api-59f4bdd6-cvc8b 1/1 Running 1 45s
pod/virt-controller-6756fcdcc9-2nc5s 1/1 Running 0 45s
pod/virt-controller-6756fcdcc9-54n6v 0/1 Running 0 45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/virt-api ClusterIP 172.30.122.46 <none> 443/TCP 45s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.extensions/virt-handler 0 0 0 0 0 <none> 45s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/virt-api 2 2 2 2 45s
deployment.extensions/virt-controller 2 2 2 1 45s
NAME DESIRED CURRENT READY AGE
replicaset.extensions/virt-api-59f4bdd6 2 2 2 45s
replicaset.extensions/virt-controller-6756fcdcc9 2 2 1 45s
[root@cnv-executor-qwang-apb-master1 ~]# oc get storageclass
NAME PROVISIONER AGE
glusterfs-storage (default) kubernetes.io/glusterfs 2d
[root@cnv-executor-qwang-apb-master1 ~]# oc get all -n dh-virtualization-prov-s5t4s
NAME READY STATUS RESTARTS AGE
pod/apb-e86b7acc-7300-4783-8c93-dcba82c906ed 0/1 Completed 0 1m
[root@cnv-executor-qwang-apb-master1 ~]# oc logs pod/apb-e86b7acc-7300-4783-8c93-dcba82c906ed -n dh-virtualization-prov-s5t4s
PLAY [Provision KubeVirt] ******************************************************
TASK [ansible.kubernetes-modules : Install latest openshift client] ************
skipping: [localhost]
TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]
PLAY [localhost] ***************************************************************
TASK [kubevirt : include_tasks] ************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/tasks/provision.yml for localhost
TASK [kubevirt : Login As Super User] ******************************************
changed: [localhost]
TASK [kubevirt : Check if kubevirt-apb-2 exists] *******************************
changed: [localhost]
TASK [kubevirt : Create kubevirt-apb-2 namespace] ******************************
skipping: [localhost]
TASK [kubevirt : Add Privileged Policy] ****************************************
changed: [localhost] => (item=kubevirt-privileged)
changed: [localhost] => (item=kubevirt-controller)
changed: [localhost] => (item=kubevirt-infra)
TASK [kubevirt : Add Hostmount-anyuid Policy] **********************************
changed: [localhost]
TASK [kubevirt : Check for kubevirt.yaml template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]
TASK [kubevirt : Check for offline v0.4.1-alpha.2 templates in /opt/apb/kubevirt-templates] ***
ok: [localhost]
TASK [kubevirt : Download KubeVirt Template] ***********************************
skipping: [localhost]
TASK [kubevirt : Copy offline templates to /tmp] *******************************
changed: [localhost]
TASK [kubevirt : Render KubeVirt Yaml] *****************************************
changed: [localhost]
TASK [kubevirt : Render BYO template] ******************************************
skipping: [localhost]
TASK [kubevirt : Create KubeVirt Resources] ************************************
changed: [localhost]
TASK [kubevirt : Check for vm templates in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost] => (item=vm-template-fedora)
ok: [localhost] => (item=vm-template-windows2012r2)
ok: [localhost] => (item=vm-template-rhel7)
TASK [kubevirt : Copy VM templates to /tmp] ************************************
TASK [kubevirt : Check for vm templates in /opt/apb/kubevirt-templates] ********
ok: [localhost] => (item=vm-template-fedora)
ok: [localhost] => (item=vm-template-windows2012r2)
ok: [localhost] => (item=vm-template-rhel7)
TASK [kubevirt : Copy VM templates to /tmp] ************************************
[WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: {{ offline_vm_templates.results |
selectattr('stat.exists') | map(attribute='item') | list | length > 0 }}
changed: [localhost] => (item=vm-template-fedora)
changed: [localhost] => (item=vm-template-windows2012r2)
changed: [localhost] => (item=vm-template-rhel7)
TASK [kubevirt : Download KubeVirt default VM templates] ***********************
[WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: cluster == "openshift" and "{{
byo_vm_templates.results | selectattr('stat.exists') | map(attribute='item') |
list | length == 0 }}" and "{{ offline_vm_templates.results |
selectattr('stat.exists') | map(attribute='item') | list | length == 0 }}"
ok: [localhost] => (item=vm-template-fedora)
ok: [localhost] => (item=vm-template-windows2012r2)
ok: [localhost] => (item=vm-template-rhel7)
TASK [kubevirt : Create default VM templates in OpenShift Namespace] ***********
changed: [localhost] => (item=vm-template-fedora)
changed: [localhost] => (item=vm-template-windows2012r2)
changed: [localhost] => (item=vm-template-rhel7)
[WARNING]: Could not match supplied host pattern, ignoring: masters
PLAY [masters[0]] **************************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: nodes
PLAY [masters nodes] ***********************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
localhost : ok=15 changed=9 unreachable=0 failed=0
@rthallisey Could you take a look at it? Thanks.
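The [WARNING] lines about jinja2 templating delimiters in the log above come from `when` conditions wrapped in {{ }}; `when` is already evaluated as a raw Jinja2 expression, so the braces should be dropped. A minimal sketch of the corrected condition (the copy arguments and the vm_template_names variable are hypothetical, abbreviated from the log):

```yaml
# sketch: `when` is evaluated as a Jinja2 expression, so no {{ }} delimiters
- name: Copy VM templates to /tmp
  copy:
    src: "/opt/apb/kubevirt-templates/{{ item }}.yaml"   # hypothetical path
    dest: /tmp/
  with_items: "{{ vm_template_names }}"                  # hypothetical variable
  when: >-
    offline_vm_templates.results
    | selectattr('stat.exists')
    | map(attribute='item')
    | list | length > 0
```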
Add a nested_virt param to the kubevirt-apb. This is going to require running:
echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
rmmod kvm-intel && modprobe kvm-intel || true
cat /sys/module/kvm_intel/parameters/nested
In order for this to work, the container needs to run as the root user and needs to mount in /lib/modules.
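One possible shape for such a nested_virt option, as a hedged Ansible sketch of the commands above (the nested_virt parameter and task names are hypothetical, and the tasks assume a privileged, root container with /lib/modules mounted):

```yaml
# sketch: enable nested KVM on Intel hosts when a nested_virt param is set
- name: Persist nested KVM module option
  copy:
    content: "options kvm-intel nested=1\n"
    dest: /etc/modprobe.d/kvm-intel.conf
  when: nested_virt | default(false) | bool   # hypothetical APB parameter

- name: Reload kvm-intel with nesting enabled
  shell: rmmod kvm-intel && modprobe kvm-intel || true
  when: nested_virt | default(false) | bool

- name: Verify nested virtualization is enabled
  command: cat /sys/module/kvm_intel/parameters/nested
  register: nested_state
  changed_when: false
  when: nested_virt | default(false) | bool
```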
Hi there,
ASB and the service catalog worked as expected. I then chose the ephemeral storage plan to deploy KubeVirt, and the deployment failed on the "Allow ceph OSD traffic" task with this error: Failed to find required executable iptables in paths: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin. Deploying storage-demo using kubevirt-ansible from the CLI does not hit this problem, and iptables is configured correctly on the hosts. See the kubevirt-ansible log: http://pastebin.test.redhat.com/581183
Here is the ansible log when deploying from the web console:
[root@host-172-16-120-33 ~]# oc project rh-virtualization-prov-bfrfw
Now using project "rh-virtualization-prov-bfrfw" on server "https://172.16.120.33:8443".
[root@host-172-16-120-33 ~]# oc get all
NAME READY STATUS RESTARTS AGE
po/apb-ea08c82d-4c33-4595-b68c-4f06a3203083 0/1 Error 0 8m
[root@host-172-16-120-33 ~]# oc logs po/apb-ea08c82d-4c33-4595-b68c-4f06a3203083
+ [[ provision --extra-vars {"_apb_plan_id":"storage-demo","_apb_service_class_id":"60c8357b2a1cb091488d9c5586c4eb4b","_apb_service_instance_id":"49510c9c-c850-4f7c-b52f-32731422337a","admin_password":"redhat","admin_user":"qwang","cluster":"openshift","namespace":"qwang-storage-demo","storage_role":"storage-demo","version":"0.4.1-alpha.2"} == *\s\2\i\/\a\s\s\e\m\b\l\e* ]]
+ ACTION=provision
+ shift
+ apb_action_path=kubevirt-ansible/playbooks/kubevirt.yml
+ playbooks=/etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml
+ CREDS=/var/tmp/bind-creds
+ TEST_RESULT=/var/tmp/test-result
+ whoami
+ '[' -w /etc/passwd ']'
++ id -u
+ echo 'apb:x:1000140000:0:apb user:/opt/apb:/sbin/nologin'
+ set +x
+ [[ -e /etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml ]]
+ [[ ! -d /etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml ]]
+ ANSIBLE_ROLES_PATH=/etc/ansible/roles:/opt/ansible/roles
+ ansible-playbook /etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml -e action=provision --extra-vars '{"_apb_plan_id":"storage-demo","_apb_service_class_id":"60c8357b2a1cb091488d9c5586c4eb4b","_apb_service_instance_id":"49510c9c-c850-4f7c-b52f-32731422337a","admin_password":"redhat","admin_user":"qwang","cluster":"openshift","namespace":"qwang-storage-demo","storage_role":"storage-demo","version":"0.4.1-alpha.2"}'
[WARNING]: Found variable using reserved name: action
PLAY [localhost] ***************************************************************
TASK [kubevirt : include_tasks] ************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/tasks/provision.yml for localhost
TASK [kubevirt : Login As Super User] ******************************************
changed: [localhost]
TASK [kubevirt : Check if qwang-storage-demo exists] ***************************
changed: [localhost]
TASK [kubevirt : Create qwang-storage-demo namespace] **************************
skipping: [localhost]
TASK [kubevirt : Add Privileged Policy] ****************************************
changed: [localhost] => (item=kubevirt-privileged)
changed: [localhost] => (item=kubevirt-controller)
changed: [localhost] => (item=kubevirt-infra)
TASK [kubevirt : Add Hostmount-anyuid Policy] **********************************
changed: [localhost]
TASK [kubevirt : Check for kubevirt.yml template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]
TASK [kubevirt : Download KubeVirt Template] ***********************************
changed: [localhost]
TASK [kubevirt : Render KubeVirt Yml] ******************************************
changed: [localhost]
TASK [kubevirt : Render BYO template] ******************************************
skipping: [localhost]
TASK [kubevirt : Create KubeVirt Resources] ************************************
changed: [localhost]
TASK [kubevirt : Download KubeVirt source] *************************************
changed: [localhost]
TASK [kubevirt : Extract /tmp/kubevirt.tar.gz into /tmp/kubevirt] **************
changed: [localhost]
TASK [kubevirt : Create default VM templates in OpenShift Namespace] ***********
changed: [localhost] => (item=vm-template-fedora)
changed: [localhost] => (item=vm-template-windows2012r2)
changed: [localhost] => (item=vm-template-rhel7)
PLAY [masters[0]] **************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [storage-demo : include_tasks] ********************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/storage-demo/tasks/provision.yml for localhost
TASK [storage-demo : Login As Super User] **************************************
changed: [localhost]
TASK [storage-demo : Check if namespace qwang-storage-demo exists] *************
changed: [localhost]
TASK [storage-demo : Create qwang-storage-demo namespace] **********************
skipping: [localhost]
TASK [storage-demo : Check for storage-demo serviceaccount] ********************
changed: [localhost]
TASK [storage-demo : Create storage-demo serviceaccount] ***********************
changed: [localhost]
TASK [storage-demo : Grant privileged access to storage-demo serviceaccount] ***
changed: [localhost]
TASK [storage-demo : Select a target node] *************************************
changed: [localhost]
TASK [storage-demo : Set the target node] **************************************
ok: [localhost]
TASK [storage-demo : Render storage-demo deployment yaml] **********************
changed: [localhost]
TASK [storage-demo : Create storage-demo Resources] ****************************
changed: [localhost]
TASK [cdi : include_tasks] *****************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/cdi/tasks/provision.yml for localhost
TASK [cdi : Determine Environment] *********************************************
changed: [localhost]
TASK [cdi : Check if namespace golden-images exists] ***************************
changed: [localhost]
TASK [cdi : Create golden-images namespace using kubectl] **********************
skipping: [localhost]
TASK [cdi : Create golden-images namespace using oc] ***************************
changed: [localhost]
TASK [cdi : Check if RBAC exists for CDI] **************************************
changed: [localhost]
TASK [cdi : Create RBAC for CDI] ***********************************************
changed: [localhost]
TASK [cdi : Render golden-images ResourceQuota deployment yaml] ****************
changed: [localhost]
TASK [cdi : Create golden-images ResourceQuota] ********************************
changed: [localhost]
TASK [cdi : Render CDI deployment yaml] ****************************************
changed: [localhost]
TASK [cdi : Create CDI deployment] *********************************************
changed: [localhost]
PLAY [masters nodes] ***********************************************************
[WARNING]: Could not match supplied host pattern, ignoring: nodes
TASK [storage-demo-nodeconfig : include_tasks] *********************************
included: /etc/ansible/roles/kubevirt-ansible/roles/storage-demo-nodeconfig/tasks/provision.yml for localhost
TASK [storage-demo-nodeconfig : Allow ceph OSD traffic] ************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to find required executable iptables in paths: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"}
to retry, use: --limit @/etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.retry
PLAY RECAP *********************************************************************
localhost : ok=34 changed=27 unreachable=0 failed=1
+ EXIT_CODE=2
+ set +ex
+ '[' -f /var/tmp/test-result ']'
+ exit 2
[root@host-172-16-120-33 ~]# oc get all
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds/virt-handler 3 3 2 3 2 <none> 16m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/storage-demo 1 1 1 1 16m
deploy/virt-api 2 2 2 2 16m
deploy/virt-controller 2 2 2 1 16m
NAME DESIRED CURRENT READY AGE
rs/storage-demo-56cf75c588 1 1 1 16m
rs/virt-api-56c966985d 2 2 2 16m
rs/virt-controller-7559bf844b 2 2 1 16m
NAME READY STATUS RESTARTS AGE
po/storage-demo-56cf75c588-4dp5k 7/7 Running 1 16m
po/virt-api-56c966985d-w4d54 1/1 Running 0 16m
po/virt-api-56c966985d-xm46r 1/1 Running 0 16m
po/virt-controller-7559bf844b-rtc29 0/1 Running 0 16m
po/virt-controller-7559bf844b-wwtpd 1/1 Running 0 16m
po/virt-handler-nk747 0/1 Pending 0 1s
po/virt-handler-s46dc 1/1 Running 0 16m
po/virt-handler-s7h8z 1/1 Running 0 16m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/virt-api ClusterIP 172.30.18.103 <none> 443/TCP 16m
[root@host-172-16-120-33 ~]# oc describe serviceinstance
Name: rh-virtualization-nfj55
Namespace: qwang-storage-demo
Labels: <none>
Annotations: <none>
API Version: servicecatalog.k8s.io/v1beta1
Kind: ServiceInstance
Metadata:
Creation Timestamp: 2018-04-23T16:55:11Z
Finalizers:
kubernetes-incubator/service-catalog
Generate Name: rh-virtualization-
Generation: 1
Resource Version: 58651
Self Link: /apis/servicecatalog.k8s.io/v1beta1/namespaces/qwang-storage-demo/serviceinstances/rh-virtualization-nfj55
UID: 16582cdc-4717-11e8-b609-0a580a820005
Spec:
Cluster Service Class External Name: rh-virtualization
Cluster Service Class Ref:
Name: 60c8357b2a1cb091488d9c5586c4eb4b
Cluster Service Plan External Name: storage-demo
Cluster Service Plan Ref:
Name: 546cf93c2d7615ef26ad81d1e369be9b
External ID: 49510c9c-c850-4f7c-b52f-32731422337a
Parameters From:
Secret Key Ref:
Key: parameters
Name: rh-virtualization-parametersu4zms
Update Requests: 0
User Info:
Extra:
Scopes . Authorization . Openshift . Io:
user:full
Groups:
system:authenticated:oauth
system:authenticated
UID:
Username: qwang
Status:
Async Op In Progress: false
Conditions:
Last Transition Time: 2018-04-23T16:55:12Z
Message: Provision call failed: Error occurred during provision. Please contact administrator if it persists.
Reason: ProvisionCallFailed
Status: False
Type: Ready
Last Transition Time: 2018-04-23T16:57:18Z
Message: Provision call failed: Error occurred during provision. Please contact administrator if it persists.
Reason: ProvisionCallFailed
Status: True
Type: Failed
Deprovision Status: Required
Orphan Mitigation In Progress: false
Reconciled Generation: 1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrorWithParameters 17m (x4 over 17m) service-catalog-controller-manager failed to prepare parameters nil: secrets "rh-virtualization-parametersu4zms" not found
Normal Provisioning 17m service-catalog-controller-manager The instance is being provisioned asynchronously
Warning ProvisionCallFailed 15m (x2 over 15m) service-catalog-controller-manager Provision call failed: Error occurred during provision. Please contact administrator if it persists.
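The iptables failure above is consistent with the host-pattern warnings earlier in the log: since the masters and nodes patterns match no hosts, the storage-demo-nodeconfig play falls back to localhost, i.e. the APB container, where iptables is not installed. A hedged sketch of a guard for the failing task (the iptables_check variable name is hypothetical):

```yaml
# sketch: skip the firewall rule when iptables is unavailable in the
# execution environment (e.g. the APB container) instead of failing hard
- name: Check for iptables
  command: which iptables
  register: iptables_check        # hypothetical variable name
  failed_when: false
  changed_when: false

- name: Allow ceph OSD traffic
  iptables:
    table: filter
    chain: INPUT
    protocol: tcp
    destination_port: "6789"
    jump: ACCEPT
  when: iptables_check.rc == 0
```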
Create the service instance:
[root@cnv-executor-shiywang-master1 ~]# cat kubevirt-apb.yml
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
name: kubevirt
namespace: kube-system
spec:
clusterServiceClassExternalName: dh-virtualization
clusterServicePlanExternalName: default
parameters:
admin_user: test_admin
admin_password: 123456
version: 0.7.0
[root@cnv-executor-shiywang-master1 ~]# oc create -f kubevirt-apb.yml
See the execution of the APB provisioning:
[root@cnv-executor-shiywang-master1 ~]# oc get pods --all-namespaces | grep prov
dh-virtualization-prov-nrh8r apb-4764af0f-b886-4a14-ab79-f59b337edf21 0/1 Error 0 25m
Check the logs
[root@cnv-executor-shiywang-master1 ~]# oc logs -n dh-virtualization-prov-nrh8r apb-4764af0f-b886-4a14-ab79-f59b337edf21
PLAY [Provision KubeVirt] ******************************************************
TASK [ansible.kubernetes-modules : Install latest openshift client] ************
skipping: [localhost]
TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]
PLAY [masters[0]] **************************************************************
TASK [kubevirt : include_tasks] ************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/tasks/provision.yml for localhost
TASK [kubevirt : Login As Super User] ******************************************
changed: [localhost]
TASK [kubevirt : Check if kube-system exists] **********************************
changed: [localhost]
TASK [kubevirt : Create kube-system namespace] *********************************
skipping: [localhost]
TASK [kubevirt : Add Privileged Policy] ****************************************
changed: [localhost] => (item=kubevirt-privileged)
changed: [localhost] => (item=kubevirt-controller)
changed: [localhost] => (item=kubevirt-infra)
changed: [localhost] => (item=kubevirt-apiserver)
TASK [kubevirt : Add Hostmount-anyuid Policy] **********************************
changed: [localhost]
TASK [kubevirt : Check for kubevirt.yaml.j2 template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]
TASK [kubevirt : Check for kubevirt.yaml.j2 version v0.7.0 in /opt/apb/kubevirt-templates] ***
ok: [localhost]
TASK [kubevirt : Download KubeVirt Template] ***********************************
skipping: [localhost]
TASK [kubevirt : Render offline template] **************************************
changed: [localhost]
TASK [kubevirt : Render KubeVirt Yaml] *****************************************
skipping: [localhost]
TASK [kubevirt : Create KubeVirt Resources] ************************************
changed: [localhost]
TASK [kubevirt : Check for demo-content.yaml template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]
TASK [kubevirt : Check for demo-content.yaml version v0.7.0 in /opt/apb/kubevirt-templates] ***
ok: [localhost]
TASK [kubevirt : Download Demo Content] ****************************************
skipping: [localhost]
TASK [kubevirt : Copy Offline Demo Content to /tmp] ****************************
changed: [localhost]
TASK [kubevirt : Copy BYO Demo Content to /tmp] ********************************
skipping: [localhost]
TASK [kubevirt : Create Demo Content] ******************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "apply", "-f", "/tmp/demo-content.yaml"], "delta": "0:00:01.310499", "end": "2018-08-02 07:53:36.559362", "msg": "non-zero return code", "rc": 1, "start": "2018-08-02 07:53:35.248863", "stderr": "error: error validating \"/tmp/demo-content.yaml\": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false", "stderr_lines": ["error: error validating \"/tmp/demo-content.yaml\": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false"], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=12 changed=7 unreachable=0 failed=1
Log into the failing container:
[root@cnv-executor-shiywang-master1 ~]# oc debug -n dh-virtualization-prov-nrh8r apb-4764af0f-b886-4a14-ab79-f59b337edf21
Defaulting container name to apb.
Use 'oc describe pod/apb-4764af0f-b886-4a14-ab79-f59b337edf21-debug -n dh-virtualization-prov-nrh8r' to see all of the containers in this pod.
Debugging with pod/apb-4764af0f-b886-4a14-ab79-f59b337edf21-debug, original command: entrypoint.sh provision --extra-vars {"_apb_last_requesting_user":"test_admin","_apb_plan_id":"default","_apb_service_class_id":"fd9b21a9caa8bf8b42b27bb0c90d3b74","_apb_service_instance_id":"e865f6f7-9628-11e8-ad30-0a580a800009","admin_password":123456,"admin_user":"test_admin","cluster":"openshift","namespace":"kube-system","version":"0.7.0"}
Waiting for pod to start ...
Pod IP: 10.130.0.42
If you don't see a command prompt, try pressing enter.
sh-4.2$ ls
actions ansible.cfg etc hosts kubevirt-templates
sh-4.2$ cat kubevirt-templates/
sprint5/ v0.0.1-alpha.3/ v0.0.2/ v0.1.0-alpha/ v0.3.0-alpha.1/ v0.4.0/ v0.4.1/ v0.5.0-alpha.0/ v0.5.1-alpha.3/ v0.6.2/ v0.7.0-alpha.2/
v0.0.1-alpha.0/ v0.0.1-alpha.4/ v0.0.3/ v0.2.0/ v0.3.0-alpha.2/ v0.4.0-alpha.0/ v0.4.1-alpha.1/ v0.5.0-alpha.1/ v0.6.0/ v0.7.0/ v0.7.0-alpha.3/
v0.0.1-alpha.1/ v0.0.1-alpha.5/ v0.0.4/ v0.3.0/ v0.3.0-alpha.3/ v0.4.0-alpha.1/ v0.4.1-alpha.2/ v0.5.1-alpha.1/ v0.6.1/ v0.7.0-alpha.0/ v0.7.0-alpha.4/
v0.0.1-alpha.2/ v0.0.1-alpha.6/ v0.1.0/ v0.3.0-alpha.0/ v0.3.0-alpha.4/ v0.4.0-alpha.2/ v0.5.0/ v0.5.1-alpha.2/ v0.6.1-alpha.0/ v0.7.0-alpha.1/ v0.7.0-alpha.5/
sh-4.2$ cat kubevirt-templates/v0.7.0/
demo-content.yaml kubevirt.yaml.j2 vm-template-rhel7.yaml
kubevirt.yaml vm-template-fedora.yaml vm-template-windows2012r2.yaml
sh-4.2$ cat kubevirt-templates/v0.7.0/demo-content.yaml
<html><body>You are being <a href="https://github-production-release-asset-2e65be.s3.amazonaws.com/76686583/7fdd6790-7fb4-11e8-9f22-24c2e2cb1329?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180730%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180730T135820Z&X-Amz-Expires=300&X-Amz-Signature=3a5c58ed35f91c86102b57db3e4b8400b119a7785ea564505e2d6940749baf94&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Ddemo-content.yaml&response-content-type=application%2Foctet-stream">redirected</a>.</body></html>sh-4.2$
sh-4.2$ exit
Removing debug pod ...
The kubevirt-templates/v0.7.0/demo-content.yaml contains invalid content, which causes a failure during deployment.
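The file body above is an S3 redirect page rather than YAML, which suggests the template fetch saved the redirect response instead of following it to the release asset. A hedged sketch of a download step that follows redirects and fails early on non-YAML content (the URL and version variable are illustrative, not the APB's actual code):

```yaml
# sketch: fetch a release asset (get_url follows HTTP redirects by default),
# then sanity-check that we did not save an HTML redirect/error page
- name: Download demo content
  get_url:
    url: "https://github.com/kubevirt/kubevirt/releases/download/{{ version }}/demo-content.yaml"
    dest: /tmp/demo-content.yaml
    force: yes

- name: Fail if the download is an HTML page instead of YAML
  shell: "! head -c 200 /tmp/demo-content.yaml | grep -qi '<html'"
  changed_when: false
```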
@rthallisey Per our discussion today, the catalog item name should be "Virtualization" for both upstream and downstream versions. The catalog description for upstream should be as follows:
KubeVirt enables the migration of existing virtualized workloads directly into the development workflows supported by Kubernetes.
This provides a path to more rapid application modernization by:
- Supporting development of new microservice applications in containers that interact with existing virtualized applications.
- Combining existing virtualized workloads with new container workloads on the same platform, thereby making it easier to decompose monolithic virtualized workloads into containers over time.
The catalog description for downstream should be as follows:
Container-native Virtualization enables the migration of existing virtualized workloads directly into the development workflows supported by OpenShift Container Platform.
This provides a path to more rapid application modernization by:
- Supporting development of new microservice applications in containers that interact with existing virtualized applications.
- Combining existing virtualized workloads with new container workloads on the same platform, thereby making it easier to decompose monolithic virtualized workloads into containers over time.
Virtual machines running in Container-native Virtualization continue to utilize the same tried and trusted RHEL hypervisor (KVM) as Red Hat Virtualization and Red Hat OpenStack Platform.
@serenamarie125 FYI
This repository contains static templates which are just the kubevirt.yaml itself broken down into separate files, one per kind. I am talking about the templates located under the following directory:
https://github.com/ansibleplaybookbundle/kubevirt-apb/tree/master/roles/kubevirt-apb/templates
In my opinion this APB is supposed to consume the kubevirt.yaml that is part of each KubeVirt release:
https://github.com/kubevirt/kubevirt/releases
Hi there,
I chose the ephemeral storage plan to deploy KubeVirt from the OCP web console, then ran into this error when the "Allow ceph OSD traffic" task executed:
iptables v1.4.21: can't initialize iptables table `filter': Permission denied (you must be root)
Here is the ansible log when deploying from the web console:
[root@host-172-16-120-120 ~]# oc project rh-virtualization-prov-w4n5h
Now using project "rh-virtualization-prov-w4n5h" on server "https://172.16.120.120:8443".
[root@host-172-16-120-120 ~]# oc get all
NAME READY STATUS RESTARTS AGE
po/apb-e1ebfcc3-4a01-4c0f-83c2-8080d880c127 0/1 Error 0 1m
[root@host-172-16-120-120 ~]# oc logs po/apb-e1ebfcc3-4a01-4c0f-83c2-8080d880c127
+ [[ provision --extra-vars {"_apb_plan_id":"storage-demo","_apb_service_class_id":"60c8357b2a1cb091488d9c5586c4eb4b","_apb_service_instance_id":"94ef5eab-0670-4e69-8702-7688af1c5b0d","admin_password":"redhat","admin_user":"qwang","cluster":"openshift","namespace":"qwang-storage-demo-1","storage_role":"storage-demo","version":"0.4.1-alpha.2"} == *\s\2\i\/\a\s\s\e\m\b\l\e* ]]
+ ACTION=provision
+ shift
+ apb_action_path=kubevirt-ansible/playbooks/kubevirt.yml
+ playbooks=/etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml
+ CREDS=/var/tmp/bind-creds
+ TEST_RESULT=/var/tmp/test-result
+ whoami
+ '[' -w /etc/passwd ']'
++ id -u
+ echo 'apb:x:1000180000:0:apb user:/opt/apb:/sbin/nologin'
+ set +x
+ [[ -e /etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml ]]
+ [[ ! -d /etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml ]]
+ ANSIBLE_ROLES_PATH=/etc/ansible/roles:/opt/ansible/roles
+ ansible-playbook /etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.yml -e action=provision --extra-vars '{"_apb_plan_id":"storage-demo","_apb_service_class_id":"60c8357b2a1cb091488d9c5586c4eb4b","_apb_service_instance_id":"94ef5eab-0670-4e69-8702-7688af1c5b0d","admin_password":"redhat","admin_user":"qwang","cluster":"openshift","namespace":"qwang-storage-demo-1","storage_role":"storage-demo","version":"0.4.1-alpha.2"}'
[WARNING]: Found variable using reserved name: action
PLAY [localhost] ***************************************************************
TASK [kubevirt : include_tasks] ************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/tasks/provision.yml for localhost
TASK [kubevirt : Login As Super User] ******************************************
changed: [localhost]
TASK [kubevirt : Check if qwang-storage-demo-1 exists] *************************
changed: [localhost]
TASK [kubevirt : Create qwang-storage-demo-1 namespace] ************************
skipping: [localhost]
TASK [kubevirt : Add Privileged Policy] ****************************************
changed: [localhost] => (item=kubevirt-privileged)
changed: [localhost] => (item=kubevirt-controller)
changed: [localhost] => (item=kubevirt-infra)
TASK [kubevirt : Add Hostmount-anyuid Policy] **********************************
changed: [localhost]
TASK [kubevirt : Check for kubevirt.yml template in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost]
TASK [kubevirt : Download KubeVirt Template] ***********************************
changed: [localhost]
TASK [kubevirt : Render KubeVirt Yml] ******************************************
changed: [localhost]
TASK [kubevirt : Render BYO template] ******************************************
skipping: [localhost]
TASK [kubevirt : Create KubeVirt Resources] ************************************
changed: [localhost]
TASK [kubevirt : Check for vm templates in /etc/ansible/roles/kubevirt-ansible/roles/kubevirt/templates] ***
ok: [localhost] => (item=vm-template-fedora)
ok: [localhost] => (item=vm-template-windows2012r2)
ok: [localhost] => (item=vm-template-rhel7)
TASK [kubevirt : Copy VM templates to /tmp] ************************************
TASK [kubevirt : Download KubeVirt default VM templates] ***********************
[WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: cluster == "openshift" and "{{
byo_vm_templates.results | selectattr('stat.exists') | map(attribute='item') |
list | length == 0 }}"
changed: [localhost] => (item=vm-template-fedora)
changed: [localhost] => (item=vm-template-windows2012r2)
changed: [localhost] => (item=vm-template-rhel7)
TASK [kubevirt : Create default VM templates in OpenShift Namespace] ***********
changed: [localhost] => (item=vm-template-fedora)
changed: [localhost] => (item=vm-template-windows2012r2)
changed: [localhost] => (item=vm-template-rhel7)
PLAY [masters[0]] **************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [storage-demo : include_tasks] ********************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/storage-demo/tasks/provision.yml for localhost
TASK [storage-demo : Login As Super User] **************************************
changed: [localhost]
TASK [storage-demo : Check if namespace qwang-storage-demo-1 exists] ***********
changed: [localhost]
TASK [storage-demo : Create qwang-storage-demo-1 namespace] ********************
skipping: [localhost]
TASK [storage-demo : Check for storage-demo serviceaccount] ********************
changed: [localhost]
TASK [storage-demo : Create storage-demo serviceaccount] ***********************
changed: [localhost]
TASK [storage-demo : Grant privileged access to storage-demo serviceaccount] ***
changed: [localhost]
TASK [storage-demo : Select a target node] *************************************
changed: [localhost]
TASK [storage-demo : Set the target node] **************************************
ok: [localhost]
TASK [storage-demo : Render storage-demo deployment yaml] **********************
changed: [localhost]
TASK [storage-demo : Create storage-demo Resources] ****************************
changed: [localhost]
TASK [cdi : include_tasks] *****************************************************
included: /etc/ansible/roles/kubevirt-ansible/roles/cdi/tasks/provision.yml for localhost
TASK [cdi : Determine Environment] *********************************************
changed: [localhost]
TASK [cdi : Check if namespace golden-images exists] ***************************
changed: [localhost]
TASK [cdi : Create golden-images namespace using kubectl] **********************
skipping: [localhost]
TASK [cdi : Create golden-images namespace using oc] ***************************
changed: [localhost]
TASK [cdi : Check if RBAC exists for CDI] **************************************
changed: [localhost]
TASK [cdi : Create RBAC for CDI] ***********************************************
changed: [localhost]
TASK [cdi : Render golden-images ResourceQuota deployment yaml] ****************
changed: [localhost]
TASK [cdi : Create golden-images ResourceQuota] ********************************
changed: [localhost]
TASK [cdi : Render CDI deployment yaml] ****************************************
changed: [localhost]
TASK [cdi : Create CDI deployment] *********************************************
changed: [localhost]
PLAY [masters nodes] ***********************************************************
[WARNING]: Could not match supplied host pattern, ignoring: nodes
TASK [storage-demo-nodeconfig : include_tasks] *********************************
included: /etc/ansible/roles/kubevirt-ansible/roles/storage-demo-nodeconfig/tasks/provision.yml for localhost
TASK [storage-demo-nodeconfig : Allow ceph OSD traffic] ************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/sbin/iptables -t filter -I INPUT -p tcp -j ACCEPT --destination-port 6789", "msg": "iptables v1.4.21: can't initialize iptables table `filter': Permission denied (you must be root)\nPerhaps iptables or your kernel needs to be upgraded.", "rc": 3, "stderr": "iptables v1.4.21: can't initialize iptables table `filter': Permission denied (you must be root)\nPerhaps iptables or your kernel needs to be upgraded.\n", "stderr_lines": ["iptables v1.4.21: can't initialize iptables table `filter': Permission denied (you must be root)", "Perhaps iptables or your kernel needs to be upgraded."], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/etc/ansible/roles/kubevirt-ansible/playbooks/kubevirt.retry
PLAY RECAP *********************************************************************
localhost : ok=34 changed=26 unreachable=0 failed=1
+ EXIT_CODE=2
+ set +ex
+ '[' -f /var/tmp/test-result ']'
+ exit 2
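The iptables failure above is a privilege problem: the task runs without root inside the APB environment, so it cannot touch the `filter` table. A minimal sketch of a fix, assuming the storage-demo-nodeconfig role can use Ansible privilege escalation (the task shape below is an assumption for illustration, not the actual role code):

```yaml
# Hypothetical rewrite of the failing task with privilege escalation.
# Inside the APB container this additionally requires the pod to run
# privileged; otherwise the firewall change must be skipped or delegated
# to a host where escalation is possible.
- name: Allow ceph OSD traffic
  become: true   # run iptables as root
  command: /usr/sbin/iptables -t filter -I INPUT -p tcp -j ACCEPT --destination-port 6789
```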
Install https://github.com/kubevirt/kubevirt/blob/release-0.6/cluster/examples/vmi-windows.yaml in the apb. This should be an addition to https://github.com/ansibleplaybookbundle/kubevirt-apb/blob/master/download-templates.sh upstream and added to the kubevirt rpm downstream.
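The addition to download-templates.sh could be sketched like this; the raw-URL conversion is the standard github.com → raw.githubusercontent.com rewrite, and the destination path is a placeholder, not the script's actual layout:

```shell
# Hypothetical addition to download-templates.sh: fetch the Windows VMI
# example alongside the existing templates.
BLOB_URL="https://github.com/kubevirt/kubevirt/blob/release-0.6/cluster/examples/vmi-windows.yaml"

# Convert the github.com blob URL to its raw.githubusercontent.com equivalent
RAW_URL=$(echo "$BLOB_URL" | sed -e 's#github.com#raw.githubusercontent.com#' -e 's#/blob/#/#')
echo "$RAW_URL"   # the fetchable raw URL

# Download into the bundle (destination path is illustrative):
# curl -sfL "$RAW_URL" -o roles/kubevirt/templates/vmi-windows.yaml
```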
ansibleplaybookbundle/apb-base#36
Changes:
/opt/apb/actions to /opt/apb/projects
/opt/apb/inventory/hosts
On a cluster provisioned today with kubevirt-apb, the apb reports provisioned successfully, but there are no KubeVirt pods, deployments, or CRDs in the cluster.
oc get serviceinstance -n kube-system -o yaml
apiVersion: v1
items:
- apiVersion: servicecatalog.k8s.io/v1beta1
  kind: ServiceInstance
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"servicecatalog.k8s.io/v1beta1","kind":"ServiceInstance","metadata":{"annotations":{},"name":"kubevirt","namespace":"kube-system"},"spec":{"clusterServiceClassExternalName":"dh-virtualization","clusterServicePlanExternalName":"default","parameters":{"admin_password":123456,"admin_user":"test_admin","version":"0.7.0-alpha.2"}}}
    creationTimestamp: 2018-06-29T09:18:40Z
    finalizers:
    - kubernetes-incubator/service-catalog
    generation: 1
    name: kubevirt
    namespace: kube-system
    resourceVersion: "4239"
    selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/kube-system/serviceinstances/kubevirt
    uid: 699373d2-7b7d-11e8-bd17-0a580a800003
  spec:
    clusterServiceClassExternalName: dh-virtualization
    clusterServiceClassRef:
      name: fd9b21a9caa8bf8b42b27bb0c90d3b74
    clusterServicePlanExternalName: default
    clusterServicePlanRef:
      name: e6304baf7ba0781fcf87068a11041b2c
    externalID: 6993736d-7b7d-11e8-bd17-0a580a800003
    parameters:
      admin_password: 123456
      admin_user: test_admin
      version: 0.7.0-alpha.2
    updateRequests: 0
    userInfo:
      extra:
        scopes.authorization.openshift.io:
        - user:full
      groups:
      - system:authenticated:oauth
      - system:authenticated
      uid: ""
      username: test_admin
  status:
    asyncOpInProgress: false
    conditions:
    - lastTransitionTime: 2018-06-29T09:20:29Z
      message: The instance was provisioned successfully
      reason: ProvisionedSuccessfully
      status: "True"
      type: Ready
    deprovisionStatus: Required
    externalProperties:
      clusterServicePlanExternalID: e6304baf7ba0781fcf87068a11041b2c
      clusterServicePlanExternalName: default
      parameterChecksum: b18a900d25a2b5c43603fe2ee5e528b8265b9c3658c4a8f6f6c8d5a50c878629
      parameters:
        admin_password: 123456
        admin_user: test_admin
        version: 0.7.0-alpha.2
      userInfo:
        extra:
          scopes.authorization.openshift.io:
          - user:full
        groups:
        - system:authenticated:oauth
        - system:authenticated
        uid: ""
        username: test_admin
    observedGeneration: 1
    orphanMitigationInProgress: false
    provisionStatus: Provisioned
    reconciledGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
cat kubevirt-apb.yml
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: kubevirt
  namespace: kube-system
spec:
  clusterServiceClassExternalName: dh-virtualization
  clusterServicePlanExternalName: default
  parameters:
    admin_user: test_admin
    admin_password: 123456
    version: 0.7.0-alpha.2
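One detail worth noting in this file: `admin_password: 123456` is unquoted, so YAML parses it as an integer, which is also how it was recorded in the ServiceInstance parameters. If the APB expects a string, quoting it is safer; a hedged tweak (whether this affects the provision outcome is an assumption, not confirmed):

```yaml
# Quote the password so it is parsed as a string rather than an integer.
parameters:
  admin_user: test_admin
  admin_password: "123456"
  version: 0.7.0-alpha.2
```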
oc get pods -n kube-system
NAME READY STATUS RESTARTS AGE
master-api-cnv-executor-vatsal-master1.example.com 1/1 Running 0 34m
master-controllers-cnv-executor-vatsal-master1.example.com 1/1 Running 0 34m
master-etcd-cnv-executor-vatsal-master1.example.com 1/1 Running 0 33m
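A quick way to confirm nothing was actually deployed is to grep for KubeVirt components across all namespaces. A sketch, demonstrated against the captured pod list above (on a real cluster you would feed it `oc get pods --all-namespaces` instead):

```shell
# Look for the KubeVirt control-plane components (virt-api, virt-controller,
# virt-handler). The sample input below is the pod list captured in this issue.
PODS='master-api-cnv-executor-vatsal-master1.example.com
master-controllers-cnv-executor-vatsal-master1.example.com
master-etcd-cnv-executor-vatsal-master1.example.com'

MATCHES=$(echo "$PODS" | grep -cE 'virt-(api|controller|handler)' || true)
echo "$MATCHES"   # 0 matches: no KubeVirt pods were created
```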
@serenamarie125 FYI
Provide a playbook provision.yml that runs the kubevirt-ansible roles. This will allow us to remove the hardcoded path to the kubevirt.yml playbook in the Dockerfile. https://github.com/ansibleplaybookbundle/kubevirt-apb/blob/master/Dockerfile#L58
provision.yml
- import_playbook: /opt/ansible/roles/kubevirt.yml
The virtualmachines-apb currently requires running as cluster-admin. We should be able to add RBAC rules that the kubevirt-apb creates for the virtualmachines-apb, so that it can run as a non-cluster-admin.
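A hedged sketch of the kind of RBAC the kubevirt-apb could create for this; the role name, resource list, and verbs below are assumptions for illustration, not the actual manifests:

```yaml
# Hypothetical ClusterRole granting just enough access for virtualmachines-apb
# to manage VMs without being cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: virtualmachines-apb   # illustrative name
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachines", "virtualmachineinstances"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "create", "delete"]
```

The kubevirt-apb provision step could apply this role and bind it to the requesting user, removing the cluster-admin requirement.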