kubevirt / demo
Easy to use KubeVirt demo based on minikube.
License: Apache License 2.0
Currently, to install Kubernetes 1.8 on minikube you have to use the --bootstrapper=kubeadm option.
Due to a bug, this mangles the minikube status output:
[root@redhat demo]# minikube status
minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.42.191
This causes run-demo to refuse to start, claiming "minikube is installed but not running".
If you comment out the check for a running minikube, the demo works fine.
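A sketch of why the check misfires: the buggy status output reports the cluster as Stopped, so any guard that greps for a running cluster fails even though the cluster works. (The variable and the check's shape here are illustrative, not the actual run-demo code.)

```shell
# Captured from the buggy `minikube status` output above; the cluster line is wrong.
status_output='minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.42.191'

# A naive guard like the one in run-demo would bail out here:
if printf '%s\n' "$status_output" | grep -q '^cluster: Running'; then
  result="check passes"
else
  result="check fails despite a working cluster"
fi
echo "$result"
```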
Since the virt-api service was removed in v0.2.0, the demo information should also be updated to match the latest release.
The demos currently say to use version v0.8.0. Current is v0.9.5, with v0.10.0 in development.
Please update the demo pages to reflect current versions.
For the demo it is sometimes necessary to have an up-to-date client.
We could provide the client in a container or Flatpak for easier consumption.
GKE and AKS are managed Kubernetes services. It should be fairly easy to support them, provided a user has access credentials.
Could you please add a guide on how to specify the OS image (e.g., Fedora or Ubuntu) and where to place the OS image file (e.g., a .img file) on a host when creating a VM?
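For context, the OS image for a VM is selected through a volume source in the VM spec; a minimal sketch using a containerDisk (the image name below is the demo image that appears elsewhere in these issues, used here as an assumption):

```yaml
# Illustrative fragment of a VirtualMachine template spec (not a full manifest)
volumes:
- name: containerdisk
  containerDisk:
    # assumed demo image; replace with your own, e.g. one built from a local .img file
    image: kubevirt/fedora-cloud-container-disk-demo:latest
```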
We should add automation in order to ensure that the demo continues to work with releases of KubeVirt.
$ kubectl apply -f https://raw.githubusercontent.com/kubevirt/demo/master/manifests/vm.yaml
offlinevirtualmachine.kubevirt.io "testvm" created
The VirtualMachinePreset "small" is invalid: []: Invalid value: map[string]interface {}{"metadata":map[string]interface {}{"uid":"63604500-4792-11e8-80c4-8216f171324b", "selfLink":"", "clusterName":"", "annotations":map[string]interface {}{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"kubevirt.io/v1alpha1\",\"kind\":\"VirtualMachinePreset\",\"metadata\":{\"annotations\":{},\"name\":\"small\",\"namespace\":\"myproject\"},\"spec\":{\"domain\":{\"resources\":{\"requests\":{\"memory\":\"64M\"}}},\"selector\":{\"matchLabels\":{\"kubevirt.io/size\":\"small\"}}}}\n"}, "name":"small", "namespace":"myproject", "creationTimestamp":"2018-04-24T07:37:49Z"}, "spec":map[string]interface {}{"domain":map[string]interface {}{"resources":map[string]interface {}{"requests":map[string]interface {}{"memory":"64M"}}}, "selector":map[string]interface {}{"matchLabels":map[string]interface {}{"kubevirt.io/size":"small"}}}, "apiVersion":"kubevirt.io/v1alpha1", "kind":"VirtualMachinePreset"}: validation failure list: spec.domain.devices in body is required
Apparently, adding an empty spec.domain.devices allowed me to move forward.
cc @fabiand
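A sketch of the workaround, reconstructed from the preset embedded in the error message above, with an empty spec.domain.devices added:

```yaml
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachinePreset
metadata:
  name: small
spec:
  selector:
    matchLabels:
      kubevirt.io/size: small
  domain:
    devices: {}  # empty map satisfies the "spec.domain.devices in body is required" validation
    resources:
      requests:
        memory: 64M
```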
I finally set up minikube running three VMs, and it works.
Based on the setup, I assume the network topology looks like this.
But I didn't find any network card inside the VMs when I run virtctl
on my localhost:
➜ manifests git:(master) ✗ virtctl console testvm123 --kubeconfig=/home/shiywang/.kube/config
Escape sequence is ^]
Welcome to Alpine Linux 3.5
Kernel 4.4.45-0-virtgrsec on an x86_64 (/dev/ttyS0)
localhost login: root
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
localhost:~# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
localhost:~#
➜ manifests git:(master) ✗ oc get vms
NAME AGE
testvm 2d
testvm123 39m
testvm12345 38m
VM XML dump:
<domain type='qemu' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>default_testvm123</name>
<uuid>38fc0b14-4472-4ef8-8c3d-5fe8909f3c6b</uuid>
<metadata>
<kubevirt xmlns="http://kubevirt.io">
<uid>3d2cc36e-0a3a-11e8-98f4-2820fe22abcd</uid>
<graceperiod>
<deletionGracePeriodSeconds>0</deletionGracePeriodSeconds>
</graceperiod>
</kubevirt>
</metadata>
<memory unit='KiB'>63488</memory>
<currentMemory unit='KiB'>62500</currentMemory>
<vcpu placement='static'>1</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='uuid'>38fc0b14-4472-4ef8-8c3d-5fe8909f3c6b</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/local/bin/qemu-system-x86_64</emulator>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source protocol='iscsi' name='iqn.2017-01.io.kubevirt:sn.42/2'>
<host name='10.105.187.139' port='3260'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<interface type='direct'>
<mac address='52:54:00:e7:ea:dd'/>
<source network='default' dev='eth0' mode='bridge'/>
<target dev='macvtap0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='unix'>
<source mode='bind' path='/var/run/kubevirt-private/default/testvm123/virt-serial0'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='unix'>
<source mode='bind' path='/var/run/kubevirt-private/default/testvm123/virt-serial0'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<graphics type='vnc' socket='/var/run/kubevirt-private/default/testvm123/virt-vnc'>
<listen type='socket' socket='/var/run/kubevirt-private/default/testvm123/virt-vnc'/>
</graphics>
<video>
<model type='vga' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
<qemu:commandline>
<qemu:env name='SLICE' value='/kubepods/besteffort/pod3d2e0c59-0a3a-11e8-98f4-2820fe22abcd/6de8ba609d7e8e5dfcc6fc06bf40f261c5e6cd0cdcf0dd395f171c731f6fd122'/>
<qemu:env name='CONTROLLERS' value='perf_event,freezer,memory,cpuset,net_cls'/>
</qemu:commandline>
</domain>
Why is there no NIC in the VMs?
After the initial setup I always see
[FAILED] Failed to start Docker Storage Setup.
in the boot log. However, it does not seem to be critical, since everything appears to work fine.
The link to the User Guide in the README seems to be broken:
https://kubevirt.io/user-guide/docs/latest/welcome/index.html
The kubevirt.io try-me for minikube/minishift has oc commands that require use of the system:admin user. There is no note to that effect in the text.
http://kubevirt.io/get_kubevirt/
Deploying KubeVirt on OpenShift Origin
On OpenShift Origin, the following SCCs need to be added prior to deploying kubevirt.yaml:
$ oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:kubevirt-privileged
$ oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:kubevirt-controller
$ oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:kubevirt-apiserver
Somehow, connecting to consoles does not work with virtctl, minikube, and v0.2.0.
Following the steps in the README fails.
The env:
[root@localhost demo]# cat /etc/redhat-release
Fedora release 25 (Twenty Five)
[root@localhost demo]# dnf install -y make qemu-system-x86 libguestfs-tools-c expect
Last metadata expiration check: 16:15:44 ago on Mon Feb 13 00:31:53 2017.
Package make-1:4.1-5.fc24.x86_64 is already installed, skipping.
Package qemu-system-x86-2:2.7.1-2.fc25.x86_64 is already installed, skipping.
Package libguestfs-tools-c-1:1.34.4-2.fc25.x86_64 is already installed, skipping.
Package expect-5.45-22.fc24.x86_64 is already installed, skipping.
Dependencies resolved.
Nothing to do.
Complete!
Running make build, I got the virt-resize error below; any advice on how to solve it? Thanks!
[root@localhost demo]# make build
virt-builder centos-7.3
--no-network
--smp 4 --memsize 2048
--output kubevirt-demo.img
--format qcow2
--size 20G
--hostname kubevirt-demo
--upload data/bootstrap-kubevirt.sh:/
--root-password password:
--run-command "echo -e Login as 'root' to proceed.\\n >> /etc/issue"
--firstboot-command "GIT_TAG=v0.0.1-alpha.2 bash -x /bootstrap-kubevirt.sh ; init 0 ;"
[ 33.8] Downloading: http://libguestfs.org/download/builder/centos-7.3.xz
[ 35.3] Planning how to build this image
[ 35.3] Uncompressing
[ 57.7] Resizing (using virt-resize) to expand the disk to 20.0G
virt-resize: error: libguestfs error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: Cannot access storage file
'/root/demo/kubevirt-demo.img' (as uid:107, gid:107): Permission denied
[code=38 int1=13]

If reporting bugs, run virt-resize with debugging enabled and include the
complete output:
virt-resize -v -x [...]
Makefile:9: recipe for target 'build' failed
make: *** [build] Error 1
[root@localhost demo]#
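The error text itself suggests the usual workaround: bypass libvirt with the direct libguestfs backend, which avoids libvirt's uid 107 permission check on the image path. A sketch (re-running the build afterwards):

```shell
# Tell libguestfs to run qemu directly instead of going through libvirt.
export LIBGUESTFS_BACKEND=direct
echo "$LIBGUESTFS_BACKEND"
# then re-run the failing step:
# make build
```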
Today we "compile" the stock manifests into something useful for minikube.
We need to do this until the stock manifests are generic.
In the meantime, this functionality should be factored out so it can be reused in other setups as well.
We are using the demo vm.yaml, but it fails with the error below. KubeVirt seems to be up and working.
[centos@ip-172-31-43-99 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubevirt/demo/master/manifests/vm.yaml
The "" is invalid: : spec.template.spec.volumes.containerDisk in body is a forbidden property
[centos@ip-172-31-43-99 kubevirt-ansible]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-5gpqn 1/1 Running 0 25m
kube-system coredns-86c58d9df4-j7sqz 1/1 Running 0 25m
kube-system etcd-ip-172-31-43-99.eu-central-1.compute.internal 1/1 Running 0 24m
kube-system kube-apiserver-ip-172-31-43-99.eu-central-1.compute.internal 1/1 Running 0 23m
kube-system kube-cni-plugins-amd64-hv2zc 1/1 Running 0 24m
kube-system kube-controller-manager-ip-172-31-43-99.eu-central-1.compute.internal 1/1 Running 0 24m
kube-system kube-multus-amd64-2ztrz 1/1 Running 0 24m
kube-system kube-ovs-cni-plugin-amd64-pqdvw 1/1 Running 0 23m
kube-system kube-proxy-mggdz 1/1 Running 0 25m
kube-system kube-scheduler-ip-172-31-43-99.eu-central-1.compute.internal 1/1 Running 0 24m
kube-system virt-api-66d49464-7bg9s 1/1 Running 0 23m
kube-system virt-api-66d49464-c9x5z 1/1 Running 0 23m
kube-system virt-controller-7fb89fd44c-j2mlc 1/1 Running 0 23m
kube-system virt-controller-7fb89fd44c-xnhqm 1/1 Running 0 23m
kube-system virt-handler-k6bb4 1/1 Running 0 23m
kube-system weave-net-tdxwp 2/2 Running 0 25m
Review repo for offensive language
git grep -I -E 'master|slave|whitelist|blacklist' -- ':!vendor' ':!cluster-up' ':!cluster-sync' ':!*generated*'
Give the user an impression of how a working setup should look by adding example output of kubectl get pods --all-namespaces
after deployment.
Verify nested virtualization is enabled on the machine minikube is being installed on:
$ cat /sys/module/kvm_intel/parameters/nested
Y
It says N on my machine... how about not assuming I know the ins and outs of virtualization and giving me a hint if it's not enabled, e.g. https://docs.fedoraproject.org/quick-docs/en-US/using-nested-virtualization-in-kvm.html
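If the file prints N, nested virtualization can typically be enabled via a modprobe option (a sketch for Intel hosts, following the linked Fedora guide; reload the kvm_intel module or reboot afterwards):

```
# /etc/modprobe.d/kvm.conf
options kvm_intel nested=1
```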
VERSION=v0.15.0 does not seem to be the latest... maybe apply a similar patch as in kubevirt/user-guide#242?
I want to start a VM using the example vm.yaml https://github.com/kubevirt/demo/blob/master/manifests/vm.yaml. When I start the VM, the VMI reports the warning "failed to find a sourceFile in containerDisk rootfs: Failed to check /proc/1/root/var/lib/containers/storage/devicemapper/mnt/8f42be56572b3f6f79751c36b4b4f99d8335f36984d54504bac650b23a2b247a/disk for disks: open /proc/1/root/var/lib/containers/storage/devicemapper/mnt/8f42be56572b3f6f79751c36b4b4f99d8335f36984d54504bac650b23a2b247a/disk: no such file or directory".
What happened:
I started a VMI, but the message above appeared.
My VMI example is below, from here:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: test
  name: test
spec:
  running: true
  template:
    metadata:
      name: test
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-70-22
        #livemigrate: "true"
      domain:
        resources:
          requests:
            memory: 1024M
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
              cache: writethrough
          - disk:
              bus: virtio
            name: cloudinitdisk
          # - disk:
          #     bus: virtio
          #   name: mydisk
          interfaces:
          - name: default
            masquerade: {}
      networks:
      - name: default
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          #image: kubevirt/fedora-cloud-container-disk-demo:latest
          image: 10.20.25.222:5000/centos8.4_run:jmnd_dev
          imagePullPolicy: IfNotPresent
      #- name: mydisk
      #  persistentVolumeClaim:
      #    claimName: mypvc
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
Environment:
KubeVirt version (use virtctl version):
Client Version: version.Info{GitVersion:"v0.56.0", GitCommit:"b1dbd1bccc882282690331ca84e97ddf83555611", GitTreeState:"clean", BuildDate:"2022-08-18T20:19:27Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v0.57.1", GitCommit:"4c08bf090387743b2e2c8037c941102f11e6b031", GitTreeState:"clean", BuildDate:"2022-09-14T14:05:11Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"b631974d68ac5045e076c86a5c66fba6f128dc72", GitTreeState:"clean", BuildDate:"2022-01-19T17:51:12Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"b631974d68ac5045e076c86a5c66fba6f128dc72", GitTreeState:"clean", BuildDate:"2022-01-19T17:45:53Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
VM or VMI specifications:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Kernel (e.g. uname -a):
Linux k8s-69-22 5.4.0-126-generic #142~18.04.1-Ubuntu SMP Thu Sep 1 16:25:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Others:
If you need additional information, please leave a message.
In the main README there's a link named "user guide" that links to http://docs.kubevirt.io/.
Unfortunately, this page currently appears to be a parking page for a German hoster, selfhost.de.
Also, it'd be nice if the link were https; I actually tried https://docs.kubevirt.io/, but got nothing at all.
Hi,
I was referring to the link https://github.com/kubevirt/kubevirt/tree/master/cmd/registry-disk-v1alpha for deploying my qcow2 image.
I followed the steps provided in the link, but was not successful.
So can you please guide me on how to deploy a virtual machine using our qcow2 image?
Any guide or link that helps achieve this would be great.
Thanks,
Anand
After running:
$ kubectl patch offlinevirtualmachine testvm --type merge -p '{"spec":{"running":true}}'
I get the $SUBJECT error.
C:\minikube> kubectl get --all-namespaces pods
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-minikube 1/1 Running 0 24m
kube-system kube-dns-54cccfbdf8-grdkr 3/3 Running 0 24m
kube-system kubernetes-dashboard-77d8b98585-66pxb 1/1 Running 0 24m
kube-system storage-provisioner 1/1 Running 0 24m
kube-system virt-controller-5c74754ddd-5p4cx 1/1 Running 0 15m
kube-system virt-controller-5c74754ddd-tbhkf 0/1 Running 0 15m
kube-system virt-handler-c887w 1/1 Running 0 15m
Shouldn't virt-controller-5c74754ddd-tbhkf also be running? I am following this guide.
When trying to launch a VM on minishift 1.18 and kubevirt 0.7.0-alpha.1:
…
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-06-18T09:53:32Z
    message: 'failed to create virtual machine pod: pods "virt-launcher-testvm-" is
      forbidden: unable to validate against any security context constraint: [spec.volumes[0]:
      Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.securityContext.runAsUser:
      Invalid value: 0: must be in the ranges: [1000070000, 1000079999] spec.containers[1].securityContext.securityContext.runAsUser:
      Invalid value: 0: must be in the ranges: [1000070000, 1000079999] spec.containers[1].securityContext.privileged:
      Invalid value: true: Privileged containers are not allowed]'
    reason: FailedCreate
    status: "False"
    type: Synchronized
  phase: Pending
Hi,
We want to add some ARM64-related descriptions, but there is no CONTRIBUTING.md or LICENSE here, so we are not able to contribute code. Can you add these files?
Example:
https://github.com/hustcat/sriov-cni/blob/master/LICENSE
https://github.com/hustcat/sriov-cni/blob/master/CONTRIBUTING.md
Guideline:
https://help.github.com/en/github/building-a-strong-community/setting-guidelines-for-repository-contributors
https://help.github.com/en/github/building-a-strong-community/adding-a-license-to-a-repository
Only "get pod" has a "status" field that can show the "Running" stage. Maybe change to:
check "VM is running" "kubectl get pod | grep iscsi-demo-target | grep -q Running"
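For illustration, a hypothetical shape of such a check helper (the name and signature are assumed here, not the demo's actual code):

```shell
# check DESCRIPTION COMMAND: run COMMAND and report DESCRIPTION on success/failure.
check() {
  local description="$1" command="$2"
  if eval "$command"; then
    echo "ok: $description"
  else
    echo "FAIL: $description"
    return 1
  fi
}

# Usage mirrors the suggested call; a trivial command stands in for kubectl here:
check "VM is running (placeholder command)" "true"   # prints: ok: VM is running (placeholder command)
```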
The demo currently does not work until the following two issues are resolved:
/kind enhancement
What happened:
The current name of this repo's main branch, master, is not respectful to some contributors and/or users of KubeVirt.
What you expected to happen:
Please change the main branch name to main.
Anything else we need to know?:
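For reference, the local half of such a rename is a small git operation (a sketch; the remote default branch is switched in the GitHub repository settings, and any CI references to master would need updating too):

```shell
# Create a throwaway repo to demonstrate renaming the current branch to main.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git branch -m main          # rename whatever the current branch is called to main
git branch --show-current   # prints: main
```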
Minikube 0.26.0 requires systemd, which is not available in the Travis CI environment.
What are your feelings on archiving this repo? Or replacing everything with a README that points to the Labs served by https://kubevirt.io/labs?
➜ manifests git:(master) ✗ kubectl describe pod iscsi-demo-target-tgtd-5674b4f6fd-kztdk
Name: iscsi-demo-target-tgtd-5674b4f6fd-kztdk
Namespace: default
Node: minikube/192.168.39.192
Start Time: Wed, 31 Jan 2018 17:58:31 +0800
Labels: app=iscsi-demo-target
name=iscsi-demo-target-tgtd
pod-template-hash=1230609298
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/iscsi-demo-target-tgtd-5674b4f6fd
Containers:
target:
Container ID:
Image: kubevirt/iscsi-demo-target-tgtd:v0.0.4
Image ID:
Port: 3260/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cj858 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-cj858:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cj858
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 default-scheduler Normal Scheduled Successfully assigned iscsi-demo-target-tgtd-5674b4f6fd-kztdk to minikube
2m 2m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-cj858"
2m 4s 6 kubelet, minikube Warning FailedCreatePodSandBox Failed create pod sandbox.
OS: CentOS 7.4
minikube version
minikube version: v0.25.0
minikube start \
--vm-driver kvm2 \
--network-plugin cni
The kubevirt.io demo for minikube has a section describing the configuration of nested virt.
http://kubevirt.io/get_kubevirt/ "Appendix: deploying Kube"
If nested virt is not set, the demo refers to this link:
https://docs.fedoraproject.org/quick-docs/en-US/using-nested-virtualization-in-kvm.html
That page in turn indicates that it has moved, so the reference should point to:
https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/index.html
After make build and ./run-demo.sh, the next command fails:
[root@kubevirt-demo ~]# kubectl create -f /vm.json
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Other things fail as well, e.g. virsh list --all; checking libvirt, it seems all configuration is lost (everything is left at defaults).
Using kubevirt v0.7.0-alpha.1, I've pushed the Fedora 28 cloud image to a PVC f28cloud, and changed the ownership of the disk.img to 107:107.
I also added SCCs:
oc adm policy add-scc-to-user privileged -n kube-system -z kubevirt-privileged
oc adm policy add-scc-to-user privileged -n kube-system -z kubevirt-controller
oc adm policy add-scc-to-user privileged -n kube-system -z kubevirt-apiserver
# Also, after the error below, I tried:
oc adm policy add-scc-to-user privileged -z default
Then I created a VM with:
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: f28cloud
spec:
  running: false
  selector:
    matchLabels:
      guest: f28cloud
  template:
    metadata:
      labels:
        guest: f28cloud
        kubevirt.io/size: medium
    spec:
      domain:
        devices:
          disks:
          - name: rootfs
            volumeName: f28cloud
            disk:
              bus: virtio
          - name: cloudinitdisk
            volumeName: cloudinitvolume
            disk:
              bus: virtio
      volumes:
      - name: f28cloud
        persistentVolumeClaim:
          claimName: f28cloud
      - name: cloudinitvolume
        cloudInitNoCloud:
          userDataBase64: SGkuXG4=
---
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstancePreset
metadata:
  name: medium
spec:
  selector:
    matchLabels:
      kubevirt.io/size: medium
  domain:
    resources:
      requests:
        memory: 1G
    devices: {}
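As an aside, the userDataBase64 value is just base64-encoded cloud-init data. Note that SGkuXG4= decodes to the literal five characters Hi.\n (a backslash followed by n, not a newline):

```shell
# Encode the literal string Hi.\n (printf '\\' emits a single backslash).
encoded="$(printf 'Hi.\\n' | base64)"
echo "$encoded"   # SGkuXG4=
```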
When launching the VM on minishift 1.18 I get:
$ oc describe vmi f28cloud
Name: f28cloud
Namespace: myproject
Labels: guest=f28cloud
kubevirt.io/nodeName=localhost
kubevirt.io/size=medium
Annotations: presets.virtualmachines.kubevirt.io/presets-applied=kubevirt.io/v1alpha2
virtualmachinepreset.kubevirt.io/medium=kubevirt.io/v1alpha2
API Version: kubevirt.io/v1alpha2
Kind: VirtualMachineInstance
Metadata:
Cluster Name:
Creation Timestamp: 2018-06-18T22:18:42Z
Deletion Grace Period Seconds: 0
Deletion Timestamp: 2018-06-18T22:18:57Z
Finalizers:
foregroundDeleteVirtualMachine
Generate Name: f28cloud
Generation: 0
Owner References:
API Version: kubevirt.io/v1alpha2
Block Owner Deletion: true
Controller: true
Kind: VirtualMachine
Name: f28cloud
UID: 082e0af0-7331-11e8-87ce-d6808e47f6cd
Resource Version: 30978
Self Link: /apis/kubevirt.io/v1alpha2/namespaces/myproject/virtualmachineinstances/f28cloud
UID: 8f2441ba-7345-11e8-9f7e-ea3e142a42df
Spec:
Domain:
Devices:
Disks:
Disk:
Bus: virtio
Name: rootfs
Volume Name: f28cloud
Disk:
Bus: virtio
Name: cloudinitdisk
Volume Name: cloudinitvolume
Interfaces:
Bridge:
Name: default
Features:
Acpi:
Enabled: true
Firmware:
Uuid: 3ef8dbb0-2876-554f-87df-f5684818bbed
Machine:
Type: q35
Resources:
Requests:
Memory: 1G
Networks:
Name: default
Pod:
Volumes:
Name: f28cloud
Persistent Volume Claim:
Claim Name: f28cloud
Cloud Init No Cloud:
User Data Base 64: SGkuXG4=
Name: cloudinitvolume
Status:
Interfaces:
Ip Address: 172.17.0.9
Node Name: localhost
Phase: Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 15s virtualmachine-controller Created virtual machine pod virt-launcher-f28cloud-92jlx
Normal SuccessfulHandOver 2s virtualmachine-controller Pod owner ship transferred to the node virt-launcher-f28cloud-92jlx
Warning SyncFailed 1s virt-handler, localhost server error. command Launcher.Sync failed: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: 2018-06-18T22:18:56.053734Z qemu-system-x86_64: -drive file=/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img,format=raw,if=none,id=drive-virtio-disk0: Could not open '/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img': Permission denied')
Normal Started 1s virt-handler, localhost VirtualMachineInstance started.
Warning SyncFailed 1s virt-handler, localhost server error. command Launcher.Sync failed: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: 2018-06-18T22:18:56.348489Z qemu-system-x86_64: -drive file=/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img,format=raw,if=none,id=drive-virtio-disk0: Could not open '/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img': Permission denied')
Warning SyncFailed 1s virt-handler, localhost server error. command Launcher.Sync failed: virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2018-06-18T22:18:56.620449Z qemu-system-x86_64: -drive file=/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img,format=raw,if=none,id=drive-virtio-disk0: Could not open '/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img': Permission denied')
Warning SyncFailed 1s virt-handler, localhost server error. command Launcher.Sync failed: virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: ot-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 -device nec-usb-xhci,id=usb,bus=pci.2,addr=0x0 -drive file=/var/run/kubevirt-private/vmi-disks/f28cloud/disk.img,format=raw,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.3,addr=0x0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/cloud-init-data/myproject/f28cloud/noCloud.iso,format=raw,if=none,id=drive-virtio-disk1 -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=23,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:46:42:b7,bus=pci.1,addr=0x0 -chardev socket,id=charserial0,path=/var/run/kubevirt-private/myproject/f28cloud/virt-serial0,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -vnc vnc=unix:/var/run/kubevirt-private/myproject/f28cloud/virt-vnc -devic')
Warning SyncFailed 0s virt-handler, localhost server error. command Launcher.Sync failed: virError(Code=38, Domain=10, Message='failed to connect to monitor socket: No such process')
Warning SyncFailed 0s virt-handler, localhost unexpected EOF
Warning Stopped 0s virt-handler, localhost The VirtualMachineInstance crashed.
Normal SuccessfulDelete 0s virtualmachine-controller Deleted virtual machine pod virt-launcher-f28cloud-92jlx
The image itself is mounted successfully.
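The repeated SyncFailed events above are qemu (running as uid/gid 107 inside virt-launcher) failing to open the disk image. A small sketch of the permission side of this, using a throwaway file (paths and modes are illustrative):

```shell
tmp="$(mktemp -d)"
touch "$tmp/disk.img"
chmod 600 "$tmp/disk.img"        # readable only by the owner: qemu as uid 107 would get EACCES
chmod 664 "$tmp/disk.img"        # group/world-readable; alternatively chown 107:107 makes qemu the owner
stat -c '%a' "$tmp/disk.img"     # prints: 664
```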
We should show that KubeVirt can run on OpenShift Origin as well.