
QEMU README

QEMU is a generic and open source machine & userspace emulator and virtualizer.

QEMU is capable of emulating a complete machine in software without any need for hardware virtualization support. By using dynamic translation, it achieves very good performance. QEMU can also integrate with the Xen and KVM hypervisors to provide emulated hardware while allowing the hypervisor to manage the CPU. With hypervisor support, QEMU can achieve near native performance for CPUs. When QEMU emulates CPUs directly it is capable of running operating systems made for one machine (e.g. an ARMv7 board) on a different machine (e.g. an x86_64 PC board).

QEMU is also capable of providing userspace API virtualization for Linux and BSD kernel interfaces. This allows binaries compiled against one architecture ABI (e.g. the Linux PPC64 ABI) to be run on a host using a different architecture ABI (e.g. the Linux x86_64 ABI). This does not involve any hardware emulation, simply CPU and syscall emulation.

QEMU aims to fit into a variety of use cases. It can be invoked directly by users wishing to have full control over its behaviour and settings. It also aims to facilitate integration into higher level management layers, by providing a stable command line interface and monitor API. It is commonly invoked indirectly via the libvirt library when using open source applications such as oVirt, OpenStack and virt-manager.

QEMU as a whole is released under the GNU General Public License, version 2. For full licensing details, consult the LICENSE file.

Documentation

Documentation can be found hosted online at https://www.qemu.org/documentation/. The documentation for the current development version that is available at https://www.qemu.org/docs/master/ is generated from the docs/ folder in the source tree, and is built by Sphinx.

Building

QEMU is multi-platform software intended to be buildable on all modern Linux platforms, OS-X, Win32 (via the Mingw64 toolchain) and a variety of other UNIX targets. The simple steps to build QEMU are:

mkdir build
cd build
../configure
make

Additional information can also be found online via the QEMU website.

Submitting patches

The QEMU source code is maintained under the Git version control system.

git clone https://gitlab.com/qemu-project/qemu.git

When submitting patches, one common approach is to use 'git format-patch' and/or 'git send-email' to format & send the mail to the [email protected] mailing list. All patches submitted must contain a 'Signed-off-by' line from the author. Patches should follow the guidelines set out in the style section of the Developers Guide.
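Since every patch must carry a 'Signed-off-by' line, a quick pre-send check can catch omissions before 'git send-email' runs. A minimal sketch (the patch file below is synthetic, created only for illustration; in a real workflow you would run the same grep over the files produced by 'git format-patch'):

```shell
# Minimal sketch: verify a patch file carries a 'Signed-off-by' line before
# mailing it. The sample patch here is synthetic, for illustration only.
cat > /tmp/0001-example.patch <<'EOF'
Subject: [PATCH] example: illustrate the sign-off check

Signed-off-by: Jane Developer <jane@example.com>
---
EOF

for p in /tmp/0001-example.patch; do
    if grep -q '^Signed-off-by: ' "$p"; then
        echo "ok: $p"
    else
        echo "missing Signed-off-by in $p" >&2
    fi
done
```

The same loop generalizes to `patches/*.patch` when checking a whole series.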

Additional information on submitting patches can be found online via the QEMU website.

The QEMU website is also maintained under source control.

git clone https://gitlab.com/qemu-project/qemu-web.git

A 'git-publish' utility was created to make the above process less cumbersome, and is highly recommended for making regular contributions, or even just for sending consecutive patch series revisions. It requires a working 'git send-email' setup, and by default does not automate everything, so you may want to go through the above steps manually once.

For installation instructions, please go to https://github.com/stefanha/git-publish.

The workflow with 'git-publish' is:

$ git checkout master -b my-feature
$ # work on new commits, add your 'Signed-off-by' lines to each
$ git publish

Your patch series will be sent and tagged as my-feature-v1 if you need to refer back to it in the future.

Sending v2:

$ git checkout my-feature # same topic branch
$ # making changes to the commits (using 'git rebase', for example)
$ git publish

Your patch series will be sent with a 'v2' tag in the subject and the git tip will be tagged as my-feature-v2.
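To see how these per-revision tags let you refer back to earlier versions of a series, here is a self-contained sketch: git-publish creates the tags automatically, but the throwaway repository below fakes them by hand purely to demonstrate the naming convention (repository path and commit messages are illustrative):

```shell
# Sketch: a synthetic repo showing the 'my-feature-vN' tag convention.
# git-publish would create these tags for you; we create them manually
# here only to illustrate how revisions can be listed and compared.
repo=/tmp/demo-feature-repo
rm -rf "$repo"
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feature work"
git tag my-feature-v1                       # state sent as v1
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feature work, reworked after review"
git tag my-feature-v2                       # state sent as v2
git tag -l 'my-feature-v*'                  # lists both revisions
```

With real patches, `git diff my-feature-v1 my-feature-v2` then shows exactly what changed between the two submissions.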

Bug reporting

The QEMU project uses GitLab issues to track bugs. Bugs found when running code built from QEMU git or upstream released sources should be reported via the project's GitLab issue tracker: https://gitlab.com/qemu-project/qemu/-/issues

If using QEMU via an operating system vendor pre-built binary package, it is preferable to report bugs to the vendor's own bug tracker first. If the bug is also known to affect the latest upstream code, it can also be reported via GitLab.

For additional information on bug reporting, consult the QEMU website.

ChangeLog

For version history and release notes, please visit https://wiki.qemu.org/ChangeLog/ or look at the git history for more detailed information.

Contact

The QEMU community can be contacted in a number of ways, with the two main methods being email and IRC.

Information on additional methods of contacting the community can be found online via the QEMU website.


Issues

QEMU crash when executing device_add and device_del alternately

When executing device_add and device_del alternately, QEMU crashes:

[root@localhost coredump]# virsh qemu-monitor-command testvm --hmp 'device_add vfio-user-pci,socket=/var/run/cntrl,id=testdisk0'

[root@localhost coredump]# virsh qemu-monitor-command testvm --hmp 'device_del testdisk0'

[root@localhost coredump]# virsh qemu-monitor-command testvm --hmp 'device_add vfio-user-pci,socket=/var/run/cntrl,id=testdisk0'

[root@localhost coredump]# virsh qemu-monitor-command testvm --hmp 'device_del testdisk0'

[root@localhost coredump]# virsh qemu-monitor-command testvm --hmp 'device_add vfio-user-pci,socket=/var/run/cntrl,id=testdisk0'
error: Unable to read from monitor: Connection reset by peer

The coredump stack is as follows:
Thread 2 (Thread 0x7f587b15df40 (LWP 2700247)):
#0 0x00007f587c57edd2 in futex_abstimed_wait_cancelable (private=, abstime=0x7ffdf875b130, expected=0, futex_word=0x559f5d6da2f0) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 __pthread_cond_wait_common (abstime=0x7ffdf875b130, mutex=0x559f5d006f20, cond=0x559f5d6da2c8) at pthread_cond_wait.c:539
#2 __pthread_cond_timedwait (cond=cond@entry=0x559f5d6da2c8, mutex=mutex@entry=0x559f5d006f20, abstime=abstime@entry=0x7ffdf875b130) at pthread_cond_wait.c:667
#3 0x0000559f58d86e41 in qemu_cond_timedwait_impl (cond=0x559f5d6da2c8, mutex=0x559f5d006f20, ms=1000, file=0x559f58e3e3d0 "/root/qemu-5.0/builddir/build/BUILD/qemu-5.0.0.4/hw/vfio/user.c", line=721) at util/qemu-thread-posix.c:188
#4 0x0000559f58a781b7 in vfio_user_send_wait (proxy=0x559f5d006ea0, hdr=0x559f5bf69c50, fds=, rsize=, nobql=) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/user.c:721
#5 0x0000559f58a785f8 in vfio_user_set_irqs (irq=0x7ffdf875b250, proxy=0x559f5d006ea0) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/user.c:1380
#6 vfio_user_io_set_irqs (vbasedev=, irqs=0x7ffdf875b250) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/user.c:1616
#7 0x0000559f58a65c2f in vfio_unmask_single_irqindex (vbasedev=vbasedev@entry=0x559f5c0822f0, index=index@entry=0) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/common.c:89
#8 0x0000559f58a6b676 in vfio_intx_disable_kvm (vdev=vdev@entry=0x559f5c081a00) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/pci.c:225
#9 0x0000559f58a6bcea in vfio_intx_disable (vdev=0x559f5c081a00) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/pci.c:339
#10 vfio_disable_interrupts (vdev=vdev@entry=0x559f5c081a00) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/pci.c:1249
#11 0x0000559f58a6f519 in vfio_pci_pre_reset (vdev=vdev@entry=0x559f5c081a00) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/pci.c:2161
#12 0x0000559f58a7000b in vfio_user_pci_reset (dev=) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/hw/vfio/pci.c:3752
#13 0x0000559f58b63b10 in resettable_phase_hold (obj=obj@entry=0x559f5c081a00, opaque=opaque@entry=0x0, type=type@entry=RESET_TYPE_COLD) at hw/core/resettable.c:182
#14 0x0000559f58b64160 in resettable_assert_reset (obj=obj@entry=0x559f5c081a00, type=type@entry=RESET_TYPE_COLD) at hw/core/resettable.c:60
#15 0x0000559f58b5fb1d in device_set_realized (obj=, value=, errp=0x7ffdf875b548) at hw/core/qdev.c:935
#16 0x0000559f58ca18d7 in property_set_bool (obj=0x559f5c081a00, v=, name=, opaque=0x559f5ba6ed70, errp=0x7ffdf875b548) at qom/object.c:2238
#17 0x0000559f58ca651f in object_property_set_qobject (obj=obj@entry=0x559f5c081a00, value=value@entry=0x559f5c991980, name=name@entry=0x559f58e75518 "realized", errp=errp@entry=0x7ffdf875b548) at qom/qom-qobject.c:26
#18 0x0000559f58ca3cb5 in object_property_set_bool (obj=0x559f5c081a00, value=, name=0x559f58e75518 "realized", errp=0x7ffdf875b548) at qom/object.c:1390
#19 0x0000559f58b203d6 in qdev_device_add (opts=opts@entry=0x559f5c7cb3b0, errp=errp@entry=0x7ffdf875b620) at qdev-monitor.c:680
#20 0x0000559f58b20753 in qmp_device_add (qdict=, ret_data=ret_data@entry=0x0, errp=errp@entry=0x7ffdf875b650) at qdev-monitor.c:805
#21 0x0000559f58b20a2d in hmp_device_add (mon=0x7ffdf875b6e0, qdict=) at qdev-monitor.c:905
#22 0x0000559f58c472a8 in handle_hmp_command (mon=mon@entry=0x7ffdf875b6e0, cmdline=, cmdline@entry=0x559f5bbe5600 "device_add vfio-user-pci,socket=/var/run/vfiouser-disk/vmuuid_test-d810e767-5426-41f5-8229-bdcb0a43a840/cntrl,id=testdisk0") at monitor/hmp.c:1082
#23 0x0000559f58aadf92 in qmp_human_monitor_command (command_line=0x559f5bbe5600 "device_add vfio-user-pci,socket=/var/run/vfiouser-disk/vmuuid_test-d810e767-5426-41f5-8229-bdcb0a43a840/cntrl,id=testdisk0", has_cpu_index=, cpu_index=0, errp=errp@entry=0x7ffdf875b7f8) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/monitor/misc.c:142
#24 0x0000559f58c6ccb9 in qmp_marshal_human_monitor_command (args=, ret=0x7ffdf875b890, errp=0x7ffdf875b888) at qapi/qapi-commands-misc.c:783
#25 0x0000559f58d37a70 in qmp_dispatch (cmds=0x559f59421aa0 <qmp_commands>, request=, allow_oob=) at qapi/qmp-dispatch.c:155
#26 0x0000559f58c442c1 in monitor_qmp_dispatch (mon=0x559f5badf8c0, req=) at monitor/qmp.c:145
#27 0x0000559f58c44aa0 in monitor_qmp_bh_dispatcher (data=) at monitor/qmp.c:234
#28 0x0000559f58d80027 in aio_bh_call (bh=0x559f5ba428a0) at util/async.c:136
#29 aio_bh_poll (ctx=ctx@entry=0x559f5bade000) at util/async.c:164
#30 0x0000559f58d8372e in aio_dispatch (ctx=0x559f5bade000) at util/aio-posix.c:380
#31 0x0000559f58d7ff0e in aio_ctx_dispatch (source=, callback=, user_data=) at util/async.c:306
#32 0x00007f587d2f6184 in g_main_dispatch (context=0x559f5bae5b80) at ../glib/gmain.c:3325
#33 g_main_context_dispatch (context=context@entry=0x559f5bae5b80) at ../glib/gmain.c:4043
#34 0x0000559f58d8296a in glib_pollfds_poll () at util/main-loop.c:219
#35 os_host_main_loop_wait (timeout=1000000000) at util/main-loop.c:242
#36 main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:518
#37 0x0000559f58ab4a61 in qemu_main_loop () at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/softmmu/vl.c:1710
#38 0x0000559f589bc9be in main (argc=, argv=, envp=) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/softmmu/main.c:49

Thread 1 (Thread 0x7f587b15a700 (LWP 2700251)):
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007f587c3f2921 in __GI_abort () at abort.c:79
#2 0x00007f587cc37f29 in tcmalloc::Log(tcmalloc::LogMode, char const*, int, tcmalloc::LogItem, tcmalloc::LogItem, tcmalloc::LogItem, tcmalloc::LogItem) () at /usr/lib64/libtcmalloc.so.4
#3 0x00007f587cc2bf29 in () at /usr/lib64/libtcmalloc.so.4
#4 0x00007f587d2fbfa9 in g_free (mem=0x559f5940eae8 <vfio_group_list>) at ../glib/gmem.c:199
#5 0x0000559f58ca262c in object_property_free (data=0x559f5d5f8b18) at qom/object.c:278
#6 0x00007f587d2e29bb in g_hash_table_remove_all_nodes (hash_table=hash_table@entry=0x559f5c176520, notify=notify@entry=1, destruction=destruction@entry=1) at ../glib/ghash.c:708
#7 0x00007f587d2e3e1a in g_hash_table_remove_all_nodes (destruction=1, notify=1, hash_table=0x559f5c176520) at ../glib/ghash.c:1459
#8 g_hash_table_unref (hash_table=0x559f5c176520) at ../glib/ghash.c:1463
#9 0x0000559f58ca3069 in object_property_del_all (obj=0x559f5c94c800) at qom/object.c:614
#10 object_finalize (data=0x559f5c94c800) at qom/object.c:667
#11 object_unref (obj=obj@entry=0x559f5c94c800) at qom/object.c:1128
#12 0x0000559f589c360b in phys_section_destroy (mr=0x559f5c94c800) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/exec.c:1497
#13 phys_sections_free (map=0x559f5d5f9510) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/exec.c:1506
#14 address_space_dispatch_free (d=0x559f5d5f9500) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/exec.c:2971
#15 0x0000559f58a0ed69 in flatview_destroy (view=0x559f5ce75e40) at /usr/src/debug/qemu-kvm-5.0.0.4-1.2.ctl2.x86_64/memory.c:285
#16 0x0000559f58d9910c in call_rcu_thread (opaque=) at util/rcu.c:283
#17 0x0000559f58d86654 in qemu_thread_start (args=0x559f5ba8f020) at util/qemu-thread-posix.c:519
#18 0x00007f587c578f2b in start_thread (arg=0x7f587b15a700) at pthread_create.c:486
#19 0x00007f587c4b070f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

oracle qemu branch: vfio-user-dbfix
spdk branch: V22.01

Has anyone encountered a similar issue? Thanks 🙂

About vfio-user client in QEMU

Hi @jraman567 ,

I'd like to ask about the vfio-user client in QEMU. I believe the project is being developed on the vfio-user-p3.1 branch. If I understand correctly, this is a project that leverages the existing vfio-user mechanism to improve Multi-process QEMU. I have tried running the vfio-user client against a vfio-user server, and it already works well.

I am considering implementing PCIe endpoint controller device emulation using this mechanism. Please refer to the QEMU ML for more details if necessary.

Could you please tell me the plan for upstreaming this project?

all PCI writes are posted

Currently, vfio_user_region_write uses VFIO_USER_NO_REPLY unconditionally, meaning essentially all writes are posted. But that shouldn't be the case, for example, for PCI config space, where it's expected that writes will wait for an ack before the vCPU continues.

cc @antroseco @jraman567

migration fails at destination with "Unable to write to socket: Bad file descriptor"

When trying to migrate SPDK NVMf/vfio-user target which creates an NVMe controller with one namespace in the guest (/dev/nvme0n1), destination QEMU fails with:

Unable to write to socket: Bad file descriptor

Using the following:

Debugging further, this happens here:

#0  0x0000555555cb8eff in qio_channel_socket_writev (ioc=0x5555568a2400, iov=0x7fffec35da90, niov=1, fds=0x555557314934, nfds=3, errp=0x7fffec35da70) at ../io/channel-socket.c:571
#1  0x0000555555cb2627 in qio_channel_writev_full (ioc=0x5555568a2400, iov=0x7fffec35da90, niov=1, fds=0x555557314934, nfds=3, errp=0x7fffec35da70) at ../io/channel.c:86
#2  0x0000555555c812d5 in vfio_user_send_locked (proxy=0x5555575af7e0, msg=0x55555747eb70, fds=0x7fffec35db40) at ../hw/vfio/user.c:278
#3  0x0000555555c815c9 in vfio_user_send_recv (proxy=0x5555575af7e0, msg=0x55555747eb70, fds=0x7fffec35db40, rsize=0) at ../hw/vfio/user.c:351
#4  0x0000555555c82c38 in vfio_user_set_irqs (vbasedev=0x5555575a9c70, irq=0x555557314920) at ../hw/vfio/user.c:898
#5  0x0000555555c6b79d in vfio_enable_vectors (vdev=0x5555575a9370, msix=true) at ../hw/vfio/pci.c:413
#6  0x0000555555c6bb4c in vfio_msix_vector_do_use (pdev=0x5555575a9370, nr=3, msg=0x0, handler=0x0) at ../hw/vfio/pci.c:516
#7  0x0000555555c6be8c in vfio_msix_enable (vdev=0x5555575a9370) at ../hw/vfio/pci.c:615
#8  0x0000555555c70b0b in vfio_pci_load_config (vbasedev=0x5555575a9c70, f=0x5555568f5af0) at ../hw/vfio/pci.c:2528
#9  0x0000555555bab3df in vfio_load_device_config_state (f=0x5555568f5af0, opaque=0x5555575a9c70) at ../hw/vfio/migration.c:382
#10 0x0000555555babbe2 in vfio_load_state (f=0x5555568f5af0, opaque=0x5555575a9c70, version_id=1) at ../hw/vfio/migration.c:649
#11 0x00005555558a5cb9 in vmstate_load (f=0x5555568f5af0, se=0x555556964df0) at ../migration/savevm.c:908
#12 0x00005555558a8dec in qemu_loadvm_section_start_full (f=0x5555568f5af0, mis=0x5555568cec70) at ../migration/savevm.c:2433
#13 0x00005555558a944a in qemu_loadvm_state_main (f=0x5555568f5af0, mis=0x5555568cec70) at ../migration/savevm.c:2619
#14 0x00005555558a95c5 in qemu_loadvm_state (f=0x5555568f5af0) at ../migration/savevm.c:2698
#15 0x00005555558e437d in process_incoming_migration_co (opaque=0x0) at ../migration/migration.c:555
#16 0x0000555555e28cb6 in coroutine_trampoline (i0=1457783792, i1=21845) at ../util/coroutine-ucontext.c:173
#17 0x00007ffff75a4b50 in __correctly_grouped_prefixwc (begin=0x7fffec35da70 L"\x56965b50啕\003", end=0x0, thousands=-175363960 L'\xf58c2888', grouping=0x555556650010 "") at grouping.c:171
#18 0x0000000000000000 in  ()
(gdb) p errno
$2 = 9
(gdb) p sioc->fd
$3 = 13

Looking at the FD:

# ls -lh /proc/1816/fd/13
lrwx------ 1 root root 64 Jun  8 11:43 /proc/1816/fd/13 -> 'socket:[30949]'
# cat /proc/1816/fdinfo/13
pos:    0
flags:  02000002
mnt_id: 10

The source QEMU is run as follows:

/opt/qemu/bin/qemu-system-x86_64 -smp 4 -nographic -m 2G -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on,prealloc=yes, -numa node,memdev=mem0 -kernel bionic-server-cloudimg-amd64-vmlinuz-generic -initrd bionic-server-cloudimg-amd64-initrd-generic -append console=ttyS0 root=/dev/sda1 single intel_iommu=on -hda bionic-server-cloudimg-amd64-0.raw -hdb nvme.img -nic user,model=virtio-net-pci -machine pc-q35-3.1 -device vfio-user-pci,socket=/var/run/vfio-user.sock,x-enable-migration=on -D qemu.out -trace enable=vfio*

and destination QEMU:

/opt/qemu/bin/qemu-system-x86_64 -smp 4 -nographic -m 2G -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on,prealloc=yes, -numa node,memdev=mem0 -kernel bionic-server-cloudimg-amd64-vmlinuz-generic -initrd bionic-server-cloudimg-amd64-initrd-generic -append console=ttyS0 root=/dev/sda1 single intel_iommu=on -hda bionic-server-cloudimg-amd64-0.raw -hdb nvme.img -nic user,model=virtio-net-pci -machine pc-q35-3.1 -device vfio-user-pci,socket=/var/run/vfio-user.sock,x-enable-migration=on -D qemu.out -trace enable=vfio* -incoming tcp:0:4444

I migrate using:

migrate -d tcp:<IP address>:4444

In the source QEMU log:

vfio_msi_interrupt  (VFIO user </var/run/vfio-user.sock>) vector 2 0xfee04004/0x4023
vfio_get_dirty_bitmap container fd=-1, iova=0x0 size= 0xa0000 bitmap_size=0x18 start=0x0
vfio_get_dirty_bitmap container fd=-1, iova=0xc0000 size= 0xb000 bitmap_size=0x8 start=0xc0000
vfio_get_dirty_bitmap container fd=-1, iova=0xcb000 size= 0x3000 bitmap_size=0x8 start=0xcb000
vfio_get_dirty_bitmap container fd=-1, iova=0xce000 size= 0x1e000 bitmap_size=0x8 start=0xce000
vfio_msi_interrupt  (VFIO user </var/run/vfio-user.sock>) vector 2 0xfee04004/0x4023
vfio_get_dirty_bitmap container fd=-1, iova=0xec000 size= 0x4000 bitmap_size=0x8 start=0xec000
vfio_get_dirty_bitmap container fd=-1, iova=0xf0000 size= 0x10000 bitmap_size=0x8 start=0xf0000
vfio_get_dirty_bitmap container fd=-1, iova=0x100000 size= 0x7ff00000 bitmap_size=0xffe0 start=0x100000
vfio_get_dirty_bitmap container fd=-1, iova=0xfd000000 size= 0x1000000 bitmap_size=0x200 start=0x80080000
vfio_get_dirty_bitmap container fd=-1, iova=0xfebd1000 size= 0x1000 bitmap_size=0x8 start=0x81100000
vfio_get_dirty_bitmap container fd=-1, iova=0xfffc0000 size= 0x40000 bitmap_size=0x8 start=0x80000000
vfio_update_pending  (VFIO user </var/run/vfio-user.sock>) pending 0x8000
vfio_save_pending  (VFIO user </var/run/vfio-user.sock>) precopy 0x1195000 postcopy 0x0 compatible 0x0
vfio_migration_set_state  (VFIO user </var/run/vfio-user.sock>) state 2
vfio_vmstate_change  (VFIO user </var/run/vfio-user.sock>) running 0 reason finish-migrate device state 2
vfio_get_dirty_bitmap container fd=-1, iova=0x0 size= 0xa0000 bitmap_size=0x18 start=0x0
vfio_get_dirty_bitmap container fd=-1, iova=0xc0000 size= 0xb000 bitmap_size=0x8 start=0xc0000
vfio_get_dirty_bitmap container fd=-1, iova=0xcb000 size= 0x3000 bitmap_size=0x8 start=0xcb000
vfio_get_dirty_bitmap container fd=-1, iova=0xce000 size= 0x1e000 bitmap_size=0x8 start=0xce000
vfio_get_dirty_bitmap container fd=-1, iova=0xec000 size= 0x4000 bitmap_size=0x8 start=0xec000
vfio_get_dirty_bitmap container fd=-1, iova=0xf0000 size= 0x10000 bitmap_size=0x8 start=0xf0000
vfio_get_dirty_bitmap container fd=-1, iova=0x100000 size= 0x7ff00000 bitmap_size=0xffe0 start=0x100000
vfio_get_dirty_bitmap container fd=-1, iova=0xfd000000 size= 0x1000000 bitmap_size=0x200 start=0x80080000
vfio_get_dirty_bitmap container fd=-1, iova=0xfebd1000 size= 0x1000 bitmap_size=0x8 start=0x81100000
vfio_get_dirty_bitmap container fd=-1, iova=0xfffc0000 size= 0x40000 bitmap_size=0x8 start=0x80000000
vfio_migration_set_state  (VFIO user </var/run/vfio-user.sock>) state 2
vfio_update_pending  (VFIO user </var/run/vfio-user.sock>) pending 0x8000
vfio_save_buffer  (VFIO user </var/run/vfio-user.sock>) Offset 0x1000 size 0x8000 pending 0x8000
vfio_update_pending  (VFIO user </var/run/vfio-user.sock>) pending 0x8000
vfio_save_buffer  (VFIO user </var/run/vfio-user.sock>) Offset 0x9000 size 0x0 pending 0x8000
vfio_migration_set_state  (VFIO user </var/run/vfio-user.sock>) state 0
vfio_save_complete_precopy  (VFIO user </var/run/vfio-user.sock>)
vfio_save_device_config_state  (VFIO user </var/run/vfio-user.sock>)
vfio_region_unmap Region migration mmaps[0] unmap [0x1000 - 0x8fff]
vfio_save_cleanup  (VFIO user </var/run/vfio-user.sock>)
vfio_migration_state_notifier  (VFIO user </var/run/vfio-user.sock>) state completed

And in the destination QEMU:

...
vfio_region_mmap Region migration mmaps[0] [0x1000 - 0x8fff]
vfio_migration_set_state  (VFIO user </var/run/vfio-user.sock>) state 4
vfio_load_state  (VFIO user </var/run/vfio-user.sock>) data 0xffffffffef100003
vfio_load_state  (VFIO user </var/run/vfio-user.sock>) data 0xffffffffef100004
vfio_load_state_device_data  (VFIO user </var/run/vfio-user.sock>) Offset 0x1000 size 0x8000
vfio_load_state  (VFIO user </var/run/vfio-user.sock>) data 0xffffffffef100004
vfio_listener_region_del region_del 0xc0000 - 0xdffff
vfio_listener_region_add_ram region_add [ram] 0xc0000 - 0xcafff [0x7fa250200000]
vfio_listener_region_add_ram region_add [ram] 0xcb000 - 0xcdfff [0x7fa2506cb000]
vfio_listener_region_add_ram region_add [ram] 0xce000 - 0xdffff [0x7fa25020e000]
vfio_listener_region_add_skip SKIPPING region_add 0xb0000000 - 0xbfffffff
vfio_listener_region_del region_del 0xc0000 - 0xcafff
vfio_listener_region_del region_del 0xce000 - 0xdffff
vfio_listener_region_del region_del 0xe0000 - 0xfffff
vfio_listener_region_add_ram region_add [ram] 0xc0000 - 0xcafff [0x7fa2506c0000]
vfio_listener_region_add_ram region_add [ram] 0xce000 - 0xebfff [0x7fa2506ce000]
vfio_listener_region_add_ram region_add [ram] 0xec000 - 0xeffff [0x7fa2506ec000]
vfio_listener_region_add_ram region_add [ram] 0xf0000 - 0xfffff [0x7fa2506f0000]
vfio_listener_region_add_skip SKIPPING region_add 0xfed1c000 - 0xfed1ffff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd7000 - 0xfebd7fff
vfio_listener_region_add_ram region_add [ram] 0xfd000000 - 0xfdffffff [0x7fa241400000]
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4000 - 0xfebd43ff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4400 - 0xfebd441f
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4420 - 0xfebd44ff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4500 - 0xfebd4515
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4516 - 0xfebd45ff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4600 - 0xfebd4607
vfio_listener_region_add_skip SKIPPING region_add 0xfebd4608 - 0xfebd4fff
vfio_listener_region_add_skip SKIPPING region_add 0xfe000000 - 0xfe000fff
vfio_listener_region_add_skip SKIPPING region_add 0xfe001000 - 0xfe001fff
vfio_listener_region_add_skip SKIPPING region_add 0xfe002000 - 0xfe002fff
vfio_listener_region_add_skip SKIPPING region_add 0xfe003000 - 0xfe003fff
vfio_load_state  (VFIO user </var/run/vfio-user.sock>) data 0xffffffffef100002
vfio_listener_region_add_skip SKIPPING region_add 0xfebd0000 - 0xfebd0fff
vfio_listener_region_add_ram region_add [ram] 0xfebd1000 - 0xfebd1fff [0x7fa35db96000]
vfio_listener_region_add_skip SKIPPING region_add 0xfebd0000 - 0xfebd0fff
vfio_listener_region_add_ram region_add [ram] 0xfebd1000 - 0xfebd1fff [0x7fa35db96000]
vfio_listener_region_add_skip SKIPPING region_add 0xfebd2000 - 0xfebd3fff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd5000 - 0xfebd53ff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd5400 - 0xfebd5fff
vfio_listener_region_add_skip SKIPPING region_add 0xfebd6000 - 0xfebd6fff
vfio_pci_write_config  (VFIO user </var/run/vfio-user.sock>, @0x4, 0x507, len=0x2)
vfio_listener_region_add_skip SKIPPING region_add 0xfebd2000 - 0xfebd3fff
vfio_region_mmaps_set_enabled Region VFIO user </var/run/vfio-user.sock> BAR 0 mmaps enabled: 1
vfio_region_mmaps_set_enabled Region VFIO user </var/run/vfio-user.sock> BAR 4 mmaps enabled: 1
vfio_region_mmaps_set_enabled Region VFIO user </var/run/vfio-user.sock> BAR 5 mmaps enabled: 1
vfio_intx_disable  (VFIO user </var/run/vfio-user.sock>)
vfio_msix_vector_do_use  (VFIO user </var/run/vfio-user.sock>) vector 3 used

KVM error when QEMU adds a duplicate region

@john-johnson-git QEMU seems to be adding duplicate regions and KVM doesn't like it:

qemu-system-x86_64: kvm_set_user_memory_region: KVM_SET_USER_MEMORY_REGION failed, slot=10, start=0xfebd1000, size=0x1000: File exists

VFIO trace:

vfio_listener_region_add_ram region_add [ram] 0xfebd1000 - 0xfebd1fff [0x7fa916d1c000]
vfio_listener_region_add_skip SKIPPING region_add 0xfebd0000 - 0xfebd0fff
vfio_listener_region_add_ram region_add [ram] 0xfebd1000 - 0xfebd1fff [0x7fa916d1c000]

See nutanix/libvfio-user#439.

'vfio-user-pci' is not a valid device model name

I compiled QEMU with the following commands, but encountered the error "'vfio-user-pci' is not a valid device model name" when running it.

git clone https://github.com/oracle/qemu qemu-orcl
cd qemu-orcl
git submodule update --init --recursive
./configure --enable-multiprocess --enable-slirp
make

The command that was executed is:
./build/qemu-system-x86_64 --enable-kvm -cpu host -smp 4 -m 2G -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on -numa node,memdev=mem0 -drive file=focal-server-cloudimg-amd64.img,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -device vfio-user-pci,socket=/var/run/cntrl -vnc :5 -net user,hostfwd=tcp::2333-:22
I am running QEMU on a nested virtualization platform (Ubuntu 20.04).

Kernel panic when running qemu/vfio-user-v0.9 with -cpu host option

The guest OS freezes or kernel panics when QEMU is run with the -cpu host option.

QEMU built with:

vfio-user-v0.9

commit 29e3142d6c23adb327c9752a4dd988b24c33d24b (HEAD -> vfio-user-v0.9, origin/vfio-user-v0.9)
Author: John Johnson <[email protected]>
Date:   Sun Jun 6 22:51:22 2021 -0700

    remove MAPPABLE flag
    change max_msg to max_xfer
    threading is hard - fix discon race


./configure --target-list="x86_64-softmmu" --enable-kvm --enable-linux-aio --enable-numa && make -j100

QEMU command line:

taskset -a -c 1-2 /home/klateck/work/qemu-vfiouser/build/qemu-system-x86_64 \
-m 1024 \
--enable-kvm \
-cpu host \
-smp 2 \
-vga std \
-vnc :100 -daemonize \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind \
-snapshot -monitor telnet:127.0.0.1:10002,server,nowait \
-numa node,memdev=mem \
-pidfile /home/klateck/vhost_test/vms/0/qemu.pid \
-serial file:/home/klateck/vhost_test/vms/0/serial.log \
-D /home/klateck/vhost_test/vms/0/qemu.log \
-chardev file,path=/home/klateck/vhost_test/vms/0/seabios.log,id=seabios \
-device isa-debugcon,iobase=0x402,chardev=seabios \
-net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 \
-net nic \
-drive file=/home/sys_sgci/spdk_dependencies/spdk_test_image.qcow2,if=none,id=os_disk \
-device ide-hd,drive=os_disk,bootindex=0

This results in either a guest OS kernel panic:
kp1.txt
kp2.txt

Or a freeze when booting the guest OS:
freeze1.txt
which seems to produce a dmesg error on the host at the same time:

[1960842.564891] kvm [4075811]: vcpu0, guest rIP: 0xffffffffaf06b064 disabled perfctr wrmsr: 0xc0010007 data 0xffff

Host OS info:
Fedora 32 5.8.15-201.fc32.x86_64
gcc version 10.2.1 20201125 (Red Hat 10.2.1-9) (GCC)

Guest OS info:
Fedora 32 vhost32-cloud-12806 5.6.6-300.fc32.x86_64
