
FEMU: Accurate, Scalable and Extensible NVMe SSD Emulator (FAST'18)

License: Other

ssd ftl qemu open-channel zns nvme emulator simulator computational-storage

femu's Introduction


  ______ ______ __  __ _    _ 
 |  ____|  ____|  \/  | |  | |
 | |__  | |__  | \  / | |  | |
 |  __| |  __| | |\/| | |  | |
 | |    | |____| |  | | |__| |
 |_|    |______|_|  |_|\____/  -- A QEMU-based and DRAM-backed NVMe SSD Emulator

Contact Information

Maintainer: Huaicheng Li, Email: [email protected]

Research opportunities: Huaicheng Li is hiring Ph.D. students to join his group, feel free to contact him for details!

Feel free to contact Huaicheng for any suggestions/feedback, bug reports, or general discussions.

Please consider citing our FEMU paper at FAST 2018 if you use FEMU. The bib entry is

@InProceedings{Li+18-FEMU,
  Author    = {Huaicheng Li and Mingzhe Hao and Michael Hao Tong and Swaminathan Sundararaman and Matias Bj{\o}rling and Haryadi S. Gunawi},
  Title     = {The CASE of FEMU: Cheap, Accurate, Scalable and Extensible Flash Emulator},
  Booktitle = {Proceedings of 16th USENIX Conference on File and Storage Technologies (FAST)},
  Address   = {Oakland, CA},
  Month     = {February},
  Year      = {2018}
}

Research Papers using FEMU

Please check the growing list of research papers using FEMU here, including papers at ASPLOS, OSDI, SOSP, FAST, and more.

Project Description (What is FEMU?)

                        +--------------------+
                        |    VM / Guest OS   |
                        |                    |
                        |                    |
                        |  NVMe Block Device |
                        +--------^^----------+
                                 ||
                              PCIe/NVMe
                                 ||
  +------------------------------vv----------------------------+
  |  +---------+ +---------+ +---------+ +---------+ +------+  |
  |  | Blackbox| |  OCSSD  | | ZNS-SSD | |  NoSSD  | | ...  |  |
  |  +---------+ +---------+ +---------+ +---------+ +------+  |
  |                    FEMU NVMe SSD Controller                |
  +------------------------------------------------------------+

Briefly speaking, FEMU is a fast, accurate, scalable, and extensible NVMe SSD emulator. Built upon QEMU/KVM, FEMU is exposed to the guest OS (Linux) as an NVMe block device (e.g., /dev/nvme0nX). It supports emulating several types of SSDs:

  • Whitebox mode (OCSSD), a.k.a. Software-Defined Flash (SDF) or OpenChannel-SSD, with a host-side FTL (e.g., LightNVM or SPDK FTL). Both OpenChannel Spec 1.2 and 2.0 are supported.

  • Blackbox mode (BBSSD) with the FTL managed by the device (like most current commercial SSDs). A page-level-mapping FTL is included.

  • ZNS mode (ZNSSD), exposing the NVMe Zoned Namespace interface so the host can directly read/write/append to the device following certain rules.

  • NoSSD mode, emulating an as-fast-as-possible NVMe device with sub-10-microsecond latency. This is intended to emulate SCM-class block devices such as Optane or Z-NAND SSDs.

FEMU aims to combine the benefits of SSD hardware platforms (e.g., CNEX OpenChannel SSD, OpenSSD, etc.) and SSD simulators (e.g., DiskSim+SSD, FlashSim, SSDSim, etc.). Like hardware platforms, FEMU can run a full system stack (applications + OS + NVMe interface) on top, thus enabling Software-Defined Flash (SDF)-like research with modifications at the application, OS, interface, or SSD controller architecture level. Like SSD simulators, FEMU also supports internal-SSD/FTL research: users are free to experiment with new FTL algorithms or SSD performance models to explore new SSD architecture innovations, and to benchmark those changes with real applications instead of decade-old disk trace files.
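
For example, once a FEMU VM is up, the emulated drive shows up inside the guest like any other NVMe device and can be inspected with standard tools (a minimal illustration; it assumes the nvme-cli package is installed in the guest):

    # Inside the FEMU guest
    $ lsblk /dev/nvme0n1
    $ sudo nvme list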

Installation

  1. Make sure you have installed the necessary libraries for building QEMU. The dependencies can be installed by following the instructions below:
  git clone https://github.com/vtess/femu.git
  cd femu
  mkdir build-femu
  # Switch to the FEMU building directory
  cd build-femu
  # Copy femu script
  cp ../femu-scripts/femu-copy-scripts.sh .
  ./femu-copy-scripts.sh .
  # only Debian/Ubuntu based distributions supported
  sudo ./pkgdep.sh
  2. Compile & Install FEMU:
  ./femu-compile.sh

The FEMU binary will appear as x86_64-softmmu/qemu-system-x86_64.
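
As a quick sanity check of the build, you can print the version string of the freshly built binary from the build-femu directory (just an example invocation):

    $ ./x86_64-softmmu/qemu-system-x86_64 --version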

Tested host environment (For successful FEMU compilation):

Linux Distribution   Kernel   GCC      Ninja    Python
Gentoo               5.10     9.3.0    1.10.1   3.7.9
Ubuntu 16.04.5       4.15.0   5.4.0    1.8.2    3.6.0
Ubuntu 20.04.1       5.4.0    9.3.0    1.10.0   3.8.2
Ubuntu 22.04.2       5.15.0   11.3.0   1.10.1   3.10.6

Tested VM environment (Whether a certain FEMU mode works under a certain guest kernel version):

Mode \ Guest Kernel 4.16 4.20 5.4 5.10 6.1
NoSSD
Black-box SSD
OpenChannel-SSD v1.2
OpenChannel-SSD v2.0
Zoned-Namespace (ZNS) SSD
  3. Prepare the VM image (for performance reasons, we suggest using a server-version guest OS [e.g. Ubuntu Server 22.04, 20.04, 18.04, 16.04])

You can either build your own VM image or use the VM image provided by us.

Option 1 (the recommended way to get FEMU running quickly): Use our VM image file. You can download it from our FEMU-VM-image-site. After you fill in the form, VM image download instructions will be sent to your email address shortly.

Option 2: Build your own VM image by following the instructions below. After the guest OS is installed, make the following changes to redirect VM output to the console instead of using a separate GUI window. (Desktop-version guest OSes are not tested.)

Note: Please search online for help if any of the steps does not work. In general, the steps below give you a basic idea of how to build your own VM image and run it in a text console.

    # Download a Ubuntu server ISO file
    $ mkdir -p ~/images/
    $ cd ~/images
    $ wget http://releases.ubuntu.com/20.04/ubuntu-20.04.3-live-server-amd64.iso
    $ sudo apt-get install qemu-system-x86
    # Create a QCOW2 disk image
    $ qemu-img create -f qcow2 femu.qcow2 80G

    # install guest OS to femu.qcow2 (You need a GUI environment to prepare the VM image)
    $ qemu-system-x86_64 -cdrom ubuntu-20.04.3-live-server-amd64.iso -hda femu.qcow2 -boot d -net nic -net user -m 8192 -localtime -smp 8 -cpu host -enable-kvm

  • After the guest OS is installed, boot it with
    $ qemu-system-x86_64 -hda femu.qcow2 -net nic -net user -m 8192 -localtime -smp 8 -cpu host -enable-kvm

If the OS was installed into femu.qcow2, you should be able to enter the guest OS. Inside the VM, edit /etc/default/grub and make sure the following options are set.

GRUB_CMDLINE_LINUX="ip=dhcp console=ttyS0,115200 console=tty console=ttyS0"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

Still in the VM, update GRUB:

$ sudo update-grub
$ sudo shutdown -h now

Now you're ready to run FEMU. If you stick to a desktop-version guest OS, please remove the "-nographic" command-line option from the running script before running FEMU.

  4. Log in to the FEMU VM
  • If you correctly set up the aforementioned configuration, you should see a text-based VM login in the same terminal where you issued the running script.
  • Or, more conveniently, the FEMU running script maps host port 8080 to guest VM port 22, so after you install and run openssh-server inside the VM, you can also ssh into the VM with the command line below. (Please run it from your host machine.)
$ ssh -p8080 $user@localhost

Run FEMU

0. Minimum Requirement

  • Run FEMU on a physical machine, not inside a VM. (If the VM has nested virtualization enabled, you can give it a try, but FEMU performance will suffer; this is not recommended.)

  • The physical machine needs at least 8 cores and 12GB of DRAM to seamlessly run the default FEMU scripts below, which emulate a 4GB SSD in a VM with 4 vCPUs and 4GB of DRAM.

  • If you intend to emulate a larger VM (more vCPUs and DRAM) and an SSD with larger capacity, make sure to refer to the resource provisioning tips here.

1. Run FEMU as blackbox SSDs (Device-managed FTL or BBSSD mode)

TODO: currently, blackbox SSD parameters are hard-coded in hw/femu/ftl/ftl.c; please change them accordingly and re-compile FEMU, as sketched below.
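
For illustration, the hard-coded geometry looks roughly like the sketch below (field names are taken from the ftl.c snippets quoted in the issues further down; exact defaults and file layout may differ across FEMU versions). The emulated capacity is simply the product of all the geometry fields, and it should stay consistent with the devsz_mb value passed to the -device femu option in the run script (the size-mismatch errors reported in the issues below suggest as much):

    /* Illustrative sketch only -- not FEMU's exact defaults */
    static void ssd_init_params(struct ssdparams *spp)
    {
        spp->secsz       = 512;  /* sector size in bytes */
        spp->secs_per_pg = 8;    /* 8 x 512B = 4KB flash page */
        spp->pgs_per_blk = 256;  /* 256 x 4KB = 1MB flash block */
        spp->blks_per_pl = 256;  /* blocks per plane */
        spp->pls_per_lun = 1;    /* planes per LUN */
        spp->luns_per_ch = 8;    /* LUNs per channel */
        spp->nchs        = 8;    /* channels */
        /* capacity = 512 * 8 * 256 * 256 * 1 * 8 * 8 bytes = 16GB */
    }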

Boot the VM using the following script:

./run-blackbox.sh

2. Run FEMU as whitebox SSDs (a.k.a. OpenChannel-SSD or OCSSD mode)

Both OCSSD Specification 1.2 and Specification 2.0 are supported. To run FEMU in OCSSD mode:

./run-whitebox.sh

By default, FEMU runs OCSSD in 2.0 mode. To run OCSSD in 1.2 mode, make sure OCVER=1 is set in run-whitebox.sh.

Inside the VM, you can play with LightNVM.
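
For example, a pblk target can be created on top of the emulated OCSSD with nvme-cli, along the lines of the commands reported by users in the issues below (the device and target names here are only examples):

    $ sudo modprobe pblk     # or insmod the pblk.ko module shipped with your kernel
    $ sudo nvme lnvm create -d nvme0n1 -n mydevice -t pblk -b 0 -e 7
    $ ls /dev/mydevice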

3. Run FEMU without SSD logic emulation (NoSSD mode)

./run-nossd.sh

In this NoSSD mode, no SSD emulation logic (either blackbox or whitebox) is executed. The base NVMe specification is supported, and FEMU handles I/Os as fast as possible. It can be used for basic performance benchmarking, as well as for emulating fast storage-class memory (SCM, e.g., Intel Optane SSD) devices.
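
For a quick benchmark inside the guest, any standard block-device tool works; for instance, a simple fio random-read job against the emulated device (the parameters below are only an example):

    $ sudo fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
               --bs=4k --iodepth=1 --direct=1 --runtime=30 --time_based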

4. Run FEMU as NVMe ZNS (Zoned-Namespace) SSDs (ZNSSD mode)

./run-zns.sh

5. Run FEMU as Computational Storage, a.k.a, SmartSSD (SmartSSD mode)

Stay tuned.

6. Run FEMU as CXL-SSD (CXLSSD mode)

Stay tuned.

Contributing

GitHub issues and pull requests are preferred. Do let us know if you have any thoughts!

Acknowledgement

FEMU is inspired by many prior SSD simulators/emulators (SSDSim, FlashSim, VSSIM) as well as hardware development platforms (OpenSSD, DFC), but FEMU has gone far beyond what prior platforms can achieve in terms of performance, extensibility, and usability.

FEMU's NVMe controller logic is based on QEMU/NVMe, LightNVM/QEMU and ZNS/QEMU.

For more details, please check out the Wiki!

femu's People

Contributors

abysdom, akmalarifin, chanyoung, dependabot[bot], haltz, hansenidden18, hongweiqin, huaicheng, nicktehrany, shehbazj, sunpenghao, ywang-wnlo


femu's Issues

How can I get 99.99th latency?

My question is below.

  • Can you let me know how to measure 99.99th long tail latency?
  • Is it possible to measure latency of each I/O?

Is it right that latency measurements in FEMU are made by recording in specific memory rather than by delaying through a 'sleep' command? I want to know exactly how the measurements are made in FEMU.

mkfs.ext4 infinitely wait when "Writing superblocks and filesystem accounting information"

Hi,

I run FEMU in white-box mode. It is ok to create a pblk target on top of Open-channel SSD by FEMU.
However, it waits indefinitely when I try to create an ext4 filesystem on top of pblk.

The case is as the attached screenshot shows: it just waits there indefinitely.

Guest Configuration:
VM: Ubuntu18.04-Server
Kernel version: 4.16.0+

In fact, I also tried the 14.04-server image you support, and it waits there as well in my case. I don't know what is wrong. Could anyone give me some ideas?

The VM start script is as follows:

sudo x86_64-softmmu/qemu-system-x86_64 \
    -name "FEMU-whitebox-SSD" \
    -enable-kvm \
    -cpu host \
    -smp 4 \
    -m 4G \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,drive=hd0 \
    -drive file=$OSIMGF,if=none,aio=native,cache=none,format=qcow2,id=hd0 \
    -drive file=$NVMEIMGF0,if=none,aio=threads,format=raw,id=id0 \
    -device nvme,drive=id0,serial=serial0,id=nvme0,namespaces=1,lver=1,lmetasize=16,ll2pmode=0,nlbaf=5,lba_index=3,mdts=10,lnum_ch=16,lnum_lun=1,lnum_pln=1,lsec_size=4096,lsecs_per_pg=4,lpgs_per_blk=512,ldebug=0,femu_mode=0 \
    -net user,hostfwd=tcp::8080-:22 \
    -net nic,model=virtio \
    -nographic \
    -qmp unix:./qmp-sock,server,nowait
    #-object iothread,id=iothread0 \
    #-display none \
    #-nographic \
    #-monitor stdio \
    #-s -S \
    #

Key-Value SSD emulator

Hello, I'm an M.S. student researching KV-SSDs.

I want to use FEMU for Key-Value SSD research to improve the mapping algorithm.
But FEMU doesn't have any Key-Value support.
I think I have to modify FEMU to research KV-SSDs.
What do I have to edit to emulate a KV-SSD?

FEMU-SSD-size Setup

Based on the FEMU structure: FEMU-SSD-size + VM-DRAM-size + 2-4GB (Host OS) <= Total-DRAM-size-in-your-Host

I have host DRAM size = 50G and VM-DRAM-size = 4G. Theoretically, I can get more than 45G, but in fact I can only allocate 20G for the FEMU SSD size. If I allocate more than 20G, the system pops out an error.

I set up the code config as below:
static void ssd_init_params(struct ssdparams *spp)
{
spp->secsz = 512;
spp->secs_per_pg = 8;
spp->pgs_per_blk = 256;
spp->blks_per_pl = 720; /* 16GB */
spp->pls_per_lun = 1;
spp->luns_per_ch = 8;
spp->nchs = 8;

I change the ./run-block file as below:
sudo x86_64-softmmu/qemu-system-x86_64 \
    -name "FEMU-BBSSD-VM" \
    -enable-kvm \
    -cpu host \
    -smp 4 \
    -m 4G \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,drive=hd0 \
    -drive file=$OSIMGF,if=none,aio=native,cache=none,format=qcow2,id=hd0 \
    -device femu,devsz_mb=30720,femu_mode=1 \
    -net user,hostfwd=tcp::8080-:22 \
    -net nic,model=virtio \
    -nographic \
    -qmp unix:./qmp-sock,server,nowait 2>&1 | tee log

Error message:
[FEMU] FTL-Err: start_lpn=6553536,tt_pgs=4194304

[FEMU] FTL-Err: start_lpn=7864304,tt_pgs=4194304

FEMU disabling NVMe Controller nvme_add_kvm_msi_virq,cq[1]->virq=27

Hi
I run FEMU as a blackbox SSD (formatted with F2FS) and run the randomwrite workload of filebench for a few minutes.
The guest Linux kernel is 4.15.1 and the FEMU version is ba56426.
The following information is printed on the terminal:

"FEMU disabling NVMe Controller ...
Coperd, CQ, db_addr=5268549644, eventidx_addr=5270683660
Coperd,nvme_add_kvm_msi_virq,cq[1]->virq=27
Coperd, SQ, db_addr=5268549640, eventidx_addr=5270683656"

What is the problem and how can I solve it?

Connection information between block driver and femu

Hello.
I am an M.S. student who has recently become interested in ZNS and wants to do research.

I would like to use the block driver's write_hint information (e.g., the block driver's bio_write_hint) in FEMU to conduct research.
Would you mind if I ask which part of the block driver is connected to the NVMe command that FEMU receives, and how they are matched?

Run FEMU as an emulated blackbox SSD

Before I run ./run-blackbox.sh, I delete the file vssd1.conf. I find that FEMU still works with a 1GB SSD device; why? Does the vssd1.conf file actually configure the 1GB SSD device? If not, how can I change the blackbox SSD settings? Thanks!

How can I emulate multiple nvme devices ?

Hi. Thanks for creating this software first.

I'm trying to emulate multiple blackbox SSDs in FEMU.
Is there any way to implement it?
Thanks again for your help :)

Trying to emulate a larger SSD size

Hey there,
I'm currently trying to use your system with an SSD size of 16GB while running in black-box mode.
I've tried changing it inside the black-box script and inside the ssd1.conf file, but I keep getting the error message "exceed sector number".
What changes should I make in the ssd1.conf file so it will be compatible with the size I've set in the NVMEIMGSZ variable inside the black-box script?

Whether femu implements erase

Does FEMU implement the erase function? I built an erase command myself.
tgt_submit_erase(545);
tgt_submit_read(545); ==> output is 0
tgt_submit_write(545, 6);// write 6 into the page
tgt_submit_read(545); ==> output is 6, is ok
tgt_submit_erase(545);
tgt_submit_read(545); ==> output is still 6 , there??
I don't know whether FEMU doesn't implement erase or whether my code has a problem. Thank you~

FEMU compilation error due to meson

I am getting the following error when compiling FEMU.

../tests/qtest/meson.build:73:2: ERROR: No program name specified.

A full log can be found at /media/sdd/KVDM/FEMU/build-femu/meson-logs/meson-log.txt

ERROR: meson setup failed

config-host.mak is out-of-date, running configure
bash: line 3: ./config.status: No such file or directory
make: *** No rule to make target 'config-host.mak', needed by 'meson.stamp'.  Stop.

===> FEMU compilation done ...

My machine setup is as follows.
OS: ubuntu1~16.04.12
Kernel: 4.4.0-210-generic
GCC: 5.4.0
Ninja: 1.10.2
Python: 3.8.5

I have tried the solution at this page https://bugs.launchpad.net/qemu/+bug/1892533 to no avail.

Could you please suggest how to resolve the issue?

Does the SSD emulator support TRIM?

Hi,
I'm looking at the code that emulates an SSD device.
The TRIM-related code in '/femu/hw/block/ssd/ssd_trim_manager.c' has been commented out.
Does the emulator not yet support the TRIM command?

Disable or redirect femu debug outputs

Hi. Thanks for creating this software first.
I launched run-blackbox.sh and noticed that debug outputs like ppn[-1] not mapped and some GC info are printed directly to ttyS0, mixing with the VM output.
Is there any way to disable all of them or redirect them to another place?

Thanks in advance.

pblk mydevice: corrupted read LBA (0/xxxxx)

When I test OCSSD 2.0 writes and reads using fio, the error "pblk mydevice: corrupted read LBA (0/xxxxx)" appears.

This is the script I used for testing.

#!/bin/bash
sudo insmod /lib/modules/5.4.0-70-generic/kernel/drivers/lightnvm/pblk.ko
sudo nvme lnvm create -d nvme0n1 -n mydevice -t pblk -b 0 -e 7
# 100% random, 70% read, 30% write, 4K
sudo fio -filename=/dev/mydevice -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=4k -size=2G -numjobs=50 -runtime=180 -group_reporting -name=randrw_70read_4k

Latency fluctuation observed

Hi,
After running FEMU as a whitebox SSD, I use a self-written kernel module to measure the actual latency of requests. The kernel module creates N kernel threads, where N equals the number of LUNs. Each thread first erases the whole corresponding LUN and then generates write requests targeting sequential pages inside that LUN.
I observe some latency fluctuation, especially for the first write request (up to tens of milliseconds). Have you noticed this issue? I assume it is caused by host page faults?

Bandwidth with differential channels

I have set the SSD parameters as follows, and then tested the bandwidth with the command:
sudo time dd if=/dev/nvme0n1 of=/dev/null bs=8k count=1024. It seems that the bandwidth is almost the same in both cases. Why?
spp->secsz = 512;
spp->secs_per_pg = 8;
spp->pgs_per_blk = 256;
spp->blks_per_pl = 1024; /* 64GB */
spp->pls_per_lun = 1;
spp->luns_per_ch = 8;
spp->nchs = 2 or 10;

Emulate a large SSD

When I emulate a disk of 256GB, it is OK. But when I emulate a disk of 512GB, it reports the error "Invalid State!" while reading or writing.
The ftl.c settings are shown as follows:
512GB:
spp->secsz = 512;
spp->secs_per_pg = 8;
spp->pgs_per_blk = 1024;
spp->blks_per_pl = 1024; /* 16GB */
spp->pls_per_lun = 1;
spp->luns_per_ch = 32;
spp->nchs = 4;
256GB:
spp->secsz = 512;
spp->secs_per_pg = 8;
spp->pgs_per_blk = 1024;
spp->blks_per_pl = 1024; /* 16GB */
spp->pls_per_lun = 1;
spp->luns_per_ch = 32;
spp->nchs = 2;

memfd_create declared twice error

The compile-time error occurs on my Ubuntu18.04 as follows:

/home/sirius/labs-proj/femu/util/memfd.c:40:12: error: static declaration of ‘memfd_create’ follows non-static declaration
 static int memfd_create(const char *name, unsigned int flags)
            ^~~~~~~~~~~~
In file included from /usr/include/x86_64-linux-gnu/bits/mman-linux.h:115:0,
                 from /usr/include/x86_64-linux-gnu/bits/mman.h:45,
                 from /usr/include/x86_64-linux-gnu/sys/mman.h:41,
                 from /home/sirius/labs-proj/femu/include/sysemu/os-posix.h:29,
                 from /home/sirius/labs-proj/femu/include/qemu/osdep.h:104,
                 from /home/sirius/labs-proj/femu/util/memfd.c:28:
/usr/include/x86_64-linux-gnu/bits/mman-shared.h:46:5: note: previous declaration of ‘memfd_create’ was here
 int memfd_create (const char *__name, unsigned int __flags) __THROW;
     ^~~~~~~~~~~~
/home/sirius/labs-proj/femu/rules.mak:69: recipe for target 'util/memfd.o' failed

The reason is that in femu/util/memfd.c:40:12 "static int memfd_create" is declared after "int memfd_create" has been declared in /usr/include/x86_64-linux-gnu/bits/mman-linux.h:115:0.

I rename "static int memfd_create" to "static int tmp_memfd_create" in femu/util/memfd.c:40:12 and compilation finishes successfully.
So a rename to avoid double declaration is necessary.

It looks like TRIM(DISCARD) request is being processed like a READ request.

I am modifying FEMU to support the TRIM (DISCARD) function (for bbssd).
By the way, it looks like TRIM (DISCARD) requests are being processed like READ requests.
(Of course, when mounting a filesystem on FEMU there is no problem at all unless you use the -o discard option. This issue only occurs when the -o discard option is used.)

See the nvme_process_sq_io function in FEMU/hw/block/femu/nvme-io.c.
It seems that all requests returning NVME_SUCCESS from nvme_io_cmd are forwarded to the FTL thread.

However, since DSM requests (which are not READ/WRITE requests) also return NVME_SUCCESS from the nvme_io_cmd function, they too are transferred to the FTL thread and are processed as if they were read requests (since the is_write variable is 0).

Of course, this does not involve data DMA to the host, but it increases the latency of the LUN as much as processing a read request would, which can make the emulation result inaccurate. (This is because the FTL processes a READ on the LUN even though no host READ request has occurred.)

So, if you have no plan to support TRIM(DISCARD), FTL threads will probably need modifications like invalidating DSM requests.
(Among DSM requests, I know that the request with the NVME_DSMGMT_AD attribute is the DISCARD request.)

Could you please review if my analysis is correct?
I'm not familiar with the QEMU and NVMe protocols, so maybe my analysis was wrong.
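
(For concreteness, this is a minimal sketch of the filtering I have in mind; the opcode values are the standard NVMe I/O opcodes, but the helper itself and how it would be wired into nvme_process_sq_io are hypothetical.)

#include <stdbool.h>
#include <stdint.h>

/* Standard NVMe I/O opcodes */
#define NVME_CMD_FLUSH 0x00
#define NVME_CMD_WRITE 0x01
#define NVME_CMD_READ  0x02
#define NVME_CMD_DSM   0x09  /* Dataset Management, which carries DISCARD */

/* Only READ/WRITE commands should be charged flash latency by the FTL thread;
 * DSM (DISCARD) and other opcodes would be completed without it. */
static bool forward_to_ftl(uint8_t opcode)
{
    return opcode == NVME_CMD_READ || opcode == NVME_CMD_WRITE;
}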

unable to see large difference in kvm_exit with/without doorbell in guest

Hello FEMU Experts,

First of all, thank you for building and open-sourcing your work!

I am trying to understand the doorbell overhead that you cite in your FAST paper. I do small random writes on an OC-SSD target. I run this with and without the 1-LOC addition (return false) that you describe in the "tweaking Guest linux" sub-section of the README. I see a very small difference in the number of kvm_exit events. May I ask the approximate ratio/frequency of the drop in VM exits you obtained with/without the guest modification?

Steps to reproduce:

  1. run FEMU in "whitebox" mode using Ubuntu 16.04 guest VM and an NVMe disk
  2. compile and build latest linux kernel - v 5.0.0+.
  3. create a nvme device using nvme-cli that supports liblightnvm

sudo nvme lnvm create -d nvme0n1 -n pblkdev -t pblk -b 0 -e 7

  4. In the host terminal, run the perf tool to count the number of kvm_exit events:

sudo perf record -e kvm:kvm_exit -ag

  5. In the guest, run the following:
    for i in `shuf -i 1-10000 -n 10000`;
    do
    dd if=/dev/zero of=/dev/nvme0n1 bs=4096 count=1 seek=$i;
    done

  6. Stop the perf tool and generate the report:

sudo perf report -n

Now, make the change in

nvme_dbbuf_update_and_check_event()

  7. Recompile, reboot, re-run perf record in the host, re-run the random writes in the guest, terminate perf record in the host, and get the perf report.

The number of guest exits I get for MSR_WRITE events:

with doorbell : 690050
without doorbell : 691373

Is this normal? Is there any other way to detect how many guest exits were avoided after making the tweak you describe in the guest OS?

Thanks again for your help!

my abnormal ext4 test in blackbox-ssd

I use fio to test f2fs and ext4 performance on a blackbox SSD separately. The SSD capacity is 16G, with 8 channels and a 4K flash page size.
fio configuration: rw, bs=4096, size=1GB, numjobs=2, runtime=60
f2fs: IOPS=41.2k, BW=161MiB/s
ext4: IOPS=3367, BW=13.2MiB/s ????
I have tried it many times.
Thank you~~

A block, whose valid page is zero, is always selected as victim.

I ran the fio script below, and then I found that a block whose valid page count is zero is always selected as the victim.

[global]
name=fio-rand-write
filename=/dev/nvme0n1
rw=randwrite
bs=4K
direct=1
numjobs=1
;time_based=1
;runtime=5
group_reporting=1

[file1]
size=3G
ioengine=libaio
iodepth=32

I want to emulate a situation where a block with a non-zero valid page count can be selected as the victim after running the fio script. Is that possible?

How to RUN FEMU

I compiled FEMU and downloaded the FEMU VM image you provided.
Could you show detailed instructions on how to run FEMU?
Since I'm a newbie to QEMU, is there any reference that would help me become familiar with the whole system?

Thanks

`vpc` change of `[0ab8c45]Handle bbssd per-line valid page count correctly` is unnecessary

As I commented in this comment:

In /hw/block/femu/bbssd/ftl.c, the set_pri callback is as follows:

static inline void victim_line_set_pri(void *a, pqueue_pri_t pri)
{
    ((struct line *)a)->vpc = pri;
}

In hw/block/femu/lib/pqueue.c, pqueue_change_priority has already changed the vpc to new_vpc (equal to old_vpc - 1) by calling setpri:

void pqueue_change_priority(pqueue_t *q, pqueue_pri_t new_pri, void *d)
{
    size_t posn;
    pqueue_pri_t old_pri = q->getpri(d);

    q->setpri(d, new_pri);
    posn = q->getpos(d);
    if (q->cmppri(old_pri, new_pri))
        bubble_up(q, posn);
    else
        percolate_down(q, posn);
}

So we shouldn't move the line line->vpc--; out of the else statement; otherwise the vpc will be decreased by 2.

Create a file in bbssd in FTL

Discussed in https://github.com/ucare-uchicago/FEMU/discussions/70

Originally posted by newcloudsu February 19, 2022
Sorry, I have some questions.
I run FEMU in black-box mode and I create a large file in /dev/nvme0n1 (e.g., 2GB).
In ftl.c, I think the "ssdparam" struct will allocate 2GB of space to support it, and then I try to trace it, so I print the get_tbl_ent() ppa. But it looks like not enough space is allocated.
How do I know the system has already allocated enough space for the file I create?

I modified it to 4GB:
spp->secsz = 512;
spp->secs_per_pg = 8;
spp->pgs_per_blk = 256;
spp->blks_per_pl = 32; /* 16GB */
spp->pls_per_lun = 1;
spp->luns_per_ch = 8;
spp->nchs = 8;

Theoretically, I think that if I create a 2G file twice (create-delete-create), it should go to do_gc() in ftl.c (supposing /dev/nvme0n1 is 4GB, I think writing more than 4GB in total will trigger GC), but it doesn't.

How can I trigger do_gc() by allocating close to the full space?

Maybe my thinking is wrong.
Thanks for it!

Cannot write OCSSD2.0 of FEMU using SPDK

When I run the hello_world example (https://github.com/spdk/spdk/tree/v20.10/examples/nvme/hello_world) of SPDK (https://github.com/spdk/spdk/tree/v20.10) in FEMU's OCSSD 2.0 mode, I don't see "Hello world!" being printed. But when I use FEMU's ZNS mode, the hello_world example runs successfully. The hello_world example writes the "Hello world!" string to the device and then reads it back.

- OCSSD2.0 Behavior

femu@fvm:~/spdk$ sudo ./scripts/setup.sh
[sudo] password for femu:
0000:00:04.0 (1af4 1004): Active mountpoints on /dev/sda2, so not binding PCI dev
0000:00:05.0 (1d1d 1f1f): nvme -> uio_pci_generic
femu@fvm:~/spdk$ sudo build/examples/hello_world
[2021-03-25 12:33:54.422615] Starting SPDK v20.10 git sha1 e5d26ecc2 / DPDK 20.08.0 initialization...
[2021-03-25 12:33:54.422893] [ DPDK EAL parameters: [2021-03-25 12:33:54.423293] hello_world [2021-03-25 12:33:54.4]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attaching to 0000:00:05.0
[2021-03-25 12:33:54.557991] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.558158] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.558705] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.559213] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.559826] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.560356] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.560929] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.561458] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.562018] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.562548] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.563114] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.563662] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.564237] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.564777] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.566232] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.566452] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.567013] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.567580] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.568115] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.568646] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.569262] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.569775] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.570339] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.570882] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.571464] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.572023] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.572611] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.573141] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
Attached to 0000:00:05.0
Using controller FEMU OpenChannel-SSD (vOCSSD0 ) with 1 namespaces.
Namespace ID: 1 size: 4GB
Initialization complete.
[2021-03-25 12:33:54.660258] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.660603] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.661115] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.661657] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.662186] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.662699] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.663223] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.663785] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.664321] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.664829] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.665445] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.665943] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.666482] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.667027] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.667893] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.668459] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.669074] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.669723] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.670350] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.670993] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.671630] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.672258] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.672908] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.673509] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.674129] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.674706] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.675340] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.800299] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
INFO: using host memory buffer for IO
[2021-03-25 12:33:54.801837] nvme_qpair.c: 280:nvme_io_qpair_print_command: NOTICE: OCSSD / VECTOR RESET (90) sqi1
[2021-03-25 12:33:54.802126] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID RESET (02/c1) qid:1 ci1
[2021-03-25 12:33:54.802562] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.802858] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.803197] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.803499] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.803882] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.804171] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.804499] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.804787] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.805123] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.805726] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.806084] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.806369] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.806703] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.807000] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.807413] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.807641] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.807978] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.808281] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.808613] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.808908] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.809603] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.809849] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.810192] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.810473] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.886881] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.887098] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.887430] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.887730] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.891282] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.891500] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.891812] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.892125] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.892433] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.892749] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1
[2021-03-25 12:33:54.893064] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) q
[2021-03-25 12:33:54.893695] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 c1

- ZNS Behavior

femu@fvm:~/MyFile/spdk$ sudo ./scripts/setup.sh
[sudo] password for femu:
0000:00:04.0 (1af4 1004): Active mountpoints on /dev/sda2, so not binding PCI dev
0000:00:05.0 (8086 5845): nvme -> uio_pci_generic
femu@fvm:~/MyFile/spdk$ sudo build/examples/hello_world
[2021-03-25 13:04:08.826105] Starting SPDK v20.10 git sha1 e5d26ecc2 / DPDK 20.08.0 initialization...
[2021-03-25 13:04:08.827183] [ DPDK EAL parameters: [2021-03-25 13:04:08.827416] hello_world [2021-03-25 13:04:08.827691] -c 0x1 [2021-03-25 13:04:08.827986] --log-level=lib.eal:6 [2021-03-25 13:04:08.828292] --log-level=lib.cryptodev:5 [2021-03-25 13:04:08.]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attaching to 0000:00:05.0
[2021-03-25 13:04:08.958469] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.959193] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:0006 p:1 m:0 dnr:1
[2021-03-25 13:04:08.959488] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.959740] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:0007 p:1 m:0 dnr:1
[2021-03-25 13:04:08.960059] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.960297] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0008 p:1 m:0 dnr:1
[2021-03-25 13:04:08.960602] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.960876] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:0009 p:1 m:0 dnr:1
[2021-03-25 13:04:08.961181] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.961462] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:000a p:1 m:0 dnr:1
[2021-03-25 13:04:08.961773] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.962036] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:000b p:1 m:0 dnr:1
[2021-03-25 13:04:08.962364] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.962621] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:000c p:1 m:0 dnr:1
[2021-03-25 13:04:08.963387] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.963480] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:000d p:1 m:0 dnr:1
[2021-03-25 13:04:08.963872] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.964049] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:000e p:1 m:0 dnr:1
[2021-03-25 13:04:08.964361] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.964591] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:000f p:1 m:0 dnr:1
[2021-03-25 13:04:08.964918] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.965141] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:0010 p:1 m:0 dnr:1
[2021-03-25 13:04:08.965458] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.965706] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:0012 p:1 m:0 dnr:1
[2021-03-25 13:04:08.966013] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.966281] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0013 p:1 m:0 dnr:1
[2021-03-25 13:04:08.966588] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:08.967170] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:0014 p:1 m:0 dnr:1
Attached to 0000:00:05.0
Using controller FEMU ZNS-SSD Control (vZNSSD0 ) with 1 namespaces.
Namespace ID: 1 size: 4GB
Initialization complete.
[2021-03-25 13:04:09.039279] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.039390] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:0015 p:1 m:0 dnr:1
[2021-03-25 13:04:09.039687] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.039951] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:0016 p:1 m:0 dnr:1
[2021-03-25 13:04:09.040300] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.040562] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
[2021-03-25 13:04:09.040886] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.041133] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
[2021-03-25 13:04:09.041479] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.041717] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:001a p:1 m:0 dnr:1
[2021-03-25 13:04:09.042041] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.042308] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:001b p:1 m:0 dnr:1
[2021-03-25 13:04:09.042615] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.043190] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:001c p:1 m:0 dnr:1
[2021-03-25 13:04:09.043674] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.044009] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:001d p:1 m:0 dnr:1
[2021-03-25 13:04:09.044447] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.044707] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:001e p:1 m:0 dnr:1
[2021-03-25 13:04:09.045101] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.045459] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:001f p:1 m:0 dnr:1
[2021-03-25 13:04:09.045824] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.046152] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:0000 p:1 m:0 dnr:1
[2021-03-25 13:04:09.046521] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.047216] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
[2021-03-25 13:04:09.047605] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.047971] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
[2021-03-25 13:04:09.048353] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.048675] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
INFO: using host memory buffer for IO
Hello world!
[2021-03-25 13:04:09.140279] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.140730] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
[2021-03-25 13:04:09.141184] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.141581] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:1
[2021-03-25 13:04:09.142053] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.142457] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:1
[2021-03-25 13:04:09.143351] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.143745] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:1
[2021-03-25 13:04:09.144233] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.144606] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:1
[2021-03-25 13:04:09.145091] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.145448] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:1
[2021-03-25 13:04:09.145962] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.147189] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:1
[2021-03-25 13:04:09.147732] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.148072] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:1
[2021-03-25 13:04:09.148575] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.148903] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:1
[2021-03-25 13:04:09.149409] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.149807] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:1
[2021-03-25 13:04:09.150298] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:22 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.150657] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:1
[2021-03-25 13:04:09.151147] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.151890] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:1
[2021-03-25 13:04:09.152355] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.152751] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:1
[2021-03-25 13:04:09.153223] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:19 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.234478] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:1
[2021-03-25 13:04:09.235283] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:23 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.235321] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:1
[2021-03-25 13:04:09.235345] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:21 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.235366] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:1
[2021-03-25 13:04:09.235388] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:18 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.235410] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:1
[2021-03-25 13:04:09.235432] nvme_qpair.c: 248:nvme_admin_qpair_print_command: NOTICE: ASYNC EVENT REQUEST (0c) qid:0 cid:20 nsid:0 cdw10:00000000 cdw11:00000000
[2021-03-25 13:04:09.235452] nvme_qpair.c: 451:spdk_nvme_print_completion: NOTICE: INVALID OPCODE (00/01) qid:0 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:1

- QEMU Behavior

qemu@qemu:~/MyFile/spdk$ sudo build/examples/hello_world
[2021-03-25 21:10:53.785401] Starting SPDK v21.01-pre git sha1 7f6afb7bc / DPDK 20.08.0 initialization...
[2021-03-25 21:10:53.785582] [ DPDK EAL parameters: [2021-03-25 21:10:53.785626] hello_world [2021-03-25 21:10:53.785654] -c 0x1 [2021-03-25 21:10:53.785687] --log-level=lib.eal:6 [2021-03-25 21:10:53.785708] --log-level=lib.cryptodev:5 [2021-03-25 21:10:53.]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attaching to 0000:00:03.0
Attached to 0000:00:03.0
Using controller QEMU NVMe Ctrl (deadbeef ) with 1 namespaces.
Namespace ID: 1 size: 51GB
Initialization complete.
INFO: using host memory buffer for IO
Hello world!

OCSSD oob_meta Bug

When I tried to write and read OCSSD sectors with oob_meta data, I found that only the latest oob_meta data can be read correctly. So I read the code of femu_oc12.c and found that no ppa offset is added to ln->meta_buf.

Here is the code:

int femu_oc_meta_write(FEMU_OC_Ctrl *ln, void *meta)
{
    memcpy(ln->meta_buf, meta, ln->params.sos);
    return 0;
}

int femu_oc_meta_read(FEMU_OC_Ctrl *ln, void *meta)
{
    memcpy(meta, ln->meta_buf, ln->params.sos);
    return 0;
}

No ppa offset is added to ln->meta_buf, so I hope you can fix it as soon as possible. Thanks.
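
(For reference, here is a minimal sketch of the kind of fix I mean; the extra ppa parameter is hypothetical, and the real call sites would need to pass the sector's physical address:)

int femu_oc_meta_write(FEMU_OC_Ctrl *ln, uint64_t ppa, void *meta)
{
    /* index meta_buf by the sector's PPA so each sector keeps its own OOB metadata */
    memcpy((char *)ln->meta_buf + ppa * ln->params.sos, meta, ln->params.sos);
    return 0;
}

int femu_oc_meta_read(FEMU_OC_Ctrl *ln, uint64_t ppa, void *meta)
{
    memcpy(meta, (char *)ln->meta_buf + ppa * ln->params.sos, ln->params.sos);
    return 0;
}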

FEMU fails with guest kernel 4.16

Greeting!

I'm using FEMU as an emulator in one VM. I need to install a driver on the VM but it only supports specific kernel versions (4.13 and 4.15). Therefore, I attempted to create images with kernel 4.13 following the instruction in README, as I knew FEMU is not compatible with kernel 4.15. However, when I appended the FEMU device to the QEMU VM, I got errors about NVMe related errors.

Since the README says FEMU works with guest kernels 4.16, 4.20, 5.4, and 5.10 in blackbox mode, I tested each of these versions, but found that FEMU only works with 5.4 and 5.10. The boot process hits the same error I saw with kernel 4.13.

I suspect that whatever makes booting fail with kernels 4.16 and 4.20 is also what prevents the VM from booting with kernel 4.13 in blackbox mode. Could anyone help me check whether my image creation is wrong or whether I am missing a parameter somewhere in the procedure?

Testbed:

QEMU version 5.2.0
Guest kernel version 4.16

GRUB config:

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"
GRUB_CMDLINE_LINUX=""
GRUB_CMDLINE_LINUX="ip=dhcp console=ttyS0,115200 console=tty console=ttyS0"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

QEMU Command:
sudo x86_64-softmmu/qemu-system-x86_64 \
    -name "FEMU-BBSSD-VM" \
    -enable-kvm \
    -cpu host \
    -smp 4 \
    -m 4G \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,drive=hd0 \
    -drive file=/home/danlin/images/u18.qcow2,if=none,aio=native,cache=none,format=qcow2,id=hd0 \
    -device femu,devsz_mb=4096,femu_mode=1 \
    -net user,hostfwd=tcp::6432-:22 \
    -net nic,model=virtio \
    -qmp unix:./qmp-sock,server,nowait 2>&1 | tee log

Error messages:

[    1.213700] Freeing unused kernel memory: 2008K
[    1.217629] Freeing unused kernel memory: 1900K
[    1.223024] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.224082] x86/mm: Checking user space page tables
[    1.232831] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.344019] Floppy drive(s): fd0 is 2.88M AMI BIOS
[    1.367409] FDC 0 is a S82078B
[    1.369352] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[    1.373780] cryptd: max_cpu_qlen set to 1000
[    1.374790] virtio_net virtio0 ens3: renamed from eth0
[    1.376688] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
[    1.379200] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
[    1.391903] AVX2 version of gcm_enc/dec engaged.
[    1.395810] AES CTR mode by8 optimization enabled
[    1.920100] tsc: Refined TSC clocksource calibration: 3407.975 MHz
[    1.921275] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fbc3c54f, max_idle_ns: 440795216628 ns
[   31.856070] nvme nvme0: I/O 215 QID 1 timeout, aborting
[   31.857022] nvme nvme0: Abort status: 0x0
[   61.936068] nvme nvme0: I/O 215 QID 1 timeout, reset controller
[   92.848071] nvme nvme0: I/O 215 QID 1 timeout, disable controller
[  242.763376]  schedule_preempt_disabled+0xe/0x10
 __mutex_lock.isra.2+0x18c/0x4d0
 ? __switch_to_asm+0x34/0x70
 ? __switch_to_asm+0x40/0x70
 __mutex_lock_slowpath+0x13/0x20
 ? __mutex_lock_slowpath+0x13/0x20
 mutex_lock+0x2f/0x40
 nvme_stop_queues+0x21/0x50
 nvme_dev_disable+0x280/0x490
 ? dev_warn+0x64/0x80
 ? schedule+0x2c/0x80
 nvme_timeout+0x240/0x2f0
 blk_mq_terminate_expired+0x4f/0x90
 bt_iter+0x4c/0x60
 blk_mq_queue_tag_busy_iter+0x162/0x260
 ? blk_mq_add_to_requeue_list+0xc0/0xc0
 ? blk_mq_add_to_requeue_list+0xc0/0xc0
 blk_mq_timeout_work+0xea/0x160
 process_one_work+0x1de/0x3e0
 worker_thread+0x32/0x410
 kthread+0x121/0x140
 ? process_one_work+0x3e0/0x3e0
 ? kthread_create_worker_on_cpu+0x70/0x70
 ret_from_fork+0x35/0x40

Overhead of writing doorbell on Guest OS

I’m a graduate student majoring in Computer Science, and I would be grateful for any suggestions.
I have a question about the overhead of a doorbell write in the guest OS when using FEMU. According to the paper, a doorbell update only writes to guest memory without a VM exit, yet in my experiment it still takes 6.5 us on average (median 5.7 us, std 18.9 us). I expected the cost to be roughly one memory access. I measured the time interval of each doorbell-write event in the nvme_write_sq_db function (Linux kernel v5.10.11, drivers/nvme/host/pci.c) running on FEMU.

  • How I calculate the time interval inside this function is shown in my first screenshot (omitted here); a sketch of this instrumentation appears below.
  • The measured results are shown in the remaining screenshots (omitted here).

Thanks!
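
For context, here is a minimal sketch of this kind of instrumentation, assuming the v5.10 shape of nvme_write_sq_db in drivers/nvme/host/pci.c; the ktime_get()/trace_printk() lines are my illustration of one way to time the doorbell update, not the reporter's exact measurement code:

/* Sketch: time the shadow-doorbell update plus MMIO fallback in
 * drivers/nvme/host/pci.c (modeled on the v5.10 function).  The ktime/
 * trace_printk lines are the added instrumentation. */
static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
{
	ktime_t t0, t1;

	if (!write_sq) {
		u16 next_tail = nvmeq->sq_tail + 1;

		if (next_tail == nvmeq->q_depth)
			next_tail = 0;
		if (next_tail != nvmeq->last_sq_tail)
			return;
	}

	t0 = ktime_get();
	if (nvme_dbbuf_update_and_check_event(nvmeq->sq_tail,
			nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
		writel(nvmeq->sq_tail, nvmeq->q_db);
	t1 = ktime_get();

	/* dump the per-event latency, e.g. via the ftrace ring buffer */
	trace_printk("sq doorbell update: %lld ns\n",
		     ktime_to_ns(ktime_sub(t1, t0)));

	nvmeq->last_sq_tail = nvmeq->sq_tail;
}

Keep in mind that the timing calls and the trace output add some overhead of their own, so the measured interval is an upper bound on the raw doorbell cost.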

The latency of LightNVM Interface is a bit long.

Hello~ I ran a simple latency test with the LightNVM interface in whitebox mode and compared it against the POSIX interface in blackbox mode.
This is what I do:

  • Start FEMU in whitebox mode;
  • Open the nvm device with nvm_dev_open;
  • Write 352 KB (360448 bytes) of data to the device;
  • Measure the latency of that write.

I find the latency is rather long, around 11 ms. Repeating the test through the POSIX interface in blackbox mode gives a clearly shorter latency, about 2 ms.
I have designed an OCSSD-based application, but its evaluation results are poor because of the large latency of the LightNVM interface. Do you have any ideas or advice about my problem?

Thanks for reading!

This is my test and result:

Code:

#include <stdio.h>
#include <liblightnvm.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <fcntl.h>


int main(int argc, char **argv)
{

	struct nvm_dev *dev = nvm_dev_open("/dev/nvme0n1");
	if (!dev) {
		perror("nvm_dev_open");
		return 1;
	}
	nvm_dev_pr(dev);

	const struct nvm_geo* geo = nvm_dev_get_geo(dev);
	nvm_geo_pr(geo);

	// fetch the ws_opt and bytes/sector
	size_t ws_opt = nvm_dev_get_ws_opt(dev);
	printf("ws_opt: %ld\n", ws_opt);
	size_t ws_min = nvm_dev_get_ws_min(dev);
	printf("ws_min: %ld\n", ws_min);

	char read_buffer[360448], write_buffer[360448];
	strcpy(write_buffer, "Hello, world!");

	// start timing.
	struct timeval start, middle, end;

	printf("######################### liblightnvm interface ############################ \n");

	// get write latency.
	gettimeofday(&start, NULL);

	struct nvm_addr addr[ws_opt];
	// write 360448B data to OCSSD.
	for(int j = 0; j < 360448 / (ws_opt * geo->g.sector_nbytes); j++){
		for(int i = 0; i < ws_opt; i++){
			// Byte offset of the (j*ws_opt + i)-th sector, converted to a generic address.
			addr[i].val = nvm_addr_off2gen(dev, (j * ws_opt + i) * geo->g.sector_nbytes).val;
		}
		nvm_cmd_write(dev, addr, ws_opt, write_buffer, nullptr, 0x0, nullptr);
	}
	gettimeofday(&middle, NULL);
	uint64_t t_cur_ltcy = (middle.tv_sec-start.tv_sec)*1000000+(middle.tv_usec-start.tv_usec);//Microsecond
	printf("lightNVM Write latency: %luus\n", t_cur_ltcy);		

	// get the read latency.
	for(int j = 0; j < 360448 / (ws_opt * geo->g.sector_nbytes); j++){
		for(int i = 0; i < ws_opt; i++){
			// Same byte-offset-to-generic-address conversion as in the write loop.
			addr[i].val = nvm_addr_off2gen(dev, (j * ws_opt + i) * geo->g.sector_nbytes).val;
		}
		nvm_cmd_read(dev, addr, ws_opt, read_buffer, nullptr, 0x0, nullptr);
	}

	gettimeofday(&end, NULL);
	t_cur_ltcy = (end.tv_sec-middle.tv_sec)*1000000+(end.tv_usec-middle.tv_usec);//Microsecond
	printf("lightNVM Read latency: %luus\n", t_cur_ltcy);		

	printf("%s\n", read_buffer);

	nvm_dev_close(dev);

	// clear read buffer.
	memset(read_buffer, 0, 360448);
}
The POSIX-interface counterpart:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
	printf("######################### Posix interface ############################ \n");

	struct timeval start, middle, end;
	char* read_buffer = new char[360448];
	char* write_buffer = new char[360448];
	memset(read_buffer, 0, 360448);
	memset(write_buffer, 0, 360448);
	strcpy(write_buffer, "Hello, world!");	

 	// open by posix interface.
	int fd = open("/dev/nvme0n1", O_RDWR|O_SYNC);
	gettimeofday(&start, NULL);
	lseek(fd, 0, SEEK_SET);
	write(fd, write_buffer, 360448);	
	gettimeofday(&middle, NULL);
	unsigned long t_cur_ltcy = (middle.tv_sec-start.tv_sec)*1000000+(middle.tv_usec-start.tv_usec);//Microsecond
	printf("Posix Write latency: %luus\n", t_cur_ltcy);		

	lseek(fd, 0, SEEK_SET);
	read(fd, read_buffer, 360448);	
	gettimeofday(&end, NULL);
	t_cur_ltcy = (end.tv_sec-middle.tv_sec)*1000000+(end.tv_usec-middle.tv_usec);//Microsecond
	printf("Posix Read latency: %luus\n", t_cur_ltcy);		

	printf("%s\n", read_buffer);
	delete[] read_buffer;
	delete[] write_buffer;
	return 0;
}

Test result 1:

######################### liblightnvm interface ############################ 
lightNVM Write latency: 11665us
lightNVM Read latency: 2781us
Hello, world!

Test result 2:

######################### Posix interface ############################ 
Posix Write latency: 2420us
Posix Read latency: 96us
Hello, world!

Does the FTL in blackbox mode decide where write commands end up?

Hello

I'm wondering whether the FTL in blackbox mode has an impact on how the underlying memory is emulated, or whether it is only used for statistical purposes.

As far as I can see, the NvmeRequest that is passed through the FTL and then sent to the poller does not have its address changed. Am I wrong in thinking this?

How to profile SSD-related information?

I want to monitor SSD-related information in FEMU.
For example, I want to track the write amplification factor (WAF) in the FTL and check its value after benchmarking.

Is there a way to check the value of a specific variable stored by FEMU?
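
One low-tech approach is to add counters in the FTL code and print them periodically. Here is a minimal sketch of that idea; the struct, function, and field names below are hypothetical, not FEMU's actual code:

/* A minimal sketch (hypothetical names, not FEMU's actual code) of tracking
 * WAF inside the blackbox FTL: count host-issued page writes and GC-induced
 * page writes, then print the ratio periodically or at the end of a run. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct ftl_stats {
    uint64_t host_w_pgs;   /* flash pages written on behalf of the host */
    uint64_t gc_w_pgs;     /* flash pages written by garbage collection */
};

static struct ftl_stats g_stats;

/* call from the host write path, where user data is programmed */
static void account_host_write(uint64_t npgs) { g_stats.host_w_pgs += npgs; }

/* call from the GC data-movement path */
static void account_gc_write(uint64_t npgs)   { g_stats.gc_w_pgs += npgs; }

/* dump the current WAF, e.g. from the FTL thread every few seconds */
static void dump_waf(void)
{
    double waf = g_stats.host_w_pgs
        ? (double)(g_stats.host_w_pgs + g_stats.gc_w_pgs) / g_stats.host_w_pgs
        : 0.0;
    printf("FEMU-FTL stats: host=%" PRIu64 " gc=%" PRIu64 " WAF=%.3f\n",
           g_stats.host_w_pgs, g_stats.gc_w_pgs, waf);
}

Since FEMU runs as a QEMU process, anything printed from the FTL code shows up on the host-side QEMU console; alternatively, the counters could be exposed through a vendor-specific NVMe admin command or a QMP query, which requires a bit more plumbing.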

pblk: corrupted read LBA

Hi,

When I run FEMU in whitebox mode, instantiate pblk, and run mkfs, dmesg warns me about "corrupted read LBA".

This happens every time pblk tries to read data from the device.
It seems that the rqd does not return the LBA metadata correctly.
The same configuration under qemu-nvme does not appear to print these warnings.

Note that although dmesg warns me, the mkfs command returns successfully and I can mount the device subsequently.

Did I misconfigure something?

Guest OS :
CentOS 7
Kernel version 4.16.0

VM startup scripts:

/root/qhw/femu/build-femu/x86_64-softmmu/qemu-system-x86_64 \
        -name "qhwVM" \
        -m 32G \
        -smp 32 \
        --enable-kvm \
        -net nic,macaddr=52:54:00:17:21:72 -net tap,ifname=tap0,script=/var/lib/libvirt/images/qemu-ifup.sh,downscript=no \
        -device virtio-scsi-pci,id=scsi0 \
        -hda /mnt/sdc/qhw/images/qhwImage.qcow2 \
        -hdb /home/qhw/VMimages/backdrive.raw \
        -drive file=/home/qhw/VMimages/vssd1.raw,if=none,aio=threads,format=raw,id=id0 \
        -device nvme,drive=id0,serial=serial0,id=nvme0,namespaces=1,lver=1,lmetasize=16,ll2pmode=0,nlbaf=5,lba_index=3,mdts=10,lnum_ch=8,lnum_lun=4,lnum_pln=2,lsec_size=4096,lsecs_per_pg=4,lpgs_per_blk=512,ldebug=0,femu_mode=0 \
        -qmp unix:./qmp-sock,server,nowait

Warning messages look like this:

May 17 22:54:15 qhwVirt kernel: ------------[ cut here ]------------
May 17 22:54:15 qhwVirt kernel: pblk: corrupted read LBA
May 17 22:54:15 qhwVirt kernel: WARNING: CPU: 26 PID: 8489 at /usr/src/kernels/linux/drivers/lightnvm/pblk-read.c:129 __pblk_end_io_read+0x1a0/0x210 [pblk]
May 17 22:54:15 qhwVirt kernel: Modules linked in: rwGenerator(OE) pblk(OE) ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ppdev parport_pc parport pcspkr i2c_piix4 sg ip_tables ext4 mbcache jbd2 sr_mod cdrom sd_mod ata_generic pata_acpi bochs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm virtio_scsi ata_piix libata e1000 virtio_pci i2c_core virtio_ring serio_raw floppy virtio dm_mirror dm_region_hash dm_log
May 17 22:54:15 qhwVirt kernel: dm_mod
May 17 22:54:15 qhwVirt kernel: CPU: 26 PID: 8489 Comm: bash Tainted: G           OE    4.16.00adb32858_DisableNVMeDoorbellWriteForFemu+ #6
May 17 22:54:15 qhwVirt kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org 04/01/2014
May 17 22:54:15 qhwVirt kernel: RIP: 0010:__pblk_end_io_read+0x1a0/0x210 [pblk]
May 17 22:54:15 qhwVirt kernel: RSP: 0018:ffff9cf8df483cd0 EFLAGS: 00010086
May 17 22:54:15 qhwVirt kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006
May 17 22:54:15 qhwVirt kernel: RDX: 0000000000000000 RSI: 0000000000000096 RDI: ffff9cf8df496970
May 17 22:54:15 qhwVirt kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000002c8
May 17 22:54:15 qhwVirt kernel: R10: 0000000000000004 R11: 0000000000000000 R12: ffff9cf8ddfc2c40
May 17 22:54:15 qhwVirt kernel: R13: ffff9cf8d2e63e00 R14: 0000000000000001 R15: ffff9cf8dd3b9800
May 17 22:54:15 qhwVirt kernel: FS:  00007fe0895df740(0000) GS:ffff9cf8df480000(0000) knlGS:0000000000000000
May 17 22:54:15 qhwVirt kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 17 22:54:15 qhwVirt kernel: CR2: 0000000000f6d3c8 CR3: 00000006c6adc000 CR4: 00000000000006e0
May 17 22:54:15 qhwVirt kernel: Call Trace:
May 17 22:54:15 qhwVirt kernel: <IRQ>
May 17 22:54:15 qhwVirt kernel: nvme_nvm_end_io+0x2a/0x40
May 17 22:54:15 qhwVirt kernel: __blk_mq_complete_request+0xb9/0x180
May 17 22:54:15 qhwVirt kernel: blk_mq_complete_request+0x63/0xb0
May 17 22:54:15 qhwVirt kernel: nvme_process_cq+0xdd/0x1a0
May 17 22:54:15 qhwVirt kernel: nvme_irq+0x1e/0x50
May 17 22:54:15 qhwVirt kernel: __handle_irq_event_percpu+0x3a/0x1b0
May 17 22:54:15 qhwVirt kernel: handle_irq_event_percpu+0x30/0x70
May 17 22:54:15 qhwVirt kernel: handle_irq_event+0x3d/0x60
May 17 22:54:15 qhwVirt kernel: handle_edge_irq+0x8a/0x190
May 17 22:54:15 qhwVirt kernel: handle_irq+0xa7/0x130
May 17 22:54:15 qhwVirt kernel: ? kvm_clock_read+0x21/0x30
May 17 22:54:15 qhwVirt kernel: ? sched_clock+0x5/0x10
May 17 22:54:15 qhwVirt kernel: do_IRQ+0x43/0xc0
May 17 22:54:15 qhwVirt kernel: common_interrupt+0xf/0xf
May 17 22:54:15 qhwVirt kernel: RIP: 0010:__do_softirq+0x6f/0x288
May 17 22:54:15 qhwVirt kernel: RSP: 0018:ffff9cf8df483f78 EFLAGS: 00000206 ORIG_RAX: ffffffffffffffde
May 17 22:54:15 qhwVirt kernel: RAX: ffff9cf8d61a5800 RBX: ffff9cf8df495f40 RCX: 0000000000000002
May 17 22:54:15 qhwVirt kernel: RDX: 0000000000000000 RSI: 000000000000ee44 RDI: 0000000000000838
May 17 22:54:15 qhwVirt kernel: RBP: 0000000000000000 R08: 00000000e8ba357a R09: 0000000000000001
May 17 22:54:15 qhwVirt kernel: R10: 0000000000000004 R11: 0000000000000005 R12: 0000000000000000
May 17 22:54:15 qhwVirt kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
May 17 22:54:15 qhwVirt kernel: ? common_interrupt+0xa/0xf
May 17 22:54:15 qhwVirt kernel: irq_exit+0xd5/0xe0
May 17 22:54:15 qhwVirt kernel: smp_apic_timer_interrupt+0x60/0x140
May 17 22:54:15 qhwVirt kernel: apic_timer_interrupt+0xf/0x20
May 17 22:54:15 qhwVirt kernel: </IRQ>
May 17 22:54:15 qhwVirt kernel: RIP: 0010:get_signal+0x2da/0x6a0
May 17 22:54:15 qhwVirt kernel: RSP: 0018:ffffa8a64e34fd70 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff12
May 17 22:54:15 qhwVirt kernel: RAX: 0000000000000000 RBX: 0000000000000011 RCX: ffffa8a64e34fe28
May 17 22:54:15 qhwVirt kernel: RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff9cf8d4c920c8
May 17 22:54:15 qhwVirt kernel: RBP: ffffa8a64e34fe48 R08: 0000000000026e10 R09: ffffffff9b09b0b7
May 17 22:54:15 qhwVirt kernel: R10: ffff9cf8df4a6e10 R11: ffffce47a0539f00 R12: ffff9cf8d4c91ac8
May 17 22:54:15 qhwVirt kernel: R13: 0000000000000010 R14: ffff9cf8d4c91ac0 R15: ffff9cf8d61a5800
May 17 22:54:15 qhwVirt kernel: ? __dequeue_signal+0x177/0x240
May 17 22:54:15 qhwVirt kernel: do_signal+0x36/0x650
May 17 22:54:15 qhwVirt kernel: exit_to_usermode_loop+0x45/0x8f
May 17 22:54:15 qhwVirt kernel: do_syscall_64+0x172/0x1a0
May 17 22:54:15 qhwVirt kernel: entry_SYSCALL_64_after_hwframe+0x3d/0xa2
May 17 22:54:15 qhwVirt kernel: RIP: 0033:0x7fe088c11480
May 17 22:54:15 qhwVirt kernel: RSP: 002b:00007ffd52f190b8 EFLAGS: 00000246 ORIG_RAX: 000000000000000e
May 17 22:54:15 qhwVirt kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fe088c11480
May 17 22:54:15 qhwVirt kernel: RDX: 0000000000000000 RSI: 00007ffd52f19140 RDI: 0000000000000002
May 17 22:54:15 qhwVirt kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000f5b590
May 17 22:54:15 qhwVirt kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
May 17 22:54:15 qhwVirt kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000f78760
May 17 22:54:15 qhwVirt kernel: Code: 4c 89 ff e8 33 76 ff ff e9 d3 fe ff ff 31 c0 e9 46 ff ff ff 48 c7 c7 48 81 4a c0 31 c0 48 89 54 24 08 48 89 0c 24 e8 50 d8 be da <0f> 0b 48 8b 54 24 08 48 8b 0c 24 e9 db fe ff ff 80 3d f4 ed 00
May 17 22:54:15 qhwVirt kernel: ---[ end trace c99d7929276443cd ]---


I made a docker image for this

This is an exciting project, but it is not that easy to install, so I made a Docker image for it and uploaded it to Docker Hub. It is based on a Python image, so the OS is Debian.

You can get it with:
docker pull lijiali1101/femu
You also need to give the container access to the host's /dev, so run it with:
docker run --privileged -it -v /dev:/dev lijiali1101/femu /bin/bash

The code lives in /home/Code/femu, and I have modified the shell scripts to remove "sudo", so you can run "./run-blackbox.sh" and the other two scripts directly.

I use the qcow2 image provided by the author, so both the username and the password are "femu".

I have tested it several times and it works fine. Hopefully installation is no longer a hassle ^_^

ZNS mode is not functional

Hi, I used the run-zns.sh script and u20s.qcow2 from this repository to run FEMU (latest commit) in ZNS mode. However, it doesn't seem to be functional.

femu@fvm:~$ sudo blkzone report /dev/nvme0n1
blkzone: /dev/nvme0n1: unable to determine zone size
femu@fvm:~$ cat /sys/block/nvme0n1/queue/nr_zones
0
femu@fvm:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0       2:0    1    4K  0 disk
loop0     7:0    0   55M  1 loop /snap/core18/1880
loop1     7:1    0 55.4M  1 loop /snap/core18/2128
loop2     7:2    0 71.3M  1 loop /snap/lxd/16099
loop3     7:3    0 70.3M  1 loop /snap/lxd/21029
loop4     7:4    0 32.3M  1 loop /snap/snapd/12883
loop5     7:5    0 29.9M  1 loop /snap/snapd/8542
sda       8:0    0   80G  0 disk
├─sda1    8:1    0    1M  0 part
└─sda2    8:2    0   80G  0 part /
nvme0n1 259:0    0    4G  0 disk
femu@fvm:~$ nvme list
Failed to open /dev/nvme0
Node                  SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1                                                                        1           0.00   B /   0.00   B      1   B +  0 B

femu@fvm:~/git/nvme-cli$ sudo ./nvme zns id-ns /dev/nvme0n1
NVMe status: INVALID_FIELD: A reserved coded value or an unsupported value in a defined field(0x4002)
femu@fvm:~/git/nvme-cli$ sudo ./nvme zns id-ctrl /dev/nvme0n1
NVMe ZNS Identify Controller:
zasl    : 0

run-blackbox.sh error

First, thanks for creating this great software!

I built FEMU and ran run-blackbox.sh, but I got messages like this:

[    0.421087] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc068 irq 15
Coperd,nvme_add_kvm_msi_virq,cq[1]->virq=26
Coperd,DBBUF,sq[1]:db=5272289288,ei=5285466120
Coperd,DBBUF,cq[1]:db=5272289292,ei=5285466124
Coperd, nvme_set_db_memory returns SUCCESS!
Coperd, Admin CMD (12) returns [65535]
[  ppn[-1] not mapped!!!
  0.424216] e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
[ppn[-1] not mapped!!!
    0.425723] e100: Copyright(c) 1999-2006 Intel Corporation
[    0.427122] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[    0.428677] e1000: Copyright (c) 1999-2006 Intel Corporation.
[    0.430130] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
[    0.431458] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    0.433144] sky2: driver version 1.30
[    0.434635] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.436506] ehci-pci: EHCI PCI platform driver
[    0.437709] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.439128] ohci-pci: OHCI PCI platform driver
[    0.440287] uhci_hcd: USB Universal Host Controller Interface driver
[    0.441735] usbcore: registered new interface driver usblp
[    0.443227] usbcore: registered new interface driver usb-storage
[    0.444747] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    0.447461] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.448640] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.450204] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    0.452517] rtc_cmos 00:00: RTC can wake from S4
[    0.454208] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
[    0.455713] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
[    0.458204] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: [email protected]
[    0.460337] hidraw: raw HID events driver (C) Jiri Kosina
[    0.462549] usbcore: registered new interface driver usbhid
[    0.464136] usbhid: USB HID core driver
[    0.466274] Initializing XFRM netlink socket
[    0.467803] NET: Registered protocol family 10
[    0.469429] Segment Routing with IPv6
[    0.470719] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.472227] NET: Registered protocol family 17
[    0.473381] 9pnet: Installing 9P2000 support
[    0.474734] Key type dns_resolver registered
[    0.476198] sched_clock: Marking stable (476157843, 0)->(813596432, -337438589)
[    0.478541] registered taskstats version 1
[    0.479622] Loading compiled-in X.509 certificates
[    0.481400]   Magic number: 11:819:127
[    0.482491] acpi device:02: hash matches
[    0.483552] console [netcon0] enabled
[    0.484567] netconsole: network logging started
[    0.485758] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    0.489034] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[    0.490456] ALSA device list:
[    0.491392]   No soundcards found.
[    0.492656] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[    0.495214] cfg80211: failed to load regulatory.db
[    0.576182] Freeing unused kernel memory: 1280K
[    0.577389] Write protecting the kernel read-only data: 18432k
[    0.579137] Freeing unused kernel memory: 2008K
[    0.581688] Freeing unused kernel memory: 952K
Loading, please wait...
starting version 229
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!
ppn[-1] not mapped!!!

It seems to work anyway, so I ignored that message and ran a simple fio command to test basic functionality:

fio --filename=/dev/nvme0n1 --direct=1 --rw=read --randrepeat=0 --ioengine=libaio --bs=1024k --iodepth=8 --time_based=1 --runtime=180 --name=fio_direct_read_test

But then "ppn[-1] not mapped!!!" kept repeating.

How can I stop this, or is it fine to just ignore this message?
