Bug 1365721 - Some nvml built-in tests failed with NVDIMM device in guest
Summary: Some nvml built-in tests failed with NVDIMM device in guest
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Stefan Hajnoczi
QA Contact: Yumei Huang
URL:
Whiteboard:
Duplicates: 1539541
Depends On:
Blocks:
 
Reported: 2016-08-10 03:16 UTC by Yumei Huang
Modified: 2018-12-11 14:40 UTC
CC List: 14 users

Fixed In Version: qemu-kvm-rhev-2.12.0-20.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-11 14:40:33 UTC



Description Yumei Huang 2016-08-10 03:16:38 UTC
Description of problem:
When QE runs the nvml built-in tests with an NVDIMM device in the guest, some tests fail.
Failed tests:
  obj_pool_lock/TEST0 ---- Failed
  pmem_is_pmem/TEST1 ---- Failed
  obj_tx_add_range/TEST2 ---- Timeout

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.6.0-18.el7
kernel-3.10.0-484.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Boot guest with NVDIMM device backed by a file in host
 /usr/libexec/qemu-kvm -name rhel73 -m 8G,slots=240,maxmem=20G -smp 16 \
 -realtime mlock=off -no-user-config -nodefaults \
 -drive file=/home/guest/rhel7.3.qcow2,if=none,id=drive-disk,format=qcow2,cache=none -device virtio-scsi-pci,id=scsi0,disable-legacy=off,disable-modern=off -device scsi-hd,drive=drive-disk,bus=scsi0.0,id=scsi-hd0 \
 -usb -device usb-tablet,id=input0 -netdev tap,id=hostnet1 -device virtio-net-pci,mac=42:ce:a9:d2:4d:d9,id=idlbq7eA,netdev=hostnet1 \
 -vga qxl -spice port=5901,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on -monitor stdio \
 -machine pc,nvdimm=on -object memory-backend-file,id=mem1,share,mem-path=/dev/pmem0,size=4G -device nvdimm,memdev=mem1,id=nv1 \
 -numa node -numa node

2. Mount /dev/pmem0 in guest
 # mkfs.xfs /dev/pmem0
 # mount -o dax /dev/pmem0 /mnt/pmem

3. Download nvml src code and make test
 # git clone https://github.com/pmem/nvml.git 
 # cd nvml 
 # make
 # make test
 # cd src/test
 # cp testconfig.sh.example testconfig.sh
 Modify testconfig.sh:
    add "PMEM_FS_DIR=/mnt/pmem" 

4. Run the tests under nvml/src/test
 # ./RUNTESTS                 (runs all tests and stops at the first failure)
 # ./RUNTESTS pmem_is_pmem    (or run tests one by one)


Actual results:
# ./RUNTESTS pmem_is_pmem
pmem_is_pmem/TEST1: SETUP (check/pmem/debug)
pmem_is_pmem/TEST1: START: pmem_is_pmem
pmem_is_pmem/TEST1 crashed (signal 6). err1.log below.
{pmem_is_pmem.c:91 main} pmem_is_pmem/TEST1: Error: assertion failure: ret[0] (0x1) == ret[i] (0x0)
{ut_backtrace.c:193 ut_sighandler} pmem_is_pmem/TEST1: 

{ut_backtrace.c:194 ut_sighandler} pmem_is_pmem/TEST1: Signal 6, backtrace:
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 0: ./pmem_is_pmem() [0x404293]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 1: ./pmem_is_pmem() [0x404388]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 2: /lib64/libc.so.6(+0x35250) [0x7f5bbab7f250]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f5bbab7f1d7]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 4: /lib64/libc.so.6(abort+0x148) [0x7f5bbab808c8]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 5: ./pmem_is_pmem() [0x402c31]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 6: ./pmem_is_pmem() [0x401bdf]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 7: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f5bbab6bb35]
{ut_backtrace.c:144 ut_dump_backtrace} pmem_is_pmem/TEST1: 8: ./pmem_is_pmem() [0x401809]
{ut_backtrace.c:196 ut_sighandler} pmem_is_pmem/TEST1: 

out1.log below.
pmem_is_pmem/TEST1 out1.log pmem_is_pmem/TEST1: START: pmem_is_pmem
pmem_is_pmem/TEST1 out1.log  ./pmem_is_pmem /mnt/pmem/test_pmem_is_pmem1/testfile1

pmem1.log below.
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <1> [out.c:241 out_init] pid 710: program: /home/src/nvml/src/test/pmem_is_pmem/pmem_is_pmem
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <1> [out.c:243 out_init] libpmem version 1.0
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <1> [out.c:244 out_init] src version SRCVERSION:1.1-313-gcfddcb2
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [mmap.c:59 util_mmap_init] 
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [libpmem.c:56 libpmem_init] 
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:1162 pmem_init] 
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:1100 pmem_get_cpuinfo] clflush supported
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:1132 pmem_get_cpuinfo] using clflush
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:1137 pmem_get_cpuinfo] movnt supported
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:1149 pmem_get_cpuinfo] using movnt
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:455 pmem_is_pmem_init] 
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem.c:455 pmem_is_pmem_init] 
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 0
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 1
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 0
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [pmem_linux.c:135 is_pmem_proc] returning 0
pmem_is_pmem/TEST1 pmem1.log <libpmem>: <3> [libpmem.c:69 libpmem_fini] 

RUNTESTS: stopping: pmem_is_pmem/TEST1 failed, TEST=check FS=pmem BUILD=debug
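
For illustration, a minimal hedged sketch of the kind of consistency the failing assertion (ret[0] == ret[i]) appears to check; this is NOT the bundled test (that lives in src/test/pmem_is_pmem/), it assumes a libpmem that provides pmem_map_file(), and the /mnt/pmem/testfile path is only an example.

/*
 * Hypothetical sketch: map a file on the DAX filesystem and verify that
 * pmem_is_pmem() returns the same answer for the whole mapping and for
 * each page-sized sub-range.  The log above shows is_pmem_proc()
 * returning a mix of 1 and 0 over the same mapping.
 * Build with: cc -o is_pmem_check is_pmem_check.c -lpmem
 */
#include <stdio.h>
#include <libpmem.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /mnt/pmem/testfile\n", argv[0]);
        return 1;
    }

    size_t mapped_len;
    int is_pmem;
    void *addr = pmem_map_file(argv[1], 16 << 20, PMEM_FILE_CREATE,
        0644, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    int whole = pmem_is_pmem(addr, mapped_len);
    for (size_t off = 0; off < mapped_len; off += 4096) {
        int part = pmem_is_pmem((char *)addr + off, 4096);
        if (part != whole) {
            fprintf(stderr, "inconsistent result at offset %zu: %d vs %d\n",
                off, part, whole);
            pmem_unmap(addr, mapped_len);
            return 1;
        }
    }

    printf("pmem_is_pmem() consistent over the mapping: %d\n", whole);
    pmem_unmap(addr, mapped_len);
    return 0;
}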

# ./RUNTESTS obj_tx_add_range
obj_tx_add_range/TEST0: SETUP (check/pmem/debug)
obj_tx_add_range/TEST0: START: obj_tx_add_range
obj_tx_add_range/TEST0: PASS
obj_tx_add_range/TEST1: SKIP not compiled with support for Valgrind pmemcheck
obj_tx_add_range/TEST2: SETUP (check/pmem/debug)
obj_tx_add_range/TEST2: START: obj_tx_add_range
RUNTESTS: stopping: obj_tx_add_range/TEST2 timed out, TEST=check FS=pmem BUILD=debug


# ./RUNTESTS obj_pool_lock
obj_pool_lock/TEST0: SETUP (check/pmem/debug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/pmem/nondebug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/pmem/static-debug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/pmem/static-nondebug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/non-pmem/debug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/non-pmem/nondebug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/non-pmem/static-debug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0: PASS
obj_pool_lock/TEST0: SETUP (check/non-pmem/static-nondebug)
obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0 crashed (signal 6). err0.log below.
{obj_pool_lock.c:97 test_open_in_different_process} obj_pool_lock/TEST0: Error: create: Resource temporarily unavailable
{ut_backtrace.c:193 ut_sighandler} obj_pool_lock/TEST0: 

{ut_backtrace.c:194 ut_sighandler} obj_pool_lock/TEST0: Signal 6, backtrace:
{obj_pool_lock.c:90 test_open_in_different_process} obj_pool_lock/TEST0: Error: pmemobj_open after fork failed but for unexpected reason: Invalid argument
{ut_backtrace.c:193 ut_sighandler} obj_pool_lock/TEST0: 

{ut_backtrace.c:194 ut_sighandler} obj_pool_lock/TEST0: Signal 6, backtrace:
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 0: ./obj_pool_lock.static-nondebug() [0x4187ca]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 1: ./obj_pool_lock.static-nondebug() [0x4188bf]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 2: /lib64/libc.so.6(+0x35250) [0x7f0782b09250]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f0782b091d7]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 4: /lib64/libc.so.6(abort+0x148) [0x7f0782b0a8c8]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 5: ./obj_pool_lock.static-nondebug() [0x417241]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 6: ./obj_pool_lock.static-nondebug() [0x4031aa]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 7: ./obj_pool_lock.static-nondebug() [0x4032ed]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 8: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f0782af5b35]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 9: ./obj_pool_lock.static-nondebug() [0x402e92]
{ut_backtrace.c:196 ut_sighandler} obj_pool_lock/TEST0: 

{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 0: ./obj_pool_lock.static-nondebug() [0x4187ca]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 1: ./obj_pool_lock.static-nondebug() [0x4188bf]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 2: /lib64/libc.so.6(+0x35250) [0x7f0782b09250]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 3: /lib64/libc.so.6(gsignal+0x37) [0x7f0782b091d7]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 4: /lib64/libc.so.6(abort+0x148) [0x7f0782b0a8c8]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 5: ./obj_pool_lock.static-nondebug() [0x417241]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 6: ./obj_pool_lock.static-nondebug() [0x40315c]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 7: ./obj_pool_lock.static-nondebug() [0x4032ed]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 8: /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f0782af5b35]
{ut_backtrace.c:144 ut_dump_backtrace} obj_pool_lock/TEST0: 9: ./obj_pool_lock.static-nondebug() [0x402e92]
{ut_backtrace.c:196 ut_sighandler} obj_pool_lock/TEST0: 

out0.log below.
obj_pool_lock/TEST0 out0.log obj_pool_lock/TEST0: START: obj_pool_lock
obj_pool_lock/TEST0 out0.log  ./obj_pool_lock.static-nondebug /tmp/test_obj_pool_lock0/testfile

RUNTESTS: stopping: obj_pool_lock/TEST0 failed, TEST=check FS=non-pmem BUILD=static-nondebug
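
For illustration, a hedged sketch of the fork-then-open pattern that test_open_in_different_process appears to exercise; this is NOT the bundled obj_pool_lock test, and the pool path and layout name are illustrative only. The errors above suggest the test expected EAGAIN ("Resource temporarily unavailable") but saw EINVAL ("Invalid argument") instead.

/*
 * Hypothetical sketch: create a pmemobj pool, fork, and try to open the
 * same pool from the child while the parent still holds it open.
 * Build with: cc -o pool_fork pool_fork.c -lpmemobj
 */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <libpmemobj.h>

#define LAYOUT_NAME "example_layout"   /* illustrative layout name */

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s poolfile\n", argv[0]);
        return 1;
    }

    PMEMobjpool *pop = pmemobj_create(argv[1], LAYOUT_NAME,
        PMEMOBJ_MIN_POOL, 0644);
    if (pop == NULL) {
        perror("pmemobj_create");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        pmemobj_close(pop);
        return 1;
    }
    if (pid == 0) {
        /* child: the pool is still open in the parent */
        PMEMobjpool *pop2 = pmemobj_open(argv[1], LAYOUT_NAME);
        if (pop2 != NULL) {
            fprintf(stderr, "unexpected: second open succeeded\n");
            pmemobj_close(pop2);
            _exit(1);
        }
        printf("child: pmemobj_open failed: %s\n", strerror(errno));
        _exit(errno == EAGAIN ? 0 : 1);
    }

    int status;
    waitpid(pid, &status, 0);
    pmemobj_close(pop);
    return WEXITSTATUS(status);
}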


Expected results:
All the tests passed.

Additional info:

Comment 2 Stefan Hajnoczi 2016-08-17 15:58:46 UTC
(In reply to Yumei Huang from comment #0)

Thanks for posting these failed nvml test cases.

They aren't obvious to me, so I will bring them to the attention of Guangrong Xiao, who is writing the patches upstream.

Let's leave this BZ open to track these test failures, but it doesn't need to block RHEL 7.3 since NVDIMM is not yet feature-complete.

Comment 3 Marcin Ślusarz 2016-08-18 18:18:59 UTC
See https://github.com/pmem/issues/issues/207

Comment 7 Stefan Hajnoczi 2017-05-15 15:58:12 UTC
I reproduced the failures here too.  NVDIMM is still under development.  I'm moving this bug to RHEL 7.5 so it will be checked again in the future.

Comment 8 Stefan Hajnoczi 2017-11-29 15:33:23 UTC
Hi Yumei,
Please rerun the NVDIMM test suite so we know the status for this release cycle.

Eventually upstream will get all tests passing.  We don't need to do anything except continue to track the progress for now.

Thank you!

Comment 9 Yumei Huang 2017-11-30 06:27:42 UTC
Hi Stefan,
I hit the error "PMEM_FS_DIR=/mnt/pmem does not point to a PMEM device" when running the test suite, even though I did mount /dev/pmem0 at /mnt/pmem in the guest.

Details:
qemu-kvm-rhev-2.10.0-9.el7

Cmdline:
# /usr/libexec/qemu-kvm -m 8G,slots=240,maxmem=20G -smp 16 \
 rhel75-64-virtio.qcow2 -M pc,nvdimm=on \
 -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/nv0,size=4G -device nvdimm,memdev=mem1,id=nv1 \
 -netdev tap,id=hostnet1 -device virtio-net-pci,mac=42:ce:a9:d2:4d:d9,id=idlbq7eA,netdev=hostnet1 \
 -no-user-config -nodefaults -numa node -numa node -usb -device usb-tablet,id=input0 -vga qxl -vnc :4 -monitor stdio

In guest:

# ll /dev/pmem0 
brw-rw----. 1 root disk 259, 0 Nov 30 13:59 /dev/pmem0

# mkfs.xfs /dev/pmem0

# mount -o dax /dev/pmem0  /mnt/pmem/

# cat nvml/src/test/testconfig.sh
...
PMEM_FS_DIR=/mnt/pmem
...

# cd nvml/src/test
# ./RUNTESTS
...
error: PMEM_FS_DIR=/mnt/pmem does not point to a PMEM device
RUNTESTS: stopping: blk_nblock/TEST0 failed, TEST=check FS=any BUILD=debug

Comment 10 pagupta 2017-11-30 06:59:08 UTC
Hi,

Can you please try with this option enabled?


# If you don't have real PMEM or PMEM emulation set up, but still want to test
# PMEM codepaths uncomment this line. It will set PMEM_IS_PMEM_FORCE to 1 for
# tests that require pmem.
#

PMEM_FS_DIR_FORCE_PMEM=1
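
For context, PMEM_FS_DIR_FORCE_PMEM=1 makes the test framework export PMEM_IS_PMEM_FORCE=1, an environment variable that libpmem honors. A minimal hedged C illustration follows (not part of the test suite; the path is just an example):

/*
 * Illustration: pmem_is_pmem() honors PMEM_IS_PMEM_FORCE.  Run as
 *     PMEM_IS_PMEM_FORCE=1 ./force_check /mnt/pmem/testfile
 * and it should report 1 even when the mapping is not true pmem.
 * Build with: cc -o force_check force_check.c -lpmem
 */
#include <stdio.h>
#include <libpmem.h>

int main(int argc, char *argv[])
{
    size_t mapped_len;
    int is_pmem;

    void *addr = pmem_map_file(argc > 1 ? argv[1] : "/mnt/pmem/testfile",
        8 << 20, PMEM_FILE_CREATE, 0644, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    printf("pmem_is_pmem() = %d\n", pmem_is_pmem(addr, mapped_len));
    pmem_unmap(addr, mapped_len);
    return 0;
}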

Comment 11 Yumei Huang 2017-11-30 08:08:16 UTC
(In reply to pagupta from comment #10)
> Hi,
> 
> Can you please try with this option enabled?
> 
> 
> # If you don't have real PMEM or PMEM emulation set up, but still want to
> test
> # PMEM codepaths uncomment this line. It will set PMEM_IS_PMEM_FORCE to 1 for
> # tests that require pmem.
> #
> 
> PMEM_FS_DIR_FORCE_PMEM=1

With this option enabled, all tests passed except "obj_ctl_prefault" and "ex_libpmemobj_cpp".

# ./RUNTESTS ex_libpmemobj_cpp
ex_libpmemobj_cpp/TEST0: SETUP (check/pmem/debug)
../unittest/unittest.sh: line 727: ../../examples/libpmemobj/cpp/queue: No such file or directory
ex_libpmemobj_cpp/TEST0 failed with exit code 127.
RUNTESTS: stopping: ex_libpmemobj_cpp/TEST0 failed, TEST=check FS=pmem BUILD=debug


# ./RUNTESTS obj_ctl_prefault
obj_ctl_prefault/TEST0: SETUP (check/pmem/debug)
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: PASS			[00.173 s]
obj_ctl_prefault/TEST0: SETUP (check/pmem/nondebug)
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: PASS			[00.068 s]
obj_ctl_prefault/TEST0: SETUP (check/pmem/static-debug)
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: PASS			[00.188 s]
obj_ctl_prefault/TEST0: SETUP (check/pmem/static-nondebug)
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: PASS			[00.062 s]
obj_ctl_prefault/TEST0: SETUP (check/non-pmem/debug)
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
obj_ctl_prefault/TEST0: START: obj_ctl_prefault
RUNTESTS: stopping: obj_ctl_prefault/TEST0 failed, TEST=check FS=non-pmem BUILD=debug

Comment 12 pagupta 2017-11-30 09:47:05 UTC
I guess you have the libpmemobj-devel and libpmemobj packages and other dependencies installed?

Pankaj

Comment 13 Yumei Huang 2017-12-01 03:08:54 UTC
(In reply to pagupta from comment #12)
> I guess you have the libpmemobj-devel and libpmemobj packages and other
> dependencies installed?
> 
> Pankaj

Yes, I have installed the libpmemobj-devel and libpmemobj packages in the guest. 

I don't think the failed cases are caused by missing dependencies. Cases are skipped rather than failed when a dependency or configuration is missing, e.g.

out_err_mt/TEST1: SKIP valgrind-devel package (ver 3.7 or later) required

pmempool_check/TEST20: SKIP DEVICE_DAX_PATH does not specify enough dax devices (min: 1)

pmem_movnt_align/TEST4: SKIP not compiled with support for Valgrind pmemcheck


And I don't understand why PMEM_FS_DIR_FORCE_PMEM needs to be enabled for the test. Is it meant to check whether the test suite works when there is no pmem device? 

Thanks,
Yumei Huang

Comment 14 pagupta 2017-12-01 12:01:28 UTC
Hello Yumei,

Thanks for testing this. It looks like the latest NVML tests only consider
device DAX to be a pmem device. Since we are not using device DAX, we are
getting this error. See the commit below:

https://github.com/pmem/nvml/commit/9831acece3d1883afc764f31ca6f754b6ee44a1d#diff-573f7ef9c3a0387876f4d5e345241b8e

> 
> Yes, I have installed the libpmemobj-devel and libpmemobj packages in the
> guest. 
> 
> I don't think the failed cases are caused by missing dependencies. Cases are
> skipped rather than failed when a dependency or configuration is missing, e.g.
> 
> out_err_mt/TEST1: SKIP valgrind-devel package (ver 3.7 or later) required
> 
> pmempool_check/TEST20: SKIP DEVICE_DAX_PATH does not specify enough dax
> devices (min: 1)
> 
> pmem_movnt_align/TEST4: SKIP not compiled with support for Valgrind pmemcheck
> 
> 
> And I don't understand why PMEM_FS_DIR_FORCE_PMEM needs to be enabled for the
> test. Is it meant to check whether the test suite works when there is no pmem
> device? 

The reason is that filesystem DAX does not guarantee data persistence until
userspace does an explicit msync/fsync (prior to MAP_SYNC). This document explains some of the details:

http://pmem.io/nvml/libpmem/

The case is different again for KVM, because we are testing a 'fake DAX' range here,
which is just an mmapped file on the host side that requires an explicit flush from the guest.
That feature is not upstream yet, and without real hardware there is no device DAX either. So the better option is to test with 'PMEM_FS_DIR_FORCE_PMEM'.
 
Thanks,
Pankaj
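
A minimal sketch of the flushing distinction described in the comment above, assuming libpmem's documented pmem_is_pmem()/pmem_persist()/pmem_msync() API; the helper name store_durably is illustrative only.

/*
 * When pmem_is_pmem() reports true, stores can be made durable from
 * user space with pmem_persist(); otherwise (e.g. filesystem DAX
 * without MAP_SYNC, or the "fake DAX" case) the data must go through
 * pmem_msync() so the kernel flushes it.
 */
#include <string.h>
#include <libpmem.h>

void store_durably(void *dst, const void *src, size_t len, int is_pmem)
{
    memcpy(dst, src, len);

    if (is_pmem)
        pmem_persist(dst, len);  /* CPU cache flush from user space */
    else
        pmem_msync(dst, len);    /* falls back to msync(2) */
}

This is why the test suite refuses to treat a plain FS-DAX directory as PMEM unless forced with PMEM_FS_DIR_FORCE_PMEM.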

Comment 15 Jeff Moyer 2017-12-01 15:35:23 UTC
(In reply to Yumei Huang from comment #13)

> And I don't understand why need to enable PMEM_FS_DIR_FORCE_PMEM for the
> test. Is it trying to find out if the test suite works well when no pmem
> device? 

The NVML code really wants to know if it can flush to persistence safely from userspace (w/o calling msync).  Right now, file systems still require fsync/msync to ensure that metadata is flushed to the storage.

There's a patch set that recently went in upstream to allow applications to specify a MAP_SYNC parameter to mmap.  That will allow nvml to flush to persistence without calling msync (and at that point in time, you won't have to use the force flag for the tests).

The reason device dax is considered "pmem" is that there is no file system, so flush from userspace is safe.
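
A minimal hedged sketch of the MAP_SYNC mapping referred to above, assuming a kernel and glibc new enough to define MAP_SYNC and MAP_SHARED_VALIDATE (older glibc may need <linux/mman.h>); the file path is just an example.

/*
 * With a MAP_SYNC mapping the filesystem guarantees that the metadata
 * needed to reach the data is persistent, so user-space cache flushes
 * are enough and msync() is not required.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/testfile", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    void *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
        MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (addr == MAP_FAILED) {
        /* EOPNOTSUPP here means the fs/device cannot honor MAP_SYNC */
        perror("mmap(MAP_SYNC)");
        close(fd);
        return 1;
    }

    /* ... stores to addr can be made durable with CPU cache flushes ... */

    munmap(addr, 4096);
    close(fd);
    return 0;
}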

Comment 16 Yumei Huang 2017-12-04 05:45:35 UTC
Thanks, Pankaj and Jeff, for the explanation; it helps QE understand this better. We will enable the 'PMEM_FS_DIR_FORCE_PMEM' option for the current testing. Thanks!

Comment 17 Robert Hoo 2018-02-01 08:54:05 UTC
*** Bug 1539541 has been marked as a duplicate of this bug. ***

Comment 18 belinda 2018-08-09 06:54:46 UTC
Reproduced on RHEL7.6 Alpha

Comment 19 belinda 2018-09-06 08:27:51 UTC
Reproduced on RHEL7.6 Beta. 
Note: when running ./RUNTESTS, the error is reproduced both natively and in a VM:
      obj_ctl_prefault/TEST0 failed, TEST=check FS=non-pmem BUILD=debug

Comment 20 belinda 2018-10-29 06:37:43 UTC
Not reproduced on RHEL7.6 RC.

Comment 21 Ademar Reis 2018-12-10 22:13:27 UTC
Based on latest comments, this has been fixed in RHEL-7.6 (comment #10, comment #20 and others). Can you please retest?

Comment 22 Yumei Huang 2018-12-11 08:39:25 UTC
Tested against qemu-kvm-rhev-2.12.0-20.el7 with PMEM_FS_DIR_FORCE_PMEM=1: all tests passed or were skipped, with no errors.

