Bug 1355927 - qemu SIGABRT when doing inactive blockcommit with external system checkpoint snapshot
Summary: qemu SIGABRT when doing inactive blockcommit with external system checkpoint snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: John Snow
QA Contact: Qianqian Zhu
Docs Contact: Jiri Herrmann
URL:
Whiteboard:
Depends On:
Blocks: 1362179
Reported: 2016-07-13 03:09 UTC by Han Han
Modified: 2016-11-07 21:24 UTC
CC List: 15 users

Fixed In Version: qemu-kvm-rhev-2.6.0-26.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-07 21:24:03 UTC


Attachments
gdb backtrace info (deleted) (2016-07-13 03:09 UTC, Han Han)
gdb backtrace for qemu-kvm-rhev-2.6.0-24.el7.x86_64 (deleted) (2016-09-13 02:55 UTC, Han Han)


Links
System ID: Red Hat Product Errata RHBA-2016:2673
Priority: normal
Status: SHIPPED_LIVE
Summary: qemu-kvm-rhev bug fix and enhancement update
Last Updated: 2016-11-08 01:06:13 UTC

Description Han Han 2016-07-13 03:09:05 UTC
Created attachment 1179045 [details]
gdb backtrace info

Description of problem:
As subject

Version-Release number of selected component (if applicable):
libvirt-2.0.0-2.el7.x86_64
qemu-kvm-rhev-2.6.0-13.el7.x86_64


How reproducible:
70%

Steps to Reproduce:
1. Start a VM, do external snapshots with memory snapshots, then do blockcommit, just like the following script:
DOM=re
path=/var/lib/libvirt/images

virt-clone -o V -n re --auto-clone
virsh start $DOM && sleep 30
for i in s{1..3};do
    virsh snapshot-create-as $DOM $i  --memspec file=$path/$DOM-mem.$i
done
virsh blockcommit $DOM vda --top $path/$DOM.s2 --base $path/$DOM.s1 --wait --verbose
if [ $? -ne 0 ];then
    echo BUG 
    exit 1
fi
#virsh blockjob $DOM vda --pivot
virsh destroy $DOM
virsh undefine $DOM --remove-all-storage --snapshots-metadata
rm $path/$DOM*

2. There is an error when doing blockcommit:
Block commit: [  1 %]error: failed to query job for disk vda
error: Unable to read from monitor: Connection reset by peer

# abrt-cli ls|head
id 794cf2e297ce12615411f1e082b23ff685503206
reason:         qemu-kvm killed by SIGABRT
time:           Wed 13 Jul 2016 10:42:35 AM CST
cmdline:        /usr/libexec/qemu-kvm -name guest=re,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-61-re/master-key.aes -machine pc-i440fx-rh$
l7.3.0,accel=kvm,usb=off,vmport=off -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 67500331-ecd4-4f24-87a4-d092607e529d -no-user-config -nodefaults -chardev socket,id=$
harmonitor,path=/var/lib/libvirt/qemu/domain-61-re/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=d$
scard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0xb -device ich9-usb-ehci1,id$
usb,bus=pci.0,addr=0x6.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -$
evice ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0xa -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -d$
ive file=/var/lib/libvirt/images/re.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,b$
otindex=1 -netdev tap,fd=49,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:84:05:6b,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charser
ial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-61-re/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,c
hardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,n
ame=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5901,addr=0.0.0.0,disable-ticketing,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram6
4_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredi
r -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -device virtio-balloon-pci,id=balloon0,bus=pci.0
,addr=0x8 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x9 -msg timestamp=on
package:        qemu-kvm-rhev-2.6.0-13.el7
uid:            107 (qemu)
Directory:      /var/spool/abrt/ccpp-2016-07-12-22:42:35-26397
Run 'abrt-cli report /var/spool/abrt/ccpp-2016-07-12-22:42:35-26397' for creating a case in Red Hat Customer Portal

Actual results:
As in step 2.

Expected results:
No SIGABRT

Additional info:
The bug blocks the RHEVM snapshot creation and deletion tests. Please fix it as soon as possible.

Comment 3 yangyang 2016-09-09 05:32:57 UTC
External system checkpoint snapshots are used by default in RHEVM. This bug blocks the snapshot deletion test in RHEVM.

Comment 4 Ademar Reis 2016-09-09 14:32:15 UTC
(In reply to yangyang from comment #3)
> External system checkpoint snapshot is used by default in RHEVM. This bug
> blocks snapshots deletion test in RHEVM

But was it working before?

I'll let John Snow better clarify it here, but my understanding is that this is not expected to work due to limitations in the current blockjobs implementation.

Comment 5 yangyang 2016-09-12 03:26:01 UTC
(In reply to Ademar Reis from comment #4)
> (In reply to yangyang from comment #3)
> > External system checkpoint snapshot is used by default in RHEVM. This bug
> > blocks snapshots deletion test in RHEVM
> 
> But was it working before?
> 
> I'll let John Snow better clarify it here, but my understanding is that this
> is not expected to work due to limitations in the current blockjobs
> implementation.

It works with qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64

Steps are as follows:
1. create 3 external system checkpoint snapshots
# virsh snapshot-create-as vm-mig s1 --memspec file=/tmp/vm-mig.mem1
Domain snapshot s1 created
# virsh snapshot-create-as vm-mig s2 --memspec file=/tmp/vm-mig.mem2
Domain snapshot s2 created
# virsh snapshot-create-as vm-mig s3 --memspec file=/tmp/vm-mig.mem3
Domain snapshot s3 created

2. do inactive blockcommit
# virsh blockcommit vm-mig vdb --top /tmp/yy/nfs2/RHEL-latest.s2 --base /tmp/yy/nfs2/RHEL-latest.s1 --wait --verbose
Block commit: [100 %]
Commit complete

Comment 6 Eric Blake 2016-09-12 15:05:15 UTC
(In reply to Han Han from comment #0)

Attempting to offer a QMP translation:

> Steps to Reproduce:
> 1. Start a VM ,do external snapshots with memory snapshots, then do
> blockcommit. Just like the following script:
> DOM=re
> path=/var/lib/libvirt/images
> 
> virt-clone -o V -n re --auto-clone
> virsh start $DOM && sleep 30
> for i in s{1..3};do
>     virsh snapshot-create-as $DOM $i  --memspec file=$path/$DOM-mem.$i

Under the hood, these steps are extending the backing chain via a combination of HMP savevm and QMP blockdev-snapshot-sync (the memory state is being saved in $path/$DOM-mem.$i, and the new disk image names are being auto-selected by libvirt).  So at the end of the loop, you roughly have:

/path/to/image/$DOM.s <- /path/to/image/$DOM.s1 <- /path/to/image/$DOM.s2 <- /path/to/image/$DOM.s3

as your backing chain.
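In QMP terms, each loop iteration issues a transaction roughly like the following (device name and file path are illustrative; the actual traffic libvirt sends is shown in comment 13):

```json
{"execute": "transaction",
 "arguments": {"actions": [
   {"type": "blockdev-snapshot-sync",
    "data": {"device": "drive-virtio-disk0",
             "snapshot-file": "/var/lib/libvirt/images/re.s1",
             "format": "qcow2"}}]}}
```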

> done
> virsh blockcommit $DOM vda --top $path/$DOM.s2 --base $path/$DOM.s1 --wait
> --verbose

This maps to QMP block-commit, but not at the active layer, followed by query-block-job as needed to wait for the job to reach completion (a non-active commit finishes automatically).  If successful, it shortens the chain to:

/path/to/image/$DOM.s <- /path/to/image/$DOM.s1 <- /path/to/image/$DOM.s3

> if [ $? -ne 0 ];then
>     echo BUG 
>     exit 1
> fi
> #virsh blockjob $DOM vda --pivot
> virsh destroy $DOM
> virsh undefine $DOM --remove-all-storage --snapshots-metadata
> rm $path/$DOM*

and this just cleans up for a retry.

Depending on how fast the for loop executed, it is feasible that $DOM.s2 has no additional guest content beyond what is in $DOM.s1, which may or may not be relevant to whether the commit job had work to do.
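For reference, the non-active commit step described above corresponds to a QMP command along these lines (device name and paths are illustrative, not taken from a capture):

```json
{"execute": "block-commit",
 "arguments": {"device": "drive-virtio-disk0",
               "top": "/var/lib/libvirt/images/re.s2",
               "base": "/var/lib/libvirt/images/re.s1"}}
```

Because top is not the active layer, the job completes on its own and no block-job-complete/pivot step is needed.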

Comment 7 John Snow 2016-09-12 23:54:16 UTC
Having a heck of a time reproducing this, but I'm actively working on it. So far I haven't had any luck reproducing this with qemu-kvm-rhev-2.6.0-24.el7 and a Fedora 24 guest, using libvirtd 1.3.3.2.

Comment 8 John Snow 2016-09-13 00:31:53 UTC
I've used upstream 2.6.0, qemu-kvm-rhev-2.6.0-13.el7, and qemu-kvm-rhev-2.6.0-24.el7 but I can't seem to get this to reproduce.

Ran over 10 times in each configuration, can't seem to trigger this BZ. Not sure where to look next -- it's possible that my libvirt version difference might be affecting something and I'll have to try a libvirt 2.0 based package to see what you are seeing.

Can you include the XML and/or an un-truncated version of the command-line (the copy-paste from the abrt-cli there appears to be truncated in places with '$' signs) so I can look for differences?

In addition, is it possible to ask for a retest to occur under qemu-kvm-rhev-2.6.0-24.el7 and confirm that the bug should be easy to reproduce on that version?

Sorry for the inconvenience.
Thank you,
--John.

Comment 9 Han Han 2016-09-13 02:55:54 UTC
Created attachment 1200332 [details]
gdb backtrace for qemu-kvm-rhev-2.6.0-24.el7.x86_64

I reproduced it via RHEVM by deleting the second of the three snapshots (with memory snapshots). The VM then powers off unexpectedly (qemu killed by SIGABRT).
Actually, the bug was originally found via RHEVM, where it is easier to reproduce. Because RHEVM's response is slow, I followed the steps in comment 1 with libvirt instead. It seems RHEVM reproduces it more easily.

Comment 11 John Snow 2016-09-14 20:05:55 UTC
Reproduced, found patches that resolve the issue for 2.7.0, backported and posted.

Comment 12 John Snow 2016-09-14 20:08:33 UTC
More info:

I finally reproduced this -- I had to add wait periods between each snapshot to ensure that each snapshot was dirty. THEN I could reproduce it.

For posterity, each snapshot from libvirt as described above induces something like the following: http://pastebin.test.redhat.com/411547

And after the snapshot commands, we get something like this as Eric says:
[base] <-- [S1] <-- [S2] <-- [S3]


From QEMU, I see bdrv_inactivate called once per snapshot; when we perform snapshot #2, we see:
bdrv_inactivate: setting BDRV_O_INACTIVE on bs(0x55afc12a9ff0)->flags (2002)

As far as I can tell, this flag is never lifted again on this BDS. Later, when we issue the block-commit:

virsh blockcommit $DOM vda --top $path/$DOM.s2 --base $path/$DOM.s1 --wait

this command wants us to write "S2" (a non-active layer) into "S1" (also non-active), and it fails almost immediately.

This is definitely fixed in 2.7.0 by Fam Zheng, who was fixing some unrelated issues with invalidate and inactivate not working as perfect opposites of each other; see:
https://lists.nongnu.org/archive/html/qemu-devel/2016-05/msg01572.html

Comment 13 Kashyap Chamarthy 2016-09-14 21:39:41 UTC
(In reply to John Snow from comment #12)
> More info:
> 
> I finally reproduced this -- I had to add wait periods between each snapshot
> to ensure that each snapshot was dirty. THEN I can reproduce this.
> 
> For posterity, each snapshot from libvirt as described above induces
> something like the following: http://pastebin.test.redhat.com/411547

[Meta-comment: Paste bins expire :-).  For later reference, posting the content (QMP traffic between libvirt and QEMU) from the above paste-bin here:
-------------------------------------------------------------------------------
2016-09-13 19:22:32.836+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"stop","id":"libvirt-94"}' for write with FD -1
2016-09-13 19:22:33.025+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"migrate_set_speed","arguments":{"value":9223372036853727232},"id":"libvirt-95"}' for write with FD -1
2016-09-13 19:22:33.029+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"getfd","arguments":{"fdname":"migrate"},"id":"libvirt-96"}' for write with FD 27
2016-09-13 19:22:33.030+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"migrate","arguments":{"detach":true,"blk":false,"inc":false,"uri":"fd:migrate"},"id":"libvirt-97"}' for write with FD -1
2016-09-13 19:22:36.588+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"query-migrate","id":"libvirt-98"}' for write with FD -1
2016-09-13 19:22:36.597+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"migrate_set_speed","arguments":{"value":9223372036853727232},"id":"libvirt-99"}' for write with FD -1
2016-09-13 19:22:41.902+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"transaction","arguments":{"actions":[{"type":"blockdev-snapshot-sync","data":{"device":"drive-virtio-disk0","snapshot-file":"/media/ext/img/han.s2","format":"qcow2"}}]},"id":"libvirt-100"}' for write with FD -1
2016-09-13 19:22:41.939+0000: 32527: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"cont","id":"libvirt-101"}' for write with FD -1
------------------------------------------------------------------------------- 


[...]

Comment 14 Miroslav Rezanina 2016-09-20 12:29:03 UTC
Fix included in qemu-kvm-rhev-2.6.0-26.el7

Comment 16 Qianqian Zhu 2016-09-21 08:50:12 UTC
Reproduced with:
qemu-kvm-rhev-debuginfo-2.6.0-24.el7.x86_64
libvirt-2.0.0-9.el7.x86_64
kernel-3.10.0-509.el7.x86_64

Steps:
virsh # snapshot-create-as win2k8r2 s1 --memspec file=/tmp/vm-mig.mem1 
Domain snapshot s1 created
virsh # snapshot-create-as win2k8r2 s2 --memspec file=/tmp/vm-mig.mem2 
Domain snapshot s2 created
virsh # snapshot-create-as win2k8r2 s3 --memspec file=/tmp/vm-mig.mem3 
Domain snapshot s3 created
virsh # blockcommit win2k8r2 sda --top /mntnfs/win2008-r2-64-virtio-scsi.s2  --base /mntnfs/win2008-r2-64-virtio-scsi.s1 --wait --verbose

Result:
Block commit: [  0 %]error: failed to query job for disk sda
error: Unable to read from monitor: Connection reset by peer

Verified with:
qemu-kvm-rhev-2.6.0-26.el7.x86_64
libvirt-2.0.0-9.el7.x86_64
kernel-3.10.0-509.el7.x86_64

Same steps as above.

Result:
virsh # blockcommit win2k12r2 sda --top /mntnfs/win2012-64r2-virtio-scsi.s2   --base /mntnfs/win2012-64r2-virtio-scsi.1474447010  --wait --verbose
Block commit: [100 %]
Commit complete

Moving to verified based on the above result.

Comment 18 errata-xmlrpc 2016-11-07 21:24:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

