Bug 1354090 - Boot guest with vhostuser server mode, QEMU prompt 'Segmentation fault' after executing '(qemu)system_powerdown'
Summary: Boot guest with vhostuser server mode, QEMU prompt 'Segmentation fault' after executing '(qemu)system_powerdown'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Marc-Andre Lureau
QA Contact: Pei Zhang
URL:
Whiteboard:
Duplicates: 1355659 (view as bug list)
Depends On:
Blocks: 1356892
 
Reported: 2016-07-09 12:13 UTC by Pei Zhang
Modified: 2016-11-07 21:22 UTC
CC List: 11 users

Fixed In Version: qemu-kvm-rhev-2.6.0-15.el7
Doc Type: Bug Fix
Doc Text:
When a guest virtual machine was configured as a vhost-user server and the back end was restarted, the guest was not able to recover. With this update, the back end is able to discard the SET_VRING_BASE value and resume from index. This allows the guest to recover successfully in the described scenario.
Clone Of:
Clones: 1356892 (view as bug list)
Environment:
Last Closed: 2016-11-07 21:22:24 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2673 normal SHIPPED_LIVE qemu-kvm-rhev bug fix and enhancement update 2016-11-08 01:06:13 UTC

Description Pei Zhang 2016-07-09 12:13:07 UTC
Description of problem:
Boot a guest with vhost-user in server mode; after executing '(qemu) system_powerdown', QEMU prints 'Segmentation fault'.

Version-Release number of selected component (if applicable):
host:
3.10.0-460.el7.x86_64
qemu-kvm-rhev-2.6.0-12.el7.x86_64

guest:
3.10.0-456.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Run a slirp/vlan in a background process
# /usr/libexec/qemu-kvm \
-net none \
-net socket,vlan=0,udp=localhost:4444,localaddr=localhost:5555 \
-net user,vlan=0

2. Start qemu with vhost-user as server mode
# /usr/libexec/qemu-kvm  -m 1024 -smp 2 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc \
-chardev socket,id=char0,path=/tmp/vubr.sock,server \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=mynet1,mac=54:52:00:1a:2c:01 \
/home/pezhang/rhel7.3.qcow2 \
-monitor stdio \
-vga std -vnc :10

QEMU waiting for connection on: disconnected:unix:/tmp/vubr.sock,server

3. Start vubr as vhostuser client
# ./vhost-user-bridge -c

4. After the guest boots up, shut it down. QEMU prints the error:
(qemu) system_powerdown 
(qemu) Segmentation fault

Actual results:
QEMU crashes with a segmentation fault instead of exiting cleanly.

Expected results:
QEMU should exit cleanly, without any error output.

Additional info:
1. gdb info
(qemu) system_powerdown 
(qemu) 
Program received signal SIGSEGV, Segmentation fault.
0x00005555558f3128 in object_class_dynamic_cast (class=class@entry=0x55555945ee80, 
    typename=typename@entry=0x5555559c33f9 "qio-channel") at qom/object.c:662
662	    if (type->class->interfaces &&
...
(gdb) bt
#0  0x00005555558f3128 in object_class_dynamic_cast (class=class@entry=0x55555945ee80, 
    typename=typename@entry=0x5555559c33f9 "qio-channel") at qom/object.c:662
#1  0x00005555558f32b5 in object_class_dynamic_cast_assert (class=0x55555945ee80, 
    typename=typename@entry=0x5555559c33f9 "qio-channel", file=file@entry=0x555555a3f366 "io/channel.c", line=line@entry=60, 
    func=func@entry=0x555555a3f500 <__func__.20656> "qio_channel_writev_full") at qom/object.c:712
#2  0x000055555595b817 in qio_channel_writev_full (ioc=0x55555945ef80, iov=0x7fffffff9890, niov=1, fds=0x0, nfds=0, errp=0x0)
    at io/channel.c:60
#3  0x00005555557c6974 in io_channel_send_full (ioc=0x55555945ef80, buf=0x7fffffff99a0, len=20, fds=0x0, nfds=0)
    at qemu-char.c:966
#4  0x00005555557c6a33 in tcp_chr_write (chr=<optimized out>, buf=<optimized out>, len=<optimized out>) at qemu-char.c:2658
#5  0x00005555557c76eb in qemu_chr_fe_write_buffer (s=s@entry=0x555556c03c20, buf=buf@entry=0x7fffffff99a0 "\v", len=20, 
    offset=offset@entry=0x7fffffff9950) at qemu-char.c:250
#6  0x00005555557ca073 in qemu_chr_fe_write_all (s=s@entry=0x555556c03c20, buf=buf@entry=0x7fffffff99a0 "\v", 
    len=len@entry=20) at qemu-char.c:310
#7  0x0000555555749f1f in vhost_user_write (msg=msg@entry=0x7fffffff99a0, fds=fds@entry=0x0, fd_num=fd_num@entry=0, 
    dev=0x555556b3dc00, dev=0x555556b3dc00) at /usr/src/debug/qemu-2.6.0/hw/virtio/vhost-user.c:195
#8  0x000055555574aa2c in vhost_user_get_vring_base (dev=0x555556b3dc00, ring=0x7fffffff9ae0)
    at /usr/src/debug/qemu-2.6.0/hw/virtio/vhost-user.c:364
#9  0x00005555557481b0 in vhost_virtqueue_stop (dev=dev@entry=0x555556b3dc00, vdev=vdev@entry=0x555559c14328, 
    vq=0x555556b3dd38, idx=0) at /usr/src/debug/qemu-2.6.0/hw/virtio/vhost.c:924
#10 0x00005555557497c4 in vhost_dev_stop (hdev=hdev@entry=0x555556b3dc00, vdev=vdev@entry=0x555559c14328)
    at /usr/src/debug/qemu-2.6.0/hw/virtio/vhost.c:1290
#11 0x0000555555735c28 in vhost_net_stop_one (net=0x555556b3dc00, dev=dev@entry=0x555559c14328)
    at /usr/src/debug/qemu-2.6.0/hw/net/vhost_net.c:288
#12 0x000055555573610b in vhost_net_stop (dev=dev@entry=0x555559c14328, ncs=<optimized out>, 
    total_queues=total_queues@entry=1) at /usr/src/debug/qemu-2.6.0/hw/net/vhost_net.c:367
#13 0x0000555555733455 in virtio_net_vhost_status (status=7 '\a', n=0x555559c14328)
    at /usr/src/debug/qemu-2.6.0/hw/net/virtio-net.c:158
#14 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>) at /usr/src/debug/qemu-2.6.0/hw/net/virtio-net.c:224
---Type <return> to continue, or q <return> to quit---
#15 0x00005555558b720f in qmp_set_link (name=name@entry=0x555556b30bd0 "mynet1", up=up@entry=false, 
    errp=errp@entry=0x7fffffffbc68) at net/net.c:1368
#16 0x00005555558bd7ef in net_vhost_user_event (opaque=0x555556b30bd0, event=5) at net/vhost-user.c:226
#17 0x00005555557cc132 in qemu_chr_free (chr=0x555556c03c20) at qemu-char.c:4037
#18 0x00005555557cce7f in qemu_chr_cleanup () at qemu-char.c:4574
#19 0x00005555556c7611 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4676


2. This bug was introduced by the patches for Bug 1347077.

Comment 3 Ademar Reis 2016-07-12 16:10:56 UTC
*** Bug 1355659 has been marked as a duplicate of this bug. ***

Comment 4 Marc-Andre Lureau 2016-07-14 20:59:54 UTC
Patches are now merged upstream; the backport has been sent to rhvirt for review. Brew builds:

7.2.z:
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=11354503

7.3:
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=11354521

Comment 6 Miroslav Rezanina 2016-07-20 08:50:57 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.19

Comment 8 Miroslav Rezanina 2016-07-22 09:12:05 UTC
Fix included in qemu-kvm-rhev-2.6.0-15.el7

Comment 10 Pei Zhang 2016-07-22 09:59:48 UTC
Verification:
Versions:qemu-kvm-rhev-2.6.0-15.el7.x86_64

Steps:
1. Run a slirp/vlan in a background process
2. Start qemu with vhost-user as server mode
3. Start vubr as vhostuser client
(Steps 1-3 use the same commands and give the same results as in the Description.)

4. After the guest boots up, shut it down. QEMU exits cleanly.
(qemu) system_powerdown 
or 
in guest:
# shutdown -h now

So this bug has been fixed. Thank you.

Comment 12 errata-xmlrpc 2016-11-07 21:22:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

