Bug 1518543 - "qemu-img info" hangs to get information of the NBD disk image which is in use
Summary: "qemu-img info" hangs to get information of the NBD disk image which is in use
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Eric Blake
QA Contact: Tingting Mao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-29 07:16 UTC by yilzhang
Modified: 2019-02-22 22:11 UTC (History)
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug



Description yilzhang 2017-11-29 07:16:20 UTC
Description of problem:
"qemu-img info" hangs to get information of the NBD disk image which is in use

Version-Release number of selected component (if applicable):
Host kernel:   3.10.0-797.el7.ppc64le
Guest kernel:  3.10.0-797.el7.ppc64le
qemu-kvm-rhev: qemu-kvm-rhev-2.10.0-8.el7

How reproducible: 100%


Steps to Reproduce:
1. Export an image file on NBD server side
# qemu-img create -f qcow2  -o preallocation=full   /home/nbd_dataimage_0.qcow2  1G
#  qemu-nbd -f raw  /home/nbd_dataimage_0.qcow2    -p 9000 -t &

2. On NBD client side, get info of this NBD image
# qemu-img info  nbd:10.16.69.87:9000
image: nbd://10.16.69.87:9000
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: unavailable
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

3. Boot up one guest, and use the above NBD image as one data disk
-device virtio-scsi-pci,bus=pci.0,id=scsi1 \
-drive file=nbd:10.16.69.87:9000,if=none,cache=none,id=drive_ddisk_2,format=qcow2,werror=stop,rerror=stop \
-device scsi-hd,drive=drive_ddisk_2,bus=scsi1.0,id=ddisk_2 \

4. Get info of this NBD image again
# qemu-img info  nbd:10.16.69.87:9000


Actual results:
qemu-img hangs in step 4

Expected results:
qemu-img should return instead of hanging


Additional info:
This problem also occurs on x86:
Host kernel:   3.10.0-799.el7.x86_64
qemu-kvm-rhev: qemu-kvm-rhev-2.10.0-9.el7
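The hang can be modeled in miniature without qemu: when the server has already granted the export to an exclusive writer, the second client's TCP connection is accepted but the NBD handshake never completes, so a read with no timeout blocks forever. A minimal sketch with a hypothetical stand-in server (plain sockets, not qemu-nbd):

```python
import socket
import threading

ready = threading.Event()
port = []

def silent_server():
    # Stand-in for a busy NBD server: accept the TCP connection but
    # never send the handshake greeting, which is how the hang
    # presents itself to the client.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    threading.Event().wait(5)  # hold the connection open, send nothing
    conn.close()
    srv.close()

threading.Thread(target=silent_server, daemon=True).start()
ready.wait()

cli = socket.socket()
cli.settimeout(1.0)  # without a timeout, recv() below blocks forever
cli.connect(("127.0.0.1", port[0]))
try:
    cli.recv(8)
    outcome = "got data"
except socket.timeout:
    outcome = "timed out"
cli.close()
print(outcome)
```

qemu-img performs the equivalent of the untimed read, which is why step 4 never returns.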

Comment 2 Eric Blake 2017-11-29 13:59:02 UTC
This is a fundamental limitation of the NBD protocol - only the server can control whether an image is read-only or read-write, and it is up to the server to decide how many simultaneous read-write connections are permitted (in qemu's case, at most 1).  So as long as there is a read-write client connected to the server, no other client can connect read-only.
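The admission policy described above can be sketched as a toy model (not qemu's actual code; `connect` returning False stands in for the real server never completing the handshake, which the client experiences as a hang):

```python
class ExportPolicy:
    """Toy model of an NBD server's client-admission policy:
    at most one read-write client, and no other client of any
    kind while that writer is connected."""

    def __init__(self):
        self.writer_connected = False
        self.readers = 0

    def connect(self, read_only):
        # An exclusive writer blocks every later client.
        if self.writer_connected:
            return False
        if read_only:
            self.readers += 1
            return True
        # Write access is not granted while readers are attached.
        if self.readers:
            return False
        self.writer_connected = True
        return True

policy = ExportPolicy()
granted_guest = policy.connect(read_only=False)   # guest attaches read-write
granted_info = policy.connect(read_only=True)     # qemu-img info arrives later
print(granted_guest, granted_info)
```

In this model the later read-only client is simply refused; the enhancement discussed here would let it be admitted alongside the writer instead.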

That said, I think this may be worth an extension to the NBD protocol to permit a client to inform the server that it only wants a read-only connection; at that point the server can be enhanced to allow read-only clients in parallel with a read-write client, so that qemu-img info could "work" for such an enhanced server (note that "work" is a relative term - just as with file op-blockers, you can't trust a read-only connection to always be consistent with regard to guest-visible contents if the writing client is in the middle of making modifications).  I'll propose the idea upstream, and see what the NBD community thinks.

Comment 3 Eric Blake 2017-11-29 16:00:22 UTC
Extension proposal:
https://lists.debian.org/nbd/2017/nbd-201711/msg00046.html

While writing that up, I note that qemu's current implementation does not even bother to allow parallel connections (where secondary connections are marked NBD_FLAG_READ_ONLY), which is possible even without the extension; so there is certainly room for improvement in upstream qemu regardless of what upstream NBD says.
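For reference, NBD_FLAG_READ_ONLY is one of the per-export transmission flags the server sends in the handshake, and decoding it is how a client learns that an export is read-only. A sketch using the flag values defined by the NBD protocol specification:

```python
# Per-export transmission flags from the NBD protocol specification.
NBD_FLAG_HAS_FLAGS  = 1 << 0
NBD_FLAG_READ_ONLY  = 1 << 1
NBD_FLAG_SEND_FLUSH = 1 << 2
NBD_FLAG_SEND_FUA   = 1 << 3

def is_read_only(transmission_flags: int) -> bool:
    # A server marking a secondary connection read-only would set
    # NBD_FLAG_READ_ONLY in the 16-bit flags field of the handshake.
    return bool(transmission_flags & NBD_FLAG_READ_ONLY)

# A writable export advertising flush support:
writable = is_read_only(NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_FLUSH)
# A read-only secondary connection:
readonly = is_read_only(NBD_FLAG_HAS_FLAGS | NBD_FLAG_READ_ONLY)
print(writable, readonly)
```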

