Bug 1360538 - qemu-img can't create qcow2 format image with gluster backend.
Summary: qemu-img can't create qcow2 format image with gluster backend.
Keywords:
Status: CLOSED DUPLICATE of bug 1356372
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: glusterfs
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: sankarshan
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-27 03:06 UTC by weliao
Modified: 2016-08-25 09:10 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-25 09:10:35 UTC



Description weliao 2016-07-27 03:06:46 UTC
Description of problem:
qemu-img can't create qcow2 format image with gluster backend.

Version-Release number of selected component (if applicable):
glusterfs server:
3.10.0-447.el7.x86_64
glusterfs-server-3.7.9-10.el7rhgs.x86_64
glusterfs client:
3.10.0-478.el7.x86_64
qemu-img-rhev-2.6.0-15.el7.x86_64
glusterfs-3.7.9-10.el7rhgs.x86_64
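
For reference, the list above appears to mix kernel builds (the 3.10.0-* entries) with glusterfs/qemu package builds; a minimal sketch of how such version information can be collected on each host (assuming RHEL 7 hosts with rpm available):
# uname -r
# rpm -qa | grep -E 'glusterfs|qemu'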

How reproducible:
100%

Steps to Reproduce:
1. Create a qcow2-format image on a Gluster backend:
# qemu-img create -f qcow2 gluster://10.66.9.230/test-volume/test18.qcow2 1G
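When reproducing, the hang can be bounded instead of being interrupted manually; a minimal sketch using coreutils timeout (the 60-second limit is an arbitrary choice for illustration):
# timeout 60 qemu-img create -f qcow2 gluster://10.66.9.230/test-volume/test18.qcow2 1G
# echo $?
An exit status of 124 would indicate the command was killed by timeout, i.e. it hung rather than completing.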

Actual results:
Image creation fails; the command hangs and must be interrupted with Ctrl+C.

Expected results:
The image is created successfully.

Additional info:
# qemu-img create -f qcow2 gluster://10.66.9.230/test-volume/test18.qcow2 1G
Formatting 'gluster://10.66.9.230/test-volume/test18.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-07-27 03:03:05.583106] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0) coming up
[2016-07-27 03:03:05.583143] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-07-27 03:03:05.585257] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-1: parent translators are ready, attempting connect on transport
[2016-07-27 03:03:05.585926] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-07-27 03:03:05.588559] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-1: changing port to 49152 (from 0)
[2016-07-27 03:03:05.589720] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 03:03:05.590255] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-07-27 03:03:05.590268] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 03:03:05.590309] I [MSGID: 108005] [afr-common.c:4142:afr_notify] 0-test-volume-replicate-0: Subvolume 'test-volume-client-0' came back up; going online.
[2016-07-27 03:03:05.590518] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-07-27 03:03:05.591639] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 03:03:05.592241] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-1: Connected to test-volume-client-1, attached to remote volume '/home/brick1'.
[2016-07-27 03:03:05.592261] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 03:03:05.599667] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-1: Server lk version = 1
[2016-07-27 03:03:05.601487] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-test-volume: switched to graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0)
[2016-07-27 03:03:05.706629] I [MSGID: 114021] [client.c:2122:notify] 0-test-volume-client-0: current graph is no longer active, destroying rpc_client 
[2016-07-27 03:03:05.706691] I [MSGID: 114021] [client.c:2122:notify] 0-test-volume-client-1: current graph is no longer active, destroying rpc_client 
[2016-07-27 03:03:05.706739] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-test-volume-client-0: disconnected from test-volume-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-07-27 03:03:05.706755] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-test-volume-client-1: disconnected from test-volume-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2016-07-27 03:03:05.706789] E [MSGID: 108006] [afr-common.c:4164:afr_notify] 0-test-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2016-07-27 03:03:05.706933] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
[2016-07-27 03:03:05.707132] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=156 max=2 total=3
[2016-07-27 03:03:05.707325] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=108 max=1 total=1
[2016-07-27 03:03:05.707334] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-client-0: size=1300 max=2 total=13
[2016-07-27 03:03:05.707340] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-client-1: size=1300 max=3 total=13
[2016-07-27 03:03:05.707348] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-replicate-0: size=10548 max=2 total=10
[2016-07-27 03:03:05.707451] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-dht: size=1148 max=0 total=0
[2016-07-27 03:03:05.707491] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-dht: size=2316 max=2 total=8
[2016-07-27 03:03:05.707570] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-read-ahead: size=188 max=0 total=0
[2016-07-27 03:03:05.707577] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-readdir-ahead: size=60 max=0 total=0
[2016-07-27 03:03:05.707583] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-io-cache: size=68 max=0 total=0
[2016-07-27 03:03:05.707588] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-io-cache: size=252 max=1 total=4
[2016-07-27 03:03:05.707595] I [io-stats.c:2951:fini] 0-test-volume: io-stats translator unloaded
[2016-07-27 03:03:05.707722] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2
[2016-07-27 03:03:05.707765] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1
[2016-07-27 03:03:06.582364] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0) coming up
[2016-07-27 03:03:06.582401] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-07-27 03:03:06.584477] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-1: parent translators are ready, attempting connect on transport
[2016-07-27 03:03:06.585141] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-07-27 03:03:06.587310] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-1: changing port to 49152 (from 0)
[2016-07-27 03:03:06.588704] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 03:03:06.589153] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-07-27 03:03:06.589166] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 03:03:06.589208] I [MSGID: 108005] [afr-common.c:4142:afr_notify] 0-test-volume-replicate-0: Subvolume 'test-volume-client-0' came back up; going online.
[2016-07-27 03:03:06.589424] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-07-27 03:03:06.590796] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 03:03:06.591418] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-1: Connected to test-volume-client-1, attached to remote volume '/home/brick1'.
[2016-07-27 03:03:06.591432] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 03:03:06.599471] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-1: Server lk version = 1
[2016-07-27 03:03:06.601370] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-test-volume: switched to graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0)
[2016-07-27 03:03:06.603551] E [glfs-fops.c:746:glfs_io_async_cbk] (-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_writev_cbk+0x24c) [0x7fca5ae8a31c] -->/lib64/libgfapi.so.0(+0xb81d) [0x7fca7193881d] -->/lib64/libgfapi.so.0(+0xb736) [0x7fca71938736] ) 0-gfapi: invalid argument: iovec [Invalid argument]

Comment 2 weliao 2016-07-27 06:17:45 UTC
Guest installation also fails when the Gluster backend is used directly with a raw-format image. If the Gluster volume is mounted at /mnt and the image is accessed as /mnt/rhel7.raw, the installation succeeds (see the sketch at the end of this comment).

# /usr/libexec/qemu-kvm -name rhel7.3 -M pc -cpu SandyBridge -m 4096 -realtime mlock=off -nodefaults -smp 4 -drive file=gluster://10.66.9.230/test-volume/rhel7.raw,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -kernel vmlinuz -initrd initrd.img -append method=http://download.eng.pek2.redhat.com//pub/rhel/nightly/RHEL-7.3-20160722.n.0/compose/Server/x86_64/os -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:22:33:44:92,bus=pci.0,addr=0x3,disable-legacy=off,disable-modern=off -vga qxl -spice port=5900,disable-ticketing, -monitor stdio  -boot menu=on  -qmp tcp:0:4444,server,nowait
[2016-07-27 06:10:54.420734] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0) coming up
[2016-07-27 06:10:54.420772] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-07-27 06:10:54.422844] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-1: parent translators are ready, attempting connect on transport
[2016-07-27 06:10:54.423511] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-07-27 06:10:54.426070] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-1: changing port to 49152 (from 0)
[2016-07-27 06:10:54.427333] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 06:10:54.427766] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-07-27 06:10:54.427775] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 06:10:54.427804] I [MSGID: 108005] [afr-common.c:4142:afr_notify] 0-test-volume-replicate-0: Subvolume 'test-volume-client-0' came back up; going online.
[2016-07-27 06:10:54.427975] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-07-27 06:10:54.429097] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 06:10:54.429578] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-1: Connected to test-volume-client-1, attached to remote volume '/home/brick1'.
[2016-07-27 06:10:54.429587] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 06:10:54.437233] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-1: Server lk version = 1
[2016-07-27 06:10:54.439193] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-test-volume: switched to graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0)
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) c
(qemu) main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 1.942000 ms, bitrate 96722395 bps (92.241664 Mbps)
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer: 
[2016-07-27 06:11:21.720675] E [glfs-fops.c:746:glfs_io_async_cbk] (-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_fsync_cbk+0x13b) [0x7fd0ab2fcb9b] -->/lib64/libgfapi.so.0(+0xb84d) [0x7fd0c835884d] -->/lib64/libgfapi.so.0(+0xb736) [0x7fd0c8358736] ) 0-gfapi: invalid argument: iovec [Invalid argument]
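
For comparison, a minimal sketch of the FUSE-mount workaround described above (the mount point /mnt and the volume/host are taken from this report; default mount options are assumed):
# mount -t glusterfs 10.66.9.230:/test-volume /mnt
# qemu-img create -f qcow2 /mnt/test18.qcow2 1G
# qemu-img info /mnt/test18.qcow2
Going through the FUSE mount avoids the libgfapi (gluster://) code path where the failure is seen.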

Comment 3 weliao 2016-07-27 07:57:11 UTC
QE retested with the following versions:
server:
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
glusterfs-cli-3.7.9-2.el7rhgs.x86_64
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-server-3.7.9-2.el7rhgs.x86_64
glusterfs-fuse-3.7.9-2.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
userspace-rcu-0.7.9-2.el7rhgs.x86_64

client:
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
userspace-rcu-0.7.9-2.el7rhgs.x86_64

The issue does not occur with these versions, so I will add the Regression keyword and change the Component to glusterfs.

Comment 5 Chao Yang 2016-08-25 05:25:17 UTC
Hi Wei,

Could you please retry with glusterfs-3.7.9-12 and update here?
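
A minimal sketch of how the client could be updated and the reproducer re-run (assuming the glusterfs-3.7.9-12 build is available in the configured repositories):
# yum update 'glusterfs*'
# rpm -q glusterfs glusterfs-api glusterfs-libs
# qemu-img create -f qcow2 gluster://10.66.9.230/test-volume/test18.qcow2 1G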

Comment 6 weliao 2016-08-25 09:10:35 UTC
glusterfs-3.7.9-12 works well; the issue is no longer reproducible.

*** This bug has been marked as a duplicate of bug 1356372 ***

