Bug 1360535 - No need to print debug info when qemu-img creates an image with a glusterfs backend.
Summary: No need to print debug info when qemu-img creates an image with a glusterfs backend.
Keywords:
Status: POST
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: glusterfs
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Prasanna Kumar Kalever
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-27 03:01 UTC by weliao
Modified: 2018-09-28 21:42 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:



Description weliao 2016-07-27 03:01:48 UTC
Description of problem:
When qemu-img creates an image with a glusterfs backend, it prints a large amount of debug/info output.

Version-Release number of selected component (if applicable):
glusterfs server:
3.10.0-447.el7.x86_64
glusterfs-server-3.7.9-10.el7rhgs.x86_64
glusterfs client:
3.10.0-478.el7.x86_64
qemu-img-rhev-2.6.0-15.el7.x86_64
glusterfs-3.7.9-10.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. qemu-img create -f raw gluster://10.66.9.230/test-volume/test13 1G
Formatting 'gluster://10.66.9.230/test-volume/test13', fmt=raw size=1073741824
[2016-07-27 02:51:11.503587] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0) coming up
[2016-07-27 02:51:11.503624] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-07-27 02:51:11.505733] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-1: parent translators are ready, attempting connect on transport
[2016-07-27 02:51:11.506407] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-07-27 02:51:11.508878] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-1: changing port to 49152 (from 0)
[2016-07-27 02:51:11.510252] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 02:51:11.510723] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-07-27 02:51:11.510736] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 02:51:11.510778] I [MSGID: 108005] [afr-common.c:4142:afr_notify] 0-test-volume-replicate-0: Subvolume 'test-volume-client-0' came back up; going online.
[2016-07-27 02:51:11.510971] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-07-27 02:51:11.512254] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 02:51:11.512780] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-1: Connected to test-volume-client-1, attached to remote volume '/home/brick1'.
[2016-07-27 02:51:11.512804] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 02:51:11.520229] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-1: Server lk version = 1
[2016-07-27 02:51:11.522287] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-test-volume: switched to graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0)
[2016-07-27 02:51:11.626924] I [MSGID: 114021] [client.c:2122:notify] 0-test-volume-client-0: current graph is no longer active, destroying rpc_client 
[2016-07-27 02:51:11.626986] I [MSGID: 114021] [client.c:2122:notify] 0-test-volume-client-1: current graph is no longer active, destroying rpc_client 
[2016-07-27 02:51:11.627036] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-test-volume-client-0: disconnected from test-volume-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-07-27 02:51:11.627082] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-test-volume-client-1: disconnected from test-volume-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2016-07-27 02:51:11.627121] E [MSGID: 108006] [afr-common.c:4164:afr_notify] 0-test-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2016-07-27 02:51:11.627248] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
[2016-07-27 02:51:11.627364] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=156 max=2 total=3
[2016-07-27 02:51:11.627672] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=108 max=1 total=1
[2016-07-27 02:51:11.627681] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-client-0: size=1300 max=2 total=13
[2016-07-27 02:51:11.627687] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-client-1: size=1300 max=3 total=13
[2016-07-27 02:51:11.627695] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-replicate-0: size=10548 max=2 total=10
[2016-07-27 02:51:11.627936] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-dht: size=1148 max=0 total=0
[2016-07-27 02:51:11.627974] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-dht: size=2316 max=2 total=8
[2016-07-27 02:51:11.628055] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-read-ahead: size=188 max=0 total=0
[2016-07-27 02:51:11.628062] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-readdir-ahead: size=60 max=0 total=0
[2016-07-27 02:51:11.628068] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-io-cache: size=68 max=0 total=0
[2016-07-27 02:51:11.628073] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume-io-cache: size=252 max=1 total=4
[2016-07-27 02:51:11.628080] I [io-stats.c:2951:fini] 0-test-volume: io-stats translator unloaded
[2016-07-27 02:51:11.628198] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2
[2016-07-27 02:51:11.628210] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1


Actual results:
A large amount of debug/info output is printed.

Expected results:
No debug/info output should be printed unless explicitly requested.


Additional info:

Comment 2 weliao 2016-07-27 05:40:15 UTC
If a guest is booted from a gluster-backed image, the same debug/info output is also printed:

# /usr/libexec/qemu-kvm -name cephfs_rhel7.3 -M pc -cpu SandyBridge -m 4096 -realtime mlock=off -nodefaults -smp 4 -drive file=gluster://10.66.9.230/test-volume/test8,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -kernel vmlinuz -initrd initrd.img -append method=http://download.eng.pek2.redhat.com//pub/rhel/nightly/RHEL-7.3-20160722.n.0/compose/Server/x86_64/os -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:22:33:44:92,bus=pci.0,addr=0x3,disable-legacy=off,disable-modern=off -vga qxl -spice port=5900,disable-ticketing, -monitor stdio  -boot menu=on  -qmp tcp:0:4444,server,nowait
[2016-07-27 05:38:32.309967] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0) coming up
[2016-07-27 05:38:32.310004] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-0: parent translators are ready, attempting connect on transport
[2016-07-27 05:38:32.312132] I [MSGID: 114020] [client.c:2113:notify] 0-test-volume-client-1: parent translators are ready, attempting connect on transport
[2016-07-27 05:38:32.312933] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-0: changing port to 49152 (from 0)
[2016-07-27 05:38:32.315348] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume-client-1: changing port to 49152 (from 0)
[2016-07-27 05:38:32.316572] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 05:38:32.317045] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-0: Connected to test-volume-client-0, attached to remote volume '/home/brick1'.
[2016-07-27 05:38:32.317059] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 05:38:32.317100] I [MSGID: 108005] [afr-common.c:4142:afr_notify] 0-test-volume-replicate-0: Subvolume 'test-volume-client-0' came back up; going online.
[2016-07-27 05:38:32.317283] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-0: Server lk version = 1
[2016-07-27 05:38:32.318519] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-07-27 05:38:32.319035] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume-client-1: Connected to test-volume-client-1, attached to remote volume '/home/brick1'.
[2016-07-27 05:38:32.319057] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-07-27 05:38:32.326839] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume-client-1: Server lk version = 1
[2016-07-27 05:38:32.328363] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-test-volume: switched to graph 6c6f6361-6c68-6f73-742e-6c6f63616c64 (0)
QEMU 2.6.0 monitor - type 'help' for more information
(qemu)

Comment 3 Jeff Cody 2016-09-07 06:44:06 UTC
At some point, libgfapi started becoming more verbose with debug messages, even when not requested by QEMU.  Even if the debug level is set to 0, libgfapi is still rather verbose.
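
For context, libgfapi exposes glfs_set_logging() so that a consumer such as QEMU can choose where logs go and how verbose they are. The snippet below is a minimal, self-contained sketch of such a consumer (not QEMU's actual code; the host and volume names are taken from this report). Even with the level set to 0, the INFO lines shown above are still emitted, which is the behaviour being reported.

/*
 * Minimal sketch of a libgfapi consumer requesting a quiet log level.
 * Not QEMU's actual code; host/volume names are taken from this report.
 * Build (assuming glusterfs-api-devel is installed):
 *   gcc quiet_gfapi.c -o quiet_gfapi $(pkg-config --cflags --libs glusterfs-api)
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("test-volume");
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "10.66.9.230", 24007);

    /* "-" directs log output to stderr; the last argument is the log
     * level.  A level of 0 asks for no messages, yet the INFO-level
     * lines quoted above still appear during glfs_init()/glfs_fini(),
     * which is the problem this bug describes. */
    glfs_set_logging(fs, "-", 0);

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    glfs_fini(fs);
    return 0;
}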

Comment 4 Prasanna Kumar Kalever 2016-09-18 18:44:18 UTC
weliao,

Ideally, this should be the command to use,

# qemu-img create -o debug=0 -f raw gluster://10.66.9.230/test-volume/test13 1G

Let me know if you still see logs with the above command as well.


QEMU has its "-o debug={0..9}' option from qemu-kvm-rhev-2.6.0-13.el7

If you still see logs, the issue could be that the libgfapi log level is being overridden by another value elsewhere in the gluster stack.

Let me check this and get back to you; meanwhile, can you please confirm from your side?
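
For reference, the 0..9 debug values line up with gluster's internal log levels; the enum below is an illustrative reconstruction (the authoritative definition is gf_loglevel_t in the glusterfs logging header). The "I" lines in the output above are INFO, i.e. level 7, so they should be suppressed at debug=0.

/* Illustrative reconstruction of gluster's log levels, which the
 * 0..9 debug value maps onto (see gf_loglevel_t in the glusterfs
 * logging header for the authoritative definition). */
typedef enum {
    GF_LOG_NONE,       /* 0 - nothing at all                    */
    GF_LOG_EMERG,      /* 1                                     */
    GF_LOG_ALERT,      /* 2                                     */
    GF_LOG_CRITICAL,   /* 3                                     */
    GF_LOG_ERROR,      /* 4                                     */
    GF_LOG_WARNING,    /* 5                                     */
    GF_LOG_NOTICE,     /* 6                                     */
    GF_LOG_INFO,       /* 7 - the "I" lines in the output above */
    GF_LOG_DEBUG,      /* 8                                     */
    GF_LOG_TRACE       /* 9                                     */
} gf_loglevel_t;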

Comment 5 jingzhao 2016-09-19 02:23:17 UTC
(In reply to Prasanna Kumar Kalever from comment #4)
> weliao,
> 
> Ideally, this should be the command to use,
> 
> # qemu-img create -o debug=0 -f raw gluster://10.66.9.230/test-volume/test13
> 1G
> 
> Let me know if you still see logs with the above command as well.
> 
> 
> QEMU has had its "-o debug={0..9}" option since qemu-kvm-rhev-2.6.0-13.el7.
> 
> If you still see logs, the issue could be that the libgfapi log level is
> being overridden by another value elsewhere in the gluster stack.
> 
> Let me check this and get back to you; meanwhile, can you please confirm
> from your side?

Tested on my side: the debug log is still printed even when the value is set to 0.
qemu-kvm-rhev-2.6.0-23.el7.x86_64

[root@jinzhao e1000e]# qemu-img create -o debug=0 -f raw gluster://10.66.4.211/test-volume1/test.raw 1G
Formatting 'gluster://10.66.4.211/test-volume1/test.raw', fmt=raw size=1073741824 debug=0
[2016-09-19 02:22:17.928608] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 6a696e7a-6861-6f2d-3437-37352d323031 (0) coming up
[2016-09-19 02:22:17.928629] I [MSGID: 114020] [client.c:2106:notify] 0-test-volume1-client-0: parent translators are ready, attempting connect on transport
[2016-09-19 02:22:17.932939] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-test-volume1-client-0: changing port to 49153 (from 0)
[2016-09-19 02:22:17.936958] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-test-volume1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-09-19 02:22:17.937946] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-test-volume1-client-0: Connected to test-volume1-client-0, attached to remote volume '/home/brick'.
[2016-09-19 02:22:17.937956] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-test-volume1-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-09-19 02:22:17.944618] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-test-volume1-client-0: Server lk version = 1
[2016-09-19 02:22:17.946940] I [MSGID: 104041] [glfs-resolve.c:869:__glfs_active_subvol] 0-test-volume1: switched to graph 6a696e7a-6861-6f2d-3437-37352d323031 (0)
[2016-09-19 02:22:18.053162] I [MSGID: 114021] [client.c:2115:notify] 0-test-volume1-client-0: current graph is no longer active, destroying rpc_client 
[2016-09-19 02:22:18.053263] I [MSGID: 114018] [client.c:2030:client_rpc_notify] 0-test-volume1-client-0: disconnected from test-volume1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-09-19 02:22:18.053431] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
[2016-09-19 02:22:18.053601] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=156 max=2 total=3
[2016-09-19 02:22:18.053726] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=108 max=1 total=1
[2016-09-19 02:22:18.053735] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-client-0: size=1300 max=2 total=7
[2016-09-19 02:22:18.053740] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-dht: size=1148 max=0 total=0
[2016-09-19 02:22:18.053779] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-dht: size=2284 max=2 total=8
[2016-09-19 02:22:18.053847] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-read-ahead: size=188 max=0 total=0
[2016-09-19 02:22:18.053853] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-readdir-ahead: size=52 max=0 total=0
[2016-09-19 02:22:18.053860] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-io-cache: size=68 max=0 total=0
[2016-09-19 02:22:18.053864] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-test-volume1-io-cache: size=252 max=1 total=4
[2016-09-19 02:22:18.053872] I [io-stats.c:2951:fini] 0-test-volume1: io-stats translator unloaded
[2016-09-19 02:22:18.054004] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1


Thanks
Jing Zhao

Comment 6 Prasanna Kumar Kalever 2016-09-22 10:14:01 UTC
As mentioned in comment 4, the log level was being overridden in the io-stats xlator of gluster; this issue is now fixed upstream: http://review.gluster.org/#/c/15112/
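
To illustrate the shape of the problem and the fix (this is only a sketch, not the io-stats code or the actual upstream patch): a translator that unconditionally applies its own default log level during init/reconfigure clobbers whatever level the gfapi consumer requested; the fix is to honour the consumer's level unless the volume option was explicitly set.

/* Sketch only -- not the real io-stats code or the upstream patch.
 * "option_was_set" stands in for "the log-level volume option was
 * explicitly configured by the admin". */
static void hypothetical_xlator_set_loglevel(int xlator_default_level,
                                             int option_was_set,
                                             int consumer_level,
                                             int *active_level)
{
    /* Buggy pattern: always apply the translator's default, wiping out
     * the level the gfapi consumer (e.g. QEMU with -o debug=0) asked for:
     *     *active_level = xlator_default_level;
     */

    /* Fixed pattern: only override when explicitly configured,
     * otherwise keep the level the consumer requested. */
    if (option_was_set)
        *active_level = xlator_default_level;
    else
        *active_level = consumer_level;
}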

