
Bug 1360461

Summary: gdeploy hangs if device has a filesystem signature
Product: Red Hat Gluster Storage
Component: gdeploy
Version: rhgs-3.1
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Reporter: Cal Calhoun <ccalhoun>
Assignee: Sachidananda Urs <surs>
QA Contact: Manisha Saini <msaini>
Docs Contact:
CC: jliedy, rcyriac, rhinduja, smohan
Keywords: ZStream
Target Milestone: ---
Target Release: RHGS 3.1.3 Async
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version: gdeploy-2.0.1-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-07 11:34:07 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1351522

Attachments:
  Gdeploy conf file (flags: none)
  Gdeploy output (flags: none)

Description Cal Calhoun 2016-07-26 19:56:50 UTC
Description of problem:

The device was formatted with btrfs.

Without first running wipefs, gdeploy hangs indefinitely at PV creation with no error.

Once wipefs was used to remove the filesystem signature, gdeploy worked without issue.
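
For reference, the manual workaround used here clears the stale signature before running gdeploy; a minimal sketch (the device name is the one from this report and will differ on other systems):

List the signatures present on the device:
# wipefs /dev/sdb

Erase all signatures so PV creation can proceed:
# wipefs -a /dev/sdb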

Version-Release number of selected component (if applicable):

RHEL7, gdeploy 2.0

How reproducible:

Consistently

Comment 2 Jonathan Liedy 2016-07-27 16:17:57 UTC
This bug is based on Red Hat support case #01674530.

Comment 4 Sachidananda Urs 2016-08-24 16:37:00 UTC
https://github.com/gluster/gdeploy/commit/b36d268 fixes the issue.

Comment 5 Manisha Saini 2016-10-17 10:38:46 UTC
PV creation fails with gdeploy when the device is formatted with a btrfs filesystem.

Steps:
1. Format the devices with a btrfs file system:
# mkfs.btrfs /dev/sdb /dev/sdc

# btrfs filesystem show
Label: none  uuid: 7860bda1-5e7a-479c-b0af-f481ba8a14ff
	Total devices 2 FS bytes used 112.00KiB
	devid    1 size 5.00GiB used 1.53GiB path /dev/sdb
	devid    2 size 5.00GiB used 1.51GiB path /dev/sdc

2. Run the gdeploy script to create the PV and VG.

Observation:

PV creation fails with the following error:

failed: [10.70.37.97] (item=/dev/sdb) => {"failed": true, "failed_when_result": true, "item": "/dev/sdb", "msg": "WARNING: btrfs signature detected on /dev/sdb at offset 65600. Wipe it? [y/n]: n\n  Aborted wiping of btrfs.\n  1 existing signature left on the device.\n  Aborting pvcreate on /dev/sdb.\n", "rc": 5}

# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch

# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch

Attaching the gdeploy conf file and its output.

Comment 6 Manisha Saini 2016-10-17 10:39:25 UTC
Created attachment 1211302 [details]
Gdeploy conf file

Comment 7 Manisha Saini 2016-10-17 10:39:55 UTC
Created attachment 1211303 [details]
Gdeploy output

Comment 8 Sachidananda Urs 2016-10-18 05:43:37 UTC
Manisha, since wiping a filesystem signature is a risky operation, gdeploy does not wipe filesystem signatures by default.

wipefs=yes should be set in the [backend-setup] or [pv] section (a [pv] sketch follows the example below).

For example:

[hosts]
10.70.42.166
10.70.41.241

[backend-setup]
devices=vdb
wipefs=yes

This should be used when working with btrfs-formatted devices. If `wipefs' is left out, it is treated as `no'.
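
Equivalently, if the configuration uses a separate [pv] section instead of [backend-setup], a minimal sketch would look like the following (the action/devices keys are assumed to follow gdeploy's usual pv module syntax):

[pv]
action=create
devices=vdb
wipefs=yes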

Your configuration should be:

[hosts]
10.70.37.202
10.70.37.97

[backend-setup]
devices=sdb,sdc
vgs=vg1,vg2
pools=pool1,pool2
lvs=lv1,lv2
wipefs=yes
mountpoints=/mnt/data1,/mnt/data2
brick_dirs=/mnt/data1/1,/mnt/data2/2

[volume]
action=create
volname=vol1
replica=yes
replica_count=2
force=yes

[clients]
action=mount
volname=vol1
hosts=10.70.37.137
fstype=glusterfs
client_mount_points=/mnt/gg1/
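
Saving the above as a file and running gdeploy against it is then a single command; a usage sketch (the file name gluster.conf is only an example):

# gdeploy -c gluster.conf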

Comment 9 Manisha Saini 2016-10-18 06:56:28 UTC
With the wipefs=yes option set in the config file, the btrfs filesystem signature is wiped and PV creation is successful.

Hence marking this bug as Verified.

# rpm -qa | grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch

# rpm -qa | grep gdeploy
gdeploy-2.0.1-2.el7rhgs.noarch

Comment 11 errata-xmlrpc 2017-02-07 11:34:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html