Bug 1058046 - glusterd stop exits with a non-zero code when it should exit with zero
Summary: glusterd stop exits with a non-zero code when it should exit with zero
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: scripts
Version: 3.4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-26 17:47 UTC by Ted Miller
Modified: 2015-10-07 13:50 UTC (History)
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-07 13:49:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
host install log (deleted)
2014-01-26 18:24 UTC, Ted Miller

Description Ted Miller 2014-01-26 17:47:31 UTC
Description of problem:
Re-install fails (clean install untested) if glusterd is running when the reinstall starts.

Reinstall succeeds only after executing "service glusterd stop" on the host.

Version-Release number of selected component (if applicable):
all hosts & engine running on CentOS 6.5, just updated
storage: glusterfs
ovirt-engine     3.3.0.1-1.el6
ovirt-engine-lib 3.3.2-1.el6
ovirt-host-deploy.noarch 1.1.3-1.el6


How reproducible: always


Steps to Reproduce:
1. start with host "up"
2. put host into "maintenance"
3. click GUI "reinstall" link
4. GUI has message "Host installation failed. Fix installation issues and try to Re-Install"
5. GUI "Events" shows:
    Host office2a installation failed. Command returned failure code 1 during SSH session 'root@10.41.65.2'.
    Installing Host office2a. Stage: Termination.
    Installing Host office2a. Retrieving installation logs to: '/var/log/ovirt-engine/host-deploy/ovirt-20140126122137-10.41.65.2-6075adb2.log'.
    Installing Host office2a. Stage: Pre-termination.
    Failed to install Host office2a. Failed to execute stage 'Closing up': Command '/sbin/service' failed to execute.
    Installing Host office2a. Starting gluster.

Actual results:
1. GUI has message "Host installation failed. Fix installation issues and try to Re-Install"
2. GUI "Events" shows:
    Host office2a installation failed. Command returned failure code 1 during SSH session 'root@10.41.65.2'.
    Installing Host office2a. Stage: Termination.
    Installing Host office2a. Retrieving installation logs to: '/var/log/ovirt-engine/host-deploy/ovirt-20140126122137-10.41.65.2-6075adb2.log'.
    Installing Host office2a. Stage: Pre-termination.
    Failed to install Host office2a. Failed to execute stage 'Closing up': Command '/sbin/service' failed to execute.
    Installing Host office2a. Starting gluster.

Expected results:
Host in "up" condition

Additional info:
2 hosts, separate engine

Comment 1 Alon Bar-Lev 2014-01-26 18:03:01 UTC
It would be wise to attach /var/log/ovirt-engine/host-deploy/ovirt-20140126122137-10.41.65.2-6075adb2.log.

Comment 2 Ted Miller 2014-01-26 18:24:08 UTC
Created attachment 855721 [details]
host install log

Attaching host re-install log

Comment 3 Alon Bar-Lev 2014-01-26 18:34:55 UTC
(In reply to Ted Miller from comment #2)
> Created attachment 855721 [details]
> host install log
> 
> Attaching host re-install log

Thanks.

2014-01-26 12:21:37 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:366 execute: ('/sbin/service', 'glusterd', 'stop'), executable='None', cwd='None', env=None
2014-01-26 12:21:37 DEBUG otopi.plugins.otopi.services.rhel plugin.executeRaw:383 execute-result: ('/sbin/service', 'glusterd', 'stop'), rc=1
2014-01-26 12:21:37 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:441 execute-output: ('/sbin/service', 'glusterd', 'stop') stdout:
Stopping glusterd:[  OK  ]

2014-01-26 12:21:37 DEBUG otopi.plugins.otopi.services.rhel plugin.execute:446 execute-output: ('/sbin/service', 'glusterd', 'stop') stderr:


service glusterd stop is failing, although it should always return 0 unless the stop actually fails.

Please update the bug with the gluster version you actually use.
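The failure in the log above can be read as a hard check on the stop command's exit status. A minimal sketch of that check, assuming nothing about otopi's real internals (check_stop is a hypothetical name, not an otopi function):

```shell
# Treat any non-zero exit status from the stop command as a hard failure,
# the way the deploy tool's 'Closing up' stage does.
check_stop() {
    "$@"                       # run the stop command and capture its rc
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "Command '$1' failed to execute (rc=$rc)" >&2
        return 1
    fi
    echo "stop via '$1' succeeded"
}

# On the host this corresponds to:
#   check_stop /sbin/service glusterd stop
```

With this reading, the "[  OK  ]" in stdout is irrelevant: only the rc=1 matters to the deploy tool.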

Comment 4 Ted Miller 2014-01-26 18:53:53 UTC
(In reply to Alon Bar-Lev from comment #3)
> please update the gluster version you actually use.

yum list installed gluster*
Loaded plugins: fastestmirror, presto, priorities, security
Loading mirror speeds from cached hostfile
 * base: mirror.wiredtree.com
 * epel: epel.mirror.constant.com
 * extras: centos.mbni.med.umich.edu
 * updates: mirror.wiredtree.com
Installed Packages
glusterfs.x86_64                3.4.2-1.el6         @glusterfs-epel
glusterfs-api.x86_64            3.4.2-1.el6         @glusterfs-epel
glusterfs-cli.x86_64            3.4.2-1.el6         @glusterfs-epel
glusterfs-fuse.x86_64           3.4.2-1.el6         @glusterfs-epel
glusterfs-libs.x86_64           3.4.2-1.el6         @glusterfs-epel
glusterfs-rdma.x86_64           3.4.2-1.el6         @glusterfs-epel
glusterfs-server.x86_64         3.4.2-1.el6         @glusterfs-epel

Comment 5 Ted Miller 2014-01-26 19:06:05 UTC
Observation: this seems to be a glusterfs problem on the host.

[root@office2a ~]$ service glusterd stop
[root@office2a ~]$                                        [  OK  ]
[root@office2a ~]$ service glusterd status
glusterd dead but subsys locked
[root@office2a ~]$ service glusterd restart
Starting glusterd:                                         [  OK  ]
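The "dead but subsys locked" message arises from the standard SysV status logic: the process is gone, but its /var/lock/subsys entry was never removed. A simplified sketch of that logic (not the actual /etc/init.d/functions code):

```shell
# Report daemon status the way a SysV init 'status' action does:
# running process beats everything; otherwise a leftover subsys lock
# means the daemon died (or was stopped) without cleaning up.
status_sketch() {
    name=$1
    if pidof "$name" >/dev/null 2>&1; then
        echo "$name is running..."
    elif [ -e "/var/lock/subsys/$name" ]; then
        echo "$name dead but subsys locked"
    else
        echo "$name is stopped"
    fi
}
```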

Comment 6 Ted Miller 2014-01-26 19:12:25 UTC
Same result on other host:

[root@office4a ~]$ service glusterd stop
[root@office4a ~]$                                         [  OK  ]
[root@office4a ~]$ service glusterd status
glusterd dead but subsys locked
[root@office4a ~]$ service glusterd start
Starting glusterd:                                         [  OK  ]
[root@office4a ~]$ service glusterd status
glusterd (pid  23730) is running...

Comment 7 Ted Miller 2014-01-26 19:31:32 UTC
Reinstall: Did not finish, error messages as before.

[root@office2a ~]$ service glusterd status
glusterd dead but subsys locked
[root@office2a ~]$ service glusterd stop
[root@office2a ~]$ echo $?
0
[root@office2a ~]$ service glusterd status
glusterd dead but subsys locked
[root@office2a ~]$

Re-install now finished

One thing I note: when glusterd had been running (Comment 5), the "service glusterd stop" command produced an "[OK]" (on the following/wrong line). When glusterd was "dead", no "[OK]" was produced, although the exit code appears to be 0.
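The transcripts above point at the expected contract: per SysV convention, "stop" must be idempotent, exiting 0 even when the daemon is already dead, and it should clear the stale subsys lock so "status" stops reporting "dead but subsys locked". A sketch of that pattern (assumed names; not the shipped glusterd init script):

```shell
# Idempotent SysV-style stop: success whether or not the daemon is running,
# always dropping the /var/lock/subsys entry. LOCKFILE is overridable so the
# sketch can be exercised without touching /var/lock.
LOCKFILE="${LOCKFILE:-/var/lock/subsys/glusterd}"

stop_sketch() {
    pid=$(pidof glusterd 2>/dev/null)
    if [ -z "$pid" ]; then
        rm -f "$LOCKFILE"      # daemon already gone: remove the stale lock
        return 0               # and still report success
    fi
    kill "$pid" && rm -f "$LOCKFILE"
}
```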

Comment 9 Niels de Vos 2015-05-17 22:01:12 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 10 Kaleb KEITHLEY 2015-10-07 13:49:43 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.

Comment 11 Kaleb KEITHLEY 2015-10-07 13:50:53 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.

