Bug 1509789 - The output of the "gluster help" command is difficult to read
Summary: The output of the "gluster help" command is difficult to read
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On: 1474768 1509786
Blocks: 1498730
 
Reported: 2017-11-06 04:24 UTC by Nithya Balachandran
Modified: 2017-12-08 17:45 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1509786
Environment:
Last Closed: 2017-12-08 17:45:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Nithya Balachandran 2017-11-06 04:24:51 UTC
+++ This bug was initially created as a clone of Bug #1509786 +++

+++ This bug was initially created as a clone of Bug #1474768 +++

Description of problem:

Running "gluster help" returns 67 lines of text. It is difficult to find the information one is looking for. 

Version-Release number of selected component (if applicable):


How reproducible:
Consistently

Steps to Reproduce:
1. Run "gluster help"
2.
3.

Actual results:

[root@rhgs313-7 ~]# gluster help
volume info [all|<VOLNAME>] - list information of all volumes
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force] - create a new volume of specified type with mentioned bricks
volume delete <VOLNAME> - delete volume specified by <VOLNAME>
volume start <VOLNAME> [force] - start volume specified by <VOLNAME>
volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
volume tier <VOLNAME> status
volume tier <VOLNAME> start [force]
volume tier <VOLNAME> stop
volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
volume tier <VOLNAME> detach <start|stop|status|commit|[force]>
 - Tier translator specific operations.
volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>... - NOTE: this is old syntax, will be deprecated in next release. Please use gluster volume tier <vol> attach [<replica COUNT>] <NEW-BRICK>...
volume detach-tier <VOLNAME>  <start|stop|status|commit|force> - NOTE: this is old syntax, will be deprecated in next release. Please use gluster volume tier <vol> detach {start|stop|commit} [force]
volume add-brick <VOLNAME> [<stripe|replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force> - remove brick from volume <VOLNAME>
volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}} - rebalance operations
volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force} - replace-brick operations
volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>
volume help - display help for the volume command
volume log <VOLNAME> rotate [BRICK] - rotate the log file for corresponding volume/brick
volume log rotate <VOLNAME> [BRICK] - rotate the log file for corresponding volume/brick NOTE: This is an old syntax, will be deprecated from next release.
volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
volume reset <VOLNAME> [option] [force] - reset all the reconfigured options
volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...] - Geo-sync operations
volume profile <VOLNAME> {start|info [peek|incremental [peek]|cumulative|clear]|stop} [nfs] - volume profile operations
volume quota <VOLNAME> {enable|disable|list [<path> ...]| list-objects [<path> ...] | remove <path>| remove-objects <path> | default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {limit-objects <path> <number> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>} - quota translator specific operations
volume inode-quota <VOLNAME> enable - quota translator specific operations
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |
volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>] - volume top operations
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad|tierd]] [detail|clients|mem|inode|fd|callpool|tasks] - display status of all or specified volume(s)/brick
volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [healed | heal-failed | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}] - self-heal commands on volume specified by <VOLNAME>
volume statedump <VOLNAME> [[nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client <hostname:process-id>]] - perform statedump on bricks
volume list - list all volumes in cluster
volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path
volume barrier <VOLNAME> {enable|disable} - Barrier/unbarrier file operations on a volume
volume get <VOLNAME|all> <key|all> - Get the value of the all options or given option for volume <VOLNAME> or all option. gluster volume get all all is to get all global options
volume bitrot <VOLNAME> {enable|disable} |
volume bitrot <volname> scrub-throttle {lazy|normal|aggressive} |
volume bitrot <volname> scrub-frequency {hourly|daily|weekly|biweekly|monthly} |
volume bitrot <volname> scrub {pause|resume|status|ondemand} - Bitrot translator specific operation. For more information about bitrot command type  'man gluster'
volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK> commit}} - reset-brick operations
peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
peer status - list status of peers
peer help - Help command for peer 
pool list - list all the nodes in the pool (including localhost)
quit - quit
help - display command options
exit - exit
snapshot help - display help for snapshot commands
snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force] - Snapshot Create.
snapshot clone <clonename> <snapname> - Snapshot Clone.
snapshot restore <snapname> - Snapshot Restore.
snapshot status [(snapname | volume <volname>)] - Snapshot Status.
snapshot info [(snapname | volume <volname>)] - Snapshot Info.
snapshot list [volname] - Snapshot List.
snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])| ([activate-on-create <enable|disable>]) - Snapshot Config.
snapshot delete (all | snapname | volume <volname>) - Snapshot Delete.
snapshot activate <snapname> [force] - Activate snapshot volume.
snapshot deactivate <snapname> - Deactivate snapshot volume.
global help - list global commands
get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail|volumeoptions] - Get local state representation of mentioned daemon

Expected results:


Additional info:

--- Additional comment from Nithya Balachandran on 2017-07-25 08:13:57 EDT ---

Proposed: Split the commands up into classes and display those. The user then has the option to view the help for a single class of commands. For instance:
"gluster help"  will return:

help all           - display all commands                                       
peer help          - display all trusted storage pool management commands          
volume help        - display all volume management commands                     
volume bitrot help - display all volume bitrot commands                         
volume quota help  - display all volume quota commands                          
volume tier help   - display all volume tier commands                           
snapshot help      - display all snapshot commands


"gluster peer help" will return:

peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
peer status - list status of peers
peer help - Help command for peer 
pool list - list all the nodes in the pool (including localhost)


and so on.
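
For reference, a minimal C sketch of the class table this proposal implies. All names here are hypothetical illustration, not the actual gluster CLI source:

#include <stdio.h>

/* Hypothetical command-class table for the proposed top-level help:
 * each class of commands gets one summary line. */
struct help_class {
    const char *cmd;  /* what the user types           */
    const char *desc; /* one-line summary of the class */
};

static const struct help_class classes[] = {
    { "help all",           "display all commands" },
    { "peer help",          "display all trusted storage pool management commands" },
    { "volume help",        "display all volume management commands" },
    { "volume bitrot help", "display all volume bitrot commands" },
    { "volume quota help",  "display all volume quota commands" },
    { "volume tier help",   "display all volume tier commands" },
    { "snapshot help",      "display all snapshot commands" },
};

/* Top-level "gluster help": print one aligned line per class. */
int main(void)
{
    size_t i;
    for (i = 0; i < sizeof(classes) / sizeof(classes[0]); i++)
        printf("%-18s - %s\n", classes[i].cmd, classes[i].desc);
    return 0;
}

The top-level help then reduces to one aligned line per class, and each class keeps its own detailed listing.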

--- Additional comment from Nithya Balachandran on 2017-08-01 06:21:13 EDT ---

I see the following issues with the help output:

1. Some of the commands listed do not have a description displayed. These include the quota, tier and bitrot commands.
2. Non-uniform spacing of options:

volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |

vs

volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica 

3. Entries are not in alphabetical order (see the sketch below for 2 and 3)
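
A sketch of how issues 2 and 3 could be fixed in C: sort the entry table with qsort(3) and print through a fixed-width format. The entry table below is a made-up excerpt, not the gluster source:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct help_entry {
    const char *syntax; /* command syntax        */
    const char *desc;   /* one-line description  */
};

/* Deliberately out of order, like the current output. */
static struct help_entry entries[] = {
    { "volume top <VOLNAME> ...",       "volume top operations" },
    { "volume heal <VOLNAME> ...",      "self-heal commands on volume" },
    { "volume add-brick <VOLNAME> ...", "add brick to volume" },
};

static int cmp_entry(const void *a, const void *b)
{
    const struct help_entry *ea = a;
    const struct help_entry *eb = b;
    return strcmp(ea->syntax, eb->syntax); /* issue 3: alphabetical order */
}

int main(void)
{
    size_t i, n = sizeof(entries) / sizeof(entries[0]);

    qsort(entries, n, sizeof(entries[0]), cmp_entry);
    for (i = 0; i < n; i++)
        /* issue 2: uniform spacing via a fixed-width column */
        printf("%-35s - %s\n", entries[i].syntax, entries[i].desc);
    return 0;
}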

--- Additional comment from Worker Ant on 2017-08-01 07:32:22 EDT ---

REVIEW: https://review.gluster.org/17944 (cli: gluster help changes) posted (#1) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Worker Ant on 2017-10-09 11:01:08 EDT ---

REVIEW: https://review.gluster.org/17944 (cli: gluster help changes) posted (#2) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Nithya Balachandran on 2017-10-09 11:06:50 EDT ---

The current patch implements the following behaviour:


[root@server]# gluster help

Gluster help commands

 peer help                -  display help for peer commands
 volume help              -  display help for volume commands
 volume bitrot help       -  display help for volume bitrot commands
 volume quota help        -  display help for volume quota commands
 volume tier help         -  display help for volume tier commands
 snapshot help            -  display help for snapshot commands
 global help              -  list global commands


[root@server]# gluster help all

gluster peer commands
======================

peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
peer help - display help for peer commands
peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
peer status - list status of peers
pool list - list all the nodes in the pool (including localhost)



gluster volume commands
========================

volume add-brick <VOLNAME> [<stripe|replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
volume barrier <VOLNAME> {enable|disable} - Barrier/unbarrier file operations on a volume
volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force] - create a new volume of specified type with mentioned bricks
volume delete <VOLNAME> - delete volume specified by <VOLNAME>
volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...] - Geo-sync operations
volume get <VOLNAME|all> <key|all> - Get the value of the all options or given option for volume <VOLNAME> or all option. gluster volume get all all is to get all global options
volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [summary | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}] - self-heal commands on volume specified by <VOLNAME>
volume help - display help for volume commands
volume info [all|<VOLNAME>] - list information of all volumes
volume list - list all volumes in cluster
volume log <VOLNAME> rotate [BRICK] - rotate the log file for corresponding volume/brick
volume log rotate <VOLNAME> [BRICK] - rotate the log file for corresponding volume/brick NOTE: This is an old syntax, will be deprecated from next release.
volume profile <VOLNAME> {start|info [peek|incremental [peek]|cumulative|clear]|stop} [nfs] - volume profile operations
volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}} - rebalance operations
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force> - remove brick from volume <VOLNAME>
volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force} - replace-brick operations
volume reset <VOLNAME> [option] [force] - reset all the reconfigured options
volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK> commit}} - reset-brick operations
volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>
volume start <VOLNAME> [force] - start volume specified by <VOLNAME>
volume statedump <VOLNAME> [[nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client <hostname:process-id>]] - perform statedump on bricks
volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad|tierd]] [detail|clients|mem|inode|fd|callpool|tasks|client-list] - display status of all or specified volume(s)/brick
volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |
volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>] - volume top operations



gluster bitrot commands
========================

volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand} - Pause/resume the scrubber for <VOLNAME>. Status displays the status of the scrubber. ondemand starts the scrubber immediately.
volume bitrot <VOLNAME> scrub-frequency {hourly|daily|weekly|biweekly|monthly} - Set the frequency of the scrubber for volume <VOLNAME>
volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive} - Set the speed of the scrubber for volume <VOLNAME>
volume bitrot <VOLNAME> {enable|disable} - Enable/disable bitrot for volume <VOLNAME>
volume bitrot help - display help for volume bitrot commands



gluster quota commands
=======================

volume inode-quota <VOLNAME> enable - Enable/disable inode-quota for <VOLNAME>
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>} - Set quota timeout for <VOLNAME>
volume quota <VOLNAME> {enable|disable|list [<path> ...]| list-objects [<path> ...] | remove <path>| remove-objects <path> | default-soft-limit <percent>} - Enable/disable and configure quota for <VOLNAME>
volume quota <VOLNAME> {limit-objects <path> <number> [<percent>]} - Set the maximum number of entries allowed in <path> for <VOLNAME>
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} - Set maximum size for <path> for <VOLNAME>
volume quota help - display help for volume quota commands



gluster tier commands
======================

volume tier <VOLNAME> start [force] - Start the tier service for <VOLNAME>
volume tier <VOLNAME> status - Display tier status for <VOLNAME>
volume tier <VOLNAME> stop [force] - Stop the tier service for <VOLNAME>
volume tier help - display help for volume tier commands



gluster snapshot commands
=========================

snapshot activate <snapname> [force] - Activate snapshot volume.
snapshot clone <clonename> <snapname> - Snapshot Clone.
snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])| ([activate-on-create <enable|disable>]) - Snapshot Config.
snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force] - Snapshot Create.
snapshot deactivate <snapname> - Deactivate snapshot volume.
snapshot delete (all | snapname | volume <volname>) - Snapshot Delete.
snapshot help - display help for snapshot commands
snapshot info [(snapname | volume <volname>)] - Snapshot Info.
snapshot list [volname] - Snapshot List.
snapshot restore <snapname> - Snapshot Restore.
snapshot status [(snapname | volume <volname>)] - Snapshot Status.


get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail|volumeoptions] - Get local state representation of mentioned daemon
global help - list global commands



[root@server]# gluster peer help

gluster peer commands
======================

peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
peer help - display help for peer commands
peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
peer status - list status of peers
pool list - list all the nodes in the pool (including localhost)
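
The behaviour shown above amounts to dispatching on the words before "help". A hypothetical sketch of that routing follows; the actual implementation is the patch under review at https://review.gluster.org/17944, not this code:

#include <stdio.h>
#include <string.h>

/* Hypothetical per-component help printers. */
static void top_help(void)    { puts(" peer help                -  display help for peer commands"); }
static void peer_help(void)   { puts("gluster peer commands\n======================"); }
static void volume_help(void) { puts("gluster volume commands\n========================"); }

/* Route "gluster [<component>] help" to the matching printer. */
int main(int argc, char **argv)
{
    if (argc < 2 || strcmp(argv[argc - 1], "help") != 0)
        return 1;                          /* not a help request        */
    if (argc == 2)
        top_help();                        /* gluster help              */
    else if (strcmp(argv[1], "peer") == 0)
        peer_help();                       /* gluster peer help         */
    else if (strcmp(argv[1], "volume") == 0)
        volume_help();                     /* gluster volume help, etc. */
    return 0;
}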

--- Additional comment from Worker Ant on 2017-10-16 11:27:22 EDT ---

REVIEW: https://review.gluster.org/17944 (cli: gluster help changes) posted (#3) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Worker Ant on 2017-10-17 10:48:08 EDT ---

REVIEW: https://review.gluster.org/17944 (cli: gluster help changes) posted (#4) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Worker Ant on 2017-10-23 01:47:35 EDT ---

REVIEW: https://review.gluster.org/17944 (cli: gluster help changes) posted (#5) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Worker Ant on 2017-10-23 02:33:05 EDT ---

REVIEW: https://review.gluster.org/17944 (cli: gluster help changes) posted (#6) for review on master by N Balachandran (nbalacha@redhat.com)

--- Additional comment from Worker Ant on 2017-11-02 08:54:40 EDT ---

COMMIT: https://review.gluster.org/17944 committed in master with a commit message- cli: gluster help changes

gluster cli help now shows only the top level
help commands. gluster <component> help will now show
help commands for <component>.

Change-Id: I263f53a0870d80ef4cfaad455fdaa47e2ac4423b
BUG: 1474768
Signed-off-by: N Balachandran <nbalacha@redhat.com>

--- Additional comment from Worker Ant on 2017-11-05 23:16:35 EST ---

REVIEW: https://review.gluster.org/18666 (cli: gluster help changes) posted (#1) for review on release-3.12 by N Balachandran

Comment 1 Worker Ant 2017-11-06 04:35:34 UTC
REVIEW: https://review.gluster.org/18667 (cli: gluster help changes) posted (#1) for review on release-3.13 by N Balachandran

Comment 2 Worker Ant 2017-11-16 14:38:12 UTC
COMMIT: https://review.gluster.org/18667 committed in release-3.13 by "N Balachandran" <nbalacha@redhat.com> with a commit message- cli: gluster help changes

gluster cli help now shows only the top level
help commands. gluster <component> help will now show
help commands for <component>.

> BUG: 1474768
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
(cherry picked from commit 89dc54f50c9f800ca4446ea8fe736e4860588845)
Change-Id: I263f53a0870d80ef4cfaad455fdaa47e2ac4423b
BUG: 1509789
Signed-off-by: N Balachandran <nbalacha@redhat.com>

Comment 3 Shyamsundar 2017-12-08 17:45:38 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

