Bug 1367817 - Help for vdsClient for glusterVolumehealInfo has unreadable formatting
Summary: Help for vdsClient for glusterVolumehealInfo has unreadable formatting
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Documentation
Version: 4.18.10
Hardware: All
OS: All
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.1.0-beta
Target Release: 4.19.2
Assignee: Ramesh N
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-17 14:38 UTC by Lukas Svaty
Modified: 2017-02-01 14:38 UTC
CC List: 2 users

Fixed In Version: vdsm-gluster-4.19.1-24
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-01 14:38:34 UTC
oVirt Team: Gluster
rule-engine: ovirt-4.1+
rule-engine: planning_ack+
rnachimu: devel_ack+
rule-engine: testing_ack+


Attachments


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 63480 master MERGED gluster: fix help test for glusterVolumehealInfo in vdsClient 2016-09-14 13:21:47 UTC

Description Lukas Svaty 2016-08-17 14:38:12 UTC
Description of problem:
vdsClient help for glusterVolumeHealInfo is displayed with each character on a new line.

Version-Release number of selected component (if applicable):
vdsm-4.18.10-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. vdsClient | grep -A 150 glusterVolumeHealInfo

Actual results:
glusterVolumeHealInfo
	[
	v
	o
	l
	u
	m
	e
	N
	a
	m
	e
	=
	<
	v
	o
	l
	u
	m
	e
	_
	n
	a
	m
	e
	>
	]
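
This character-per-line output is the typical Python symptom of passing a bare string where the help printer expects a sequence of lines, since iterating over a str yields single characters. A minimal sketch of that pattern follows; the print_help function and the registration calls are hypothetical illustrations, not the actual vdsClient code or the merged fix:

# Hypothetical illustration of the symptom, not the actual vdsClient source.
def print_help(name, help_lines):
    # The printer expects an iterable of lines and prints each one indented.
    print(name)
    for line in help_lines:
        print('\t%s' % line)

# Buggy registration: a bare string. Iterating over it yields characters,
# reproducing the one-character-per-line output shown above.
print_help('glusterVolumeHealInfo', '[volumeName=<volume_name>]')

# Corrected registration: a tuple of lines, matching the other gluster verbs.
print_help('glusterVolumeHealInfo', ('[volumeName=<volume_name>]',
                                     '<volume_name> is existing volume name',
                                     'lists self-heal info for the gluster volume'))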

Comment 1 RamaKasturi 2017-01-27 10:27:49 UTC
Verified and works fine with build vdsm-gluster-4.19.2-2.el7ev.noarch.

Executed the command "vdsClient | grep -A 150 glusterVolumeHealInfo" and I do not see the output displayed with each character on a new line.

[root@rhsqa-grafton4 ~]# vdsClient | grep -A 150 glusterVolumeHealInfo
glusterVolumeHealInfo
	[volumeName=<volume_name>]
	<volume_name> is existing volume name 
	lists self-heal info for the gluster volume
glusterVolumeProfileInfo
	volumeName=<volume_name> [nfs={yes|no}]
	<volume_name> is existing volume name
	get gluster volume profile info
glusterVolumeProfileStart
	volumeName=<volume_name>
	<volume_name> is existing volume name
	start gluster volume profile
glusterVolumeProfileStop
	volumeName=<volume_name>
	<volume_name> is existing volume name
	stop gluster volume profile
glusterVolumeRebalanceStart
	volumeName=<volume_name> [rebalanceType=fix-layout] [force={yes|no}]
	<volume_name> is existing volume name
	start volume rebalance
glusterVolumeRebalanceStatus
	volumeName=<volume_name>
	<volume_name> is existing volume name
	get volume rebalance status
glusterVolumeRebalanceStop
	volumeName=<volume_name> [force={yes|no}]
	<volume_name> is existing volume name
	stop volume rebalance
glusterVolumeRemoveBrickCommit
	volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
	<volume_name> is existing volume name
	<brick[,brick, ...]> is existing brick(s)
	commit volume remove bricks
glusterVolumeRemoveBrickForce
	volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
	<volume_name> is existing volume name
	<brick[,brick, ...]> is existing brick(s)
	force volume remove bricks
glusterVolumeRemoveBrickStart
	volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
	<volume_name> is existing volume name
	<brick[,brick, ...]> is existing brick(s)
	start volume remove bricks
glusterVolumeRemoveBrickStatus
	volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
	<volume_name> is existing volume name
	<brick[,brick, ...]> is existing brick(s)
	get volume remove bricks status
glusterVolumeRemoveBrickStop
	volumeName=<volume_name> bricks=<brick[,brick, ...]> [replica=<count>]
	<volume_name> is existing volume name
	<brick[,brick, ...]> is existing brick(s)
	stop volume remove bricks
glusterVolumeReplaceBrickCommitForce
	volumeName=<volume_name> existingBrick=<existing_brick> newBrick=<new_brick> 
	<volume_name> is existing volume name
	<existing_brick> is existing brick
	<new_brick> is new brick
	commit volume replace brick
glusterVolumeReset
	volumeName=<volume_name> [option=<option>] [force={yes|no}]
	<volume_name> is existing volume name
	reset gluster volume or volume option
glusterVolumeSet
	volumeName=<volume_name> option=<option> value=<value>
	<volume_name> is existing volume name
	<option> is volume option
	<value> is value to volume option
	set gluster volume option
glusterVolumeSetOptionsList
	
	list gluster volume set options
glusterVolumeSnapshotConfigList
	volumeName=<volume_name>
	get gluster volume snapshot configuration
glusterVolumeSnapshotConfigSet
	volumeName=<volume_name>optionName=<option_name>optionValue=<option_value>
	Set gluster snapshot configuration at volume leval
glusterVolumeSnapshotCreate
	volumeName=<volume_name> snapName=<snap_name> [snapDescription=<description of snapshot>] [force={yes|no}]
	create gluster volume snapshot
glusterVolumeSnapshotDeleteAll
	volumeName=<volume name>
	delete all snapshots for given volume
glusterVolumeSnapshotList
	[volumeName=<volume_name>]
	snapshot list for given volume
glusterVolumeStart
	volumeName=<volume_name> [force={yes|no}]
	<volume_name> is existing volume name
	start gluster volume
glusterVolumeStatsInfoGet
	volumeName=<volume name>
	Returns total, free and used space(bytes) of gluster volume
glusterVolumeStatus
	volumeName=<volume_name> [brick=<existing_brick>] [option={detail | clients | mem}]
	<volume_name> is existing volume name
	option=detail gives brick detailed status
	option=clients gives clients status
	option=mem gives memory status
	
	get volume status of given volume with its all brick or specified brick
glusterVolumeStop
	volumeName=<volume_name> [force={yes|no}]
	<volume_name> is existing volume name
	stop gluster volume
glusterVolumesList
	[volumeName=<volume_name>]
	[remoteServer=<remote_server]
	<volume_name> is existing volume name <remote_server> is a remote host name 
	list all or given gluster volume details
hibernate
	<vmId> <hiberVolHandle>
	Hibernates the desktop
hostdevChangeNumvfs
	<device_name>, <numvfs>
	Change number of virtual functions for given physical function.
hostdevHotplug
	<vmId> <hostdevspec>
	Hotplug hostdevto existing VM
	hostdevspec    specification of the device
hostdevHotunplug
	<vmId> <hostdevspec>
	Hotplug hostdevto existing VM
	names    names of the devices
hostdevListByCaps
	[<caps>]
	Get available devices on host with given capability. Leave caps empty to list all devices.
hostdevReattach
	<device_name>
	Reattach device back to a host.
hotplugDisk
	<vmId> <drivespec>
	Hotplug disk to existing VM
	drivespec parameters list: r=required, o=optional
	r   iface:<ide|virtio> - Unique identification of the existing VM.
	r   index:<int> - disk index unique per interface virtio|ide
	r   [pool:UUID,domain:UUID,image:UUID,volume:UUID]|[GUID:guid]|[UUID:uuid]
	r   format: cow|raw
	r   readonly: True|False   - default is False
	r   propagateErrors: off|on   - default is off
	o   bootOrder: <int>  - global boot order across all bootable devices
	o   shared: exclusive|shared|none
	o   optional: True|False
hotplugMemory
	<vmId> <memDeviceSpec>
	Hotplug memory to a running VM NUMA node
	memDeviceSpec parameters list: r=required, o=optional
	r   size: memory size to plug in mb.
	r   node: guest NUMA node id to plug into
hotplugNic

