Bug 156865 - another case where GUI doesn't validate its own output
Summary: another case where GUI doesn't validate its own output
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: redhat-config-cluster
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jim Parsons
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2005-05-04 19:25 UTC by Corey Marthaler
Modified: 2009-04-16 20:08 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-06-13 22:10:37 UTC



Description Corey Marthaler 2005-05-04 19:25:30 UTC
Description of problem:
I used the GUI to create this config, and when starting it up again it gives
this error about the second fence device:
Relax-NG validity error : Extra element fencedevices in interleave
/etc/cluster/cluster.conf:28: element fencedevices: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf fails to validate

[root@link-10 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="link-cluster">
        <cman/>
        <fence_daemon clean_start="0" post_fail_delay="20" post_join_delay="20"/>
        <clusternodes>
                <clusternode name="link-10.lab.msp.redhat.com" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="apc" port="2" switch="2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="link-12.lab.msp.redhat.com" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="apc" port="4" switch="2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="link-08" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="apc" port="8" switch="1"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>      ###### LINE 28 #####
                <fencedevice agent="fence_apc" ipaddr="link-apc" login="apc" name="apc" passwd="apc"/>
                <fencedevice agent="fence_apc" ipaddr="sfsd" login="fsdfsdf" name="my fence device" passwd="sdf"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="my fail over domain" ordered="0" restricted="0">
                                <failoverdomainnode name="link-10.lab.msp.redhat.com" priority="1"/>
                                <failoverdomainnode name="link-12.lab.msp.redhat.com" priority="1"/>
                                <failoverdomainnode name="link-08" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <clusterfs device="dfsdfs" fstype="gfs" mountpoint="sdfsdfs" name="my resource" options="dfsdfs"/>
                        <script file="sdfsdfs" name="my script"/>
                </resources>
                <service name="my service">
                        <netfs export="fhfgh" force_unmount="0" fstype="nfs4" host="fghfghf" mountpoint="gfgfgf" name="fh" options="ghfghfg"/>
                </service>
                <service name="my knob">
                        <script ref="my script">
                                <nfsexport name="gfhfghfgh"/>
                        </script>
                </service>
        </rm>
</cluster>
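[Editor's note] One plausible trigger for the interleave failure above, though the report itself never pins it down, is that several of the GUI-generated names ("my fence device", "my fail over domain") contain spaces, which are not legal in XML ID/IDREF values, so any ID-typed name attribute would fail schema validation. A minimal stand-alone check for that class of output (an illustration, not the tool's actual code):

```python
import re

# XML IDs must be NCNames: a letter or underscore, then letters, digits,
# '.', '-', or '_' -- no spaces. This regex is a rough ASCII approximation
# of the production in the XML spec.
NCNAME = re.compile(r"^[A-Za-z_][A-Za-z0-9._\-]*$")

def is_valid_xml_id(name: str) -> bool:
    """True if `name` could be used as an ID-typed attribute value."""
    return bool(NCNAME.match(name))

# Names taken from the config above; the ones containing spaces would be
# rejected wherever the schema types the attribute as an ID.
for name in ["apc", "my fence device", "my fail over domain", "link-cluster"]:
    print(f"{name!r}: {is_valid_xml_id(name)}")
```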

Comment 1 Stanko Kupcevic 2005-05-05 15:33:13 UTC
Fixed in 0.9.49

Comment 2 Corey Marthaler 2005-05-24 19:46:17 UTC
jbrassow found another case where there's a validity-checking error, in
versions -57 and -58:

/etc/cluster/cluster.conf:6: element fence: Relax-NG validity error : Element clusternode has extra content: fence
/etc/cluster/cluster.conf:5: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Expecting an element gulm, got nothing
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf:8: element device: validity error : IDREF attribute name references an unknown ID "APC"
/etc/cluster/cluster.conf fails to validate



file in question:

<?xml version="1.0" ?>
<cluster config_version="2" name="alpha_cluster">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="tng1-1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="APC" port="0" switch="0"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="tng1-2" votes="1"/>
                <clusternode name="tng1-3" votes="1"/>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_apc" ipaddr="tng1-apc" login="apc" name="APC" passwd="aqpc"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
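[Editor's note] The IDREF part of these failures can be reproduced without the Relax-NG schema at all. A minimal stand-alone sketch (assumed logic, not the validator's code) that parses a cluster.conf and reports fence <device> references with no matching <fencedevice> declaration:

```python
import xml.etree.ElementTree as ET

def unresolved_fence_refs(conf_xml: str) -> list[str]:
    """Return device names referenced under <fence> that have no matching
    <fencedevice name="..."> declaration. The match is case-sensitive,
    like XML ID/IDREF resolution."""
    root = ET.fromstring(conf_xml)
    declared = {fd.get("name") for fd in root.iter("fencedevice")}
    return [d.get("name") for d in root.iter("device")
            if d.get("name") not in declared]

# Hypothetical minimal config: the reference says "APC" but the
# declaration says "apc", so the IDREF cannot be resolved.
conf = """<cluster config_version="1" name="demo">
  <clusternodes>
    <clusternode name="n1" votes="1">
      <fence><method name="1"><device name="APC" port="1"/></method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="10.0.0.1" name="apc"/>
  </fencedevices>
</cluster>"""
print(unresolved_fence_refs(conf))  # ['APC']
```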

Comment 3 Stanko Kupcevic 2005-05-24 21:22:17 UTC
Fixed in 0.9.60

Comment 4 Corey Marthaler 2005-05-24 21:56:13 UTC
fix verified in -60.

Comment 5 Corey Marthaler 2005-05-25 19:25:30 UTC
another case in -62:

/etc/cluster/cluster.conf:5: element fence: Relax-NG validity error : Element clusternode has extra content: fence
/etc/cluster/cluster.conf:4: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Expecting an element gulm, got nothing
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf:7: element device: validity error : IDREF attribute name references an unknown ID "apc"
/etc/cluster/cluster.conf fails to validate



[root@morph-04 tmp]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="12" name="morph-GULM-cluster">
        <clusternodes>
                <clusternode name="morph-04.lab.msp.redhat.com" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="apc" port="4" switch="1"/>
                                </method>
                        </fence>
                        <multicast addr="88.88.888.888" interface="eth0"/>
                </clusternode>
                <clusternode name="morph-05.lab.msp.redhat.com" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="apc" port="5" switch="1"/>
                                </method>
                        </fence>
                        <multicast addr="88.88.888.888" interface="eth0"/>
                </clusternode>
                <clusternode name="jmhmhjm" votes="1">
                        <multicast addr="88.88.888.888" interface="eth0"/>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_apc" ipaddr="morph-apc" login="apc" name="apc" passwd="apc"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="sdjfhsdklfjsd" ordered="0" restricted="0">
                                <failoverdomainnode name="morph-04.lab.msp.redhat.com" priority="1"/>
                                <failoverdomainnode name="morph-05.lab.msp.redhat.com" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <clusterfs device="dfgdfgdfgdf" fstype="gfs" mountpoint="gdfg" name="dfgd" options="dfgdfg"/>
                </resources>
                <service domain="sdjfhsdklfjsd" exclusive="1" name="dkjfslkserviceskdfl">
                        <clusterfs device="vbcbvc" fstype="gfs" mountpoint="cvbc" name="cdbc" options="vbcv">
                                <clusterfs device="bcvbc" fstype="gfs" mountpoint="cvbcv" name="cvbcb" options="vb"/>
                        </clusterfs>
                </service>
                <service domain="sdjfhsdklfjsd" exclusive="1" name="xcvxc">
                        <clusterfs device="vxcvx" fstype="gfs" mountpoint="xcvxc" name="xcv" options="cv"/>
                </service>
                <service domain="sdjfhsdklfjsd" name="SERVICE">
                        <clusterfs device="fgdfgdfgdfgdfg" fstype="gfs" mountpoint="dfgd" name="gffgdfg" options="dfgd"/>
                </service>
        </rm>
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <cman>
                <multicast addr="88.88.888.888"/>
        </cman>
</cluster>


Comment 6 Stanko Kupcevic 2005-05-25 19:51:18 UTC
Fixed in 0.9.64

Comment 7 Corey Marthaler 2005-05-31 19:08:40 UTC
another case:

Relax-NG validity error : Extra element rm in interleave
/etc/cluster/cluster.conf:34: element rm: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf fails to validate

config file:

[root@morph-01 tmp]# cat /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="6" name="alpha_cluster">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="ngnghn" votes="1">
                        <fence>
                                <method name="1"/>
                                <method name="2"/>
                                <method name="3"/>
                                <method name="4"/>
                                <method name="5"/>
                                <method name="6">
                                        <device name="ghngh" port="ryj" switch="ryujryuj"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="ghngng" votes="1"/>
                <clusternode name="gnghng" votes="1">
                        <fence>
                                <method name="1"/>
                                <method name="2"/>
                                <method name="3">
                                        <device name="ghngh" port="ryujryu" switch="yur"/>
                                </method>
                                <method name="4"/>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_apc" ipaddr="gnhng" login="nghngh" name="ghngh" passwd="ngh"/>
                <fencedevice agent="fence_apc" ipaddr="___" login="___" name="___" passwd="___"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="nghnghn" ordered="0" restricted="0"/>
                        <failoverdomain name="m,fhj" ordered="0" restricted="0">
                                <failoverdomainnode name="ngnghn" priority="1"/>
                                <failoverdomainnode name="ghngng" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <clusterfs device="hnghngh" fstype="gfs" mountpoint="nghng" name="ghngh" options="nghn"/>
                </resources>
                <service name="nghnghng"/>
                <service domain="m,fhj" name="fjkryujrk">
                        <clusterfs ref="ghngh"/>
                        <clusterfs device="jryujr" fstype="gfs" mountpoint="yujryu" name="yujr" options="r">
                                <nfsexport name="ryujrryurjy">
                                        <clusterfs device="ujryujryuj" fstype="gfs" mountpoint="yujry" name="ryjur" options=""/>
                                </nfsexport>
                        </clusterfs>
                        <clusterfs ref="ghngh"/>
                </service>
        </rm>
</cluster>


Comment 8 Jim Parsons 2005-06-06 15:00:00 UTC
Fixed in 0.9.70-1.0

Comment 9 Corey Marthaler 2005-06-13 22:10:37 UTC
fix verified.

