Bug 1359627 - [ceph-ansible] RHEL install on an IPv6 setup fails
Summary: [ceph-ansible] RHEL install on an IPv6 setup fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: documentation
Version: 2
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3
Assignee: Aron Gunn
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-25 07:36 UTC by Tejas
Modified: 2018-11-19 05:33 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:33:23 UTC



Description Tejas 2016-07-25 07:36:38 UTC
Description of problem:
I am trying to install a RHEL cluster using IPv6 addresses. However, the MON is taking the IPv4 address while the OSDs are taking the IPv6 addresses.

This appears to be a regression, since this setup has worked in our earlier testing.

Version-Release number of selected component (if applicable):
ceph-ansible-1.0.5-31.el7scon.noarch

How reproducible:
Always

Steps to Reproduce:
1. Configure the group_vars/all file to use the IPv6 network (see the sketch below)
2. Run ansible-playbook; cluster creation completes successfully
3. The MON is running on the IPv4 address
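
For step 1, a minimal sketch of the IPv6-related group_vars/all settings, using the values from the installer-node excerpt later in this report:

## group_vars/all -- IPv6-related settings (sketch; values taken from this report)
monitor_interface: eno1                            # MON picks its address from this interface
journal_size: 500                                  # OSD journal size in MB
public_network: 2620:52:0:880:225:90ff:fefc:0/64   # IPv6 public network
#cluster_network: "{{ public_network }}"           # defaults to the public network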



Additional info:

The playbook log and the group_vars files are located at:
magna006:/root/ipv6_bug/

I have one node which hosts both the MON and an OSD.

[root@magna086 ~]# ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.8.128.86  netmask 255.255.248.0  broadcast 10.8.135.255
        inet6 2620:52:0:880:225:90ff:fefc:23fa  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::225:90ff:fefc:23fa  prefixlen 64  scopeid 0x20<link>
        ether 00:25:90:fc:23:fa  txqueuelen 1000  (Ethernet)
        RX packets 11517101  bytes 2053090786 (1.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 418638  bytes 71379306 (68.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfb920000-fb93ffff  

eno2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:25:90:fc:23:fb  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfb900000-fb91ffff  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 7  bytes 992 (992.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 992 (992.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@magna086 ~]# ceph -s
    cluster 15f4edf3-01fa-4b2b-8ed8-65e6628caf95
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs degraded
            64 pgs stuck inactive
            64 pgs undersized
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {magna086=10.8.128.86:6789/0}
            election epoch 3, quorum 0 magna086
     osdmap e13: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100 MB used, 2791 GB / 2791 GB avail
                  64 undersized+degraded+peered
[root@magna086 ~]# cat /etc/ceph/ceph.conf 
# Please do not change this file directly since it is managed by Ansible and will be overwritten

[global]
fsid = 15f4edf3-01fa-4b2b-8ed8-65e6628caf95
max open files = 131072
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor

[mon]
[mon.magna086]
host = magna086
# we need to check if monitor_interface is defined in the inventory per host or if it's set in a group_vars file
mon addr = 10.8.128.86

[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 500
cluster_network = 2620:52:0:880:225:90ff:fefc:0/64
public_network = 2620:52:0:880:225:90ff:fefc:0/64


On the installer node, group_vars/all contains:

## Monitor options
#
# You must define either monitor_interface or monitor_address. Preference
# will go to monitor_interface if both are defined.
monitor_interface: eno1
#monitor_address: 0.0.0.0
#mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf

## OSD options
#
journal_size: 500
public_network: 2620:52:0:880:225:90ff:fefc:0/64
#cluster_network: "{{ public_network }}"

Comment 2 Andrew Schoen 2016-07-25 14:24:46 UTC
If you use "monitor_interface" you'll always get the IPV4 address. If you need IPV6, you can use "monitor_address" and define it manually. I'm going to push this out of v2 as there is a workaround for this.

Comment 4 Harish NV Rao 2016-07-27 07:27:18 UTC
The info in comment 2 needs to be added to the installation guide for 2.0.

Doc team, please get in touch with Tejas and Rachana for further details.

Comment 7 Tejas 2016-07-28 15:43:33 UTC
Hi Aron,

    While installing with ceph-ansible over IPv6, we need to set
monitor_address: <IPv6 address>
instead of monitor_interface in the "group_vars/all" file.
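
Concretely, the change in group_vars/all (the address below is this host's global IPv6 address from the ifconfig output above, shown only as an example):

#monitor_interface: eno1                             # remove or comment out
monitor_address: 2620:52:0:880:225:90ff:fefc:23fa    # MON binds to this IPv6 address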

Please mention this as a note in the docs.

Thanks,
Tejas

Comment 9 Tejas 2016-08-01 13:53:51 UTC
Checked the docs; moving to Verified state.

