Bug 1366036 - Unable to create striped raid on VGs with 1k extent sizes... again
Summary: Unable to create striped raid on VGs with 1k extent sizes... again
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1179970
 
Reported: 2016-08-10 20:08 UTC by Corey Marthaler
Modified: 2017-08-01 21:47 UTC
CC: 7 users

Fixed In Version: lvm2-2.02.171-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 21:47:18 UTC


Attachments: none


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2222 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2017-08-01 18:42:41 UTC

Description Corey Marthaler 2016-08-10 20:08:49 UTC
Description of problem:
This is another regression of the test case in bugs 834050, 1067112, 


[root@host-078 ~]#  pvcreate --setphysicalvolumesize 1G /dev/sdb1 /dev/sdb2 /dev/sdc1 /dev/sdc2 /dev/sdf1 /dev/sdf2 /dev/sdg1 /dev/sdg2 /dev/sdh1 /dev/sdh2
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdb2" successfully created.
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/sdc2" successfully created.
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sdf2" successfully created.
  Physical volume "/dev/sdg1" successfully created.
  Physical volume "/dev/sdg2" successfully created.
  Physical volume "/dev/sdh1" successfully created.
  Physical volume "/dev/sdh2" successfully created.

[root@host-078 ~]# vgcreate -s 1K raid_sanity /dev/sdb1 /dev/sdb2 /dev/sdc1 /dev/sdc2 /dev/sdf1 /dev/sdf2 /dev/sdg1 /dev/sdg2 /dev/sdh1 /dev/sdh2
  Volume group "raid_sanity" successfully created

[root@host-078 ~]# pvscan
  PV /dev/sdb1   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdb2   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdc1   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdc2   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdf2   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdg2   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]
  PV /dev/sdh2   VG raid_sanity     lvm2 [1023.00 MiB / 1023.00 MiB free]

[root@host-078 ~]# lvcreate  --type raid1 -m 1 -n raid_on_1Kextent_vg -L 20M raid_sanity
  device-mapper: resume ioctl on (253:6) failed: Input/output error
  Unable to resume raid_sanity-raid_on_1Kextent_vg (253:6)
  Failed to activate new LV.
  Attempted to decrement suspended device counter below zero.

Aug 10 15:02:12 host-078 kernel: device-mapper: raid: Superblocks created for new raid set
Aug 10 15:02:12 host-078 kernel: md/raid1:mdX: not clean -- starting background reconstruction
Aug 10 15:02:12 host-078 kernel: md/raid1:mdX: active with 2 out of 2 mirrors
Aug 10 15:02:12 host-078 kernel: Choosing daemon_sleep default (5 sec)
Aug 10 15:02:12 host-078 kernel: created bitmap (1 pages) for device mdX
Aug 10 15:02:12 host-078 kernel: attempt to access beyond end of device
Aug 10 15:02:12 host-078 kernel: dm-2: rw=7185, want=9, limit=2
Aug 10 15:02:12 host-078 kernel: md: super_written gets error=-5, uptodate=0
Aug 10 15:02:12 host-078 kernel: md/raid1:mdX: Disk failure on dm-3, disabling device.
Aug 10 15:02:12 host-078 kernel: md/raid1:mdX: Operation continuing on 1 devices.
Aug 10 15:02:12 host-078 kernel: attempt to access beyond end of device
Aug 10 15:02:12 host-078 kernel: dm-4: rw=7185, want=9, limit=2
Aug 10 15:02:12 host-078 kernel: md: super_written gets error=-5, uptodate=0
Aug 10 15:02:12 host-078 kernel: attempt to access beyond end of device
Aug 10 15:02:12 host-078 kernel: dm-4: rw=7185, want=9, limit=2
Aug 10 15:02:12 host-078 kernel: md: super_written gets error=-5, uptodate=0
Aug 10 15:02:12 host-078 kernel: mdX: bitmap file is out of date, doing full recovery
Aug 10 15:02:12 host-078 kernel: attempt to access beyond end of device
Aug 10 15:02:12 host-078 kernel: dm-4: rw=16, want=9, limit=2
Aug 10 15:02:12 host-078 kernel: mdX: bitmap initialisation failed: -5
Aug 10 15:02:12 host-078 kernel: device-mapper: raid: Failed to load bitmap
Aug 10 15:02:12 host-078 kernel: device-mapper: table: 253:6: raid: preresume failed, error = -5
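
For context, the "want=9, limit=2" lines suggest that MD is trying to write its superblock/bitmap through sector 9 of a raid metadata sub-LV, while with a 1 KiB extent size each *_rmeta_* sub-LV is a single extent, i.e. only 2 sectors of 512 bytes. A hedged way to confirm this on a reproducer host (the LV and dm names below are illustrative):

# lvs -a -o lv_name,lv_size,devices --units s raid_sanity   # rmeta sub-LVs should report 2 sectors
# blockdev --getsz /dev/dm-4                                # size in 512-byte sectors of the device the kernel flagged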




Version-Release number of selected component (if applicable):
3.10.0-489.el7.x86_64

lvm2-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-libs-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-cluster-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-libs-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-libs-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016


How reproducible:
Every time

Comment 1 Corey Marthaler 2016-08-10 20:53:03 UTC
Forgot about the issues mentioned in 1179970:
https://bugzilla.redhat.com/show_bug.cgi?id=1179970#c7

This is a dup of that.

Comment 4 Alasdair Kergon 2016-08-17 18:40:14 UTC
So we're looking at a general LV-level restriction here.

Page sizes differ, and we expect VGs to work on different machines. We also expect to be able to mix and match LV types within a VG regardless of VG extent size.

The kernel raid code imposes a page size restriction.

What we can do is take the worst case of this and impose it on the creation of (or conversion to) raid LVs.

Existing LVs would be unaffected (insofar as they already work).

(Any alignment problems giving non-optimal allocation layouts should be a wider issue that should be addressed separately.)
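
A quick, hedged illustration of the page-size point (the values shown are examples, not taken from this host):

# getconf PAGESIZE                                         # 4096 on x86_64; some architectures run with larger pages
# vgs --noheadings -o vg_name,vg_extent_size raid_sanity   # the check added for this bug (comment 10) requires >= 4.00 KiB here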

Comment 5 Heinz Mauelshagen 2016-09-14 14:50:12 UTC
I have a patch fixing this, which also makes sure that the MetaLVs have a decent minimum size. It is unlikely to make it into 7.3 because of the lack of review/testing time.

Comment 6 Jonathan Earl Brassow 2016-09-16 19:58:20 UTC
I'm passing this on to rhel7.4.  I don't believe this regression qualifies as a blocker.  People should not be creating VGs with PEs that are more than 3 orders of magnitude smaller than the default.  No-one uses floppy disks anymore.  I have no problem waiting on this one.

Comment 8 Jonathan Earl Brassow 2017-03-20 13:58:06 UTC
(In reply to Jonathan Earl Brassow from comment #6)
> I'm passing this on to rhel7.4.  I don't believe this regression qualifies
> as a blocker.  People should not be creating VGs with PEs that are more than
> 3 orders of magnitude smaller than the default.  No-one uses floppy disks
> anymore.  I have no problem waiting on this one.

For the same reason, I am pulling the blocker flag.

Comment 10 Heinz Mauelshagen 2017-04-12 16:19:17 UTC
lvm2 2.02.16

# vgs --noh -oname,extentsize nvm
  nvm 1.00k


# lvcreate --ty raid0_meta -L500 -i2 -nr nvm
  Using default stripesize 64.00 KiB.
  Unable to create RAID LV: requires minimum VG extent size 4.00 KiB

Comment 11 Heinz Mauelshagen 2017-04-13 17:20:43 UTC
lvm2 2.02.169 that is

Comment 12 Heinz Mauelshagen 2017-04-13 17:23:26 UTC
Upstream commit 73df2aedf9788dcf2dbf09f20f8783c6d2108e75
in lvm2 since version 2.02.165
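
A hedged way to check whether an installed build already carries this change (the RHEL package named in "Fixed In Version" is lvm2-2.02.171-1.el7; upstream the commit is in 2.02.165 and later):

# rpm -q lvm2    # installed RHEL package version
# lvm version    # upstream LVM version string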

Comment 14 Corey Marthaler 2017-05-03 17:16:51 UTC
Verified that all (striped and mirrored) raid create attempts on 1k extent sized VGs now fail cleanly with the new error message, while the same attempts on 4k extent sized VGs succeed.


3.10.0-660.el7.x86_64
lvm2-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-libs-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-cluster-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017



# 1k vg attempt

[root@host-126 ~]# vgcreate -s 1K raid_sanity /dev/sda1 /dev/sda2 /dev/sdc1 /dev/sdc2 /dev/sdd1 /dev/sdd2 /dev/sdf1 /dev/sdf2 /dev/sdh1 /dev/sdh2
  Volume group "raid_sanity" successfully created
[root@host-126 ~]# lvcreate  --type raid1 -m 1 -n raid_on_1Kextent_vg -L 20M raid_sanity
  Unable to create RAID LV: requires minimum VG extent size 4.00 KiB


# 4k vg attempt

[root@host-126 ~]# vgcreate -s 4K raid_sanity /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2 /dev/sde1 /dev/sde2 /dev/sdf1 /dev/sdf2 /dev/sdg1 /dev/sdg2
  Volume group "raid_sanity" successfully created
[root@host-126 ~]# lvcreate  --type raid1 -m 1 -n raid_on_4Kextent_vg -L 20M raid_sanity
  Logical volume "raid_on_4Kextent_vg" created.
[root@host-126 ~]# lvs -a -o +devices
  LV                             VG           Attr       LSize   Cpy%Sync Devices
  raid_on_4Kextent_vg            raid_sanity  rwi-a-r---  20.00m 100.00   raid_on_4Kextent_vg_rimage_0(0),raid_on_4Kextent_vg_rimage_1(0)
  [raid_on_4Kextent_vg_rimage_0] raid_sanity  iwi-aor---  20.00m          /dev/sda1(3)
  [raid_on_4Kextent_vg_rimage_1] raid_sanity  iwi-aor---  20.00m          /dev/sda2(3)
  [raid_on_4Kextent_vg_rmeta_0]  raid_sanity  ewi-aor---  12.00k          /dev/sda1(0)
  [raid_on_4Kextent_vg_rmeta_1]  raid_sanity  ewi-aor---  12.00k          /dev/sda2(0)
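
For an existing VG that was already created with 1 KiB extents, one possible way around the new restriction, untested here and only workable when the VG's current layout can be re-expressed in larger extents, is to raise the extent size in place instead of recreating the VG:

# vgchange -s 4K raid_sanity   # errors out if existing LVs/PVs do not align to 4 KiB extents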

Comment 15 errata-xmlrpc 2017-08-01 21:47:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

