Bug 1366296 - LVM RAID - Add support for raid level takeover/reshape (part 2) [NEEDINFO]
Summary: LVM RAID - Add support for raid level takeover/reshape (part 2)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On: 1191630
Blocks: 834579 1189124 1346081 1394039
 
Reported: 2016-08-11 14:26 UTC by Jonathan Earl Brassow
Modified: 2017-08-01 21:47 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.169-1.el7
Doc Type: Enhancement
Doc Text:
Support added in LVM for RAID level takeover

LVM now provides full support for RAID takeover, previously available as a Technology Preview, which allows users to convert a RAID logical volume from one RAID level to another. This release expands the number of supported RAID takeover combinations; some transitions may require intermediate steps. New RAID types that are added by means of RAID takeover are not supported in older released kernel versions; these RAID types are raid0, raid0_meta, raid5_n, and raid6_{ls,rs,la,ra,n}_6. Users creating those RAID types or converting to those RAID types on Red Hat Enterprise Linux 7.4 cannot activate the logical volumes on systems running previous releases. RAID takeover is available only on top-level logical volumes in single machine mode (that is, takeover is not available for cluster volume groups or while the RAID is under a snapshot or part of a thin pool).
Clone Of: 1191630
Environment:
Last Closed: 2017-08-01 21:47:18 UTC
slevine: needinfo? (heinzm)


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2222 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2017-08-01 18:42:41 UTC

Comment 1 Jonathan Earl Brassow 2016-08-11 14:40:12 UTC
Part 1 included:
 conversions between any of: striped, raid0, raid0_meta, raid4   
 conversions between any of: linear, raid1, mirror

Part 2 will finish-off the remaining combinations.
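
A minimal sketch of the part-1 conversions, assuming an existing striped LV $vg/striped_lv and a linear LV $vg/linear_lv (names are placeholders):

  lvconvert --type raid0_meta $vg/striped_lv   (striped -> raid0_meta)
  lvconvert --type striped $vg/striped_lv      (and back)
  lvconvert --type raid1 -m 1 $vg/linear_lv    (linear -> raid1)
  lvconvert --type linear $vg/linear_lv        (and back)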

Comment 3 Heinz Mauelshagen 2017-02-10 22:50:35 UTC
Related upstream commits:
baba3f8 lvconvert: add conversion from/to raid10
a4bbaa3 lvconvert: add segtypes raid6_{ls,rs,la,ra}_6 and conversions to/from it
3673ce4 lvconvert: add segtype raid6_n_6 and conversions to/from it
60ddd05 lvconvert: add segtype raid5_n and conversions to/from it

Comment 4 Heinz Mauelshagen 2017-02-13 17:27:18 UTC
LV types to create for RAID testing (with lvresize/stripes/stripe size variations); sample lvcreate calls follow the list:
linear
mirror
raid1
striped
raid0
raid0_meta
raid4
raid5
raid5_n
raid5_ls
raid5_rs
raid5_la
raid5_ra
raid6
raid6_zr
raid6_nr
raid6_nc
raid6_n_6
raid6_ls_6
raid6_rs_6
raid6_la_6
raid6_ra_6
raid10
raid10_near
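
Sample lvcreate calls for a few of the above types; $vg, names and sizes are placeholders:

  lvcreate --type raid5_ls -i 3 -I 64k -L 1g -n raid5_ls_lv $vg
  lvcreate --type raid6_nc -i 3 -I 128k -L 1g -n raid6_nc_lv $vg
  lvcreate --type raid10 -i 2 -m 1 -L 1g -n raid10_lv $vg
  lvcreate --type mirror -m 1 -L 1g -n mirror_lv $vg
  lvresize -L +512m $vg/raid5_ls_lv            (lvresize variation)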

LV types to convert to/from (takeover conversions); a sample conversion sequence follows the list:
linear <-> raid1
linear <-> mirror
mirror <-> raid1
striped <-> raid0
striped <-> raid0_meta
striped <-> raid4
striped <-> raid5 (i.e. raid5_n)
striped <-> raid6 (i.e. raid6_n_6)
striped <-> raid10 (i.e. raid10_near)
raid0 <-> raid0_meta
raid0 <-> raid4
raid0 <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
raid0 <-> raid6 (i.e. raid6_n_6; use --type raid6/raid6_n_6)
raid0 <-> raid10 (i.e. raid10_near; use --type raid10/raid10_near)
raid0_meta <-> raid4
raid0_meta <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
raid0_meta <-> raid6 (i.e. raid6_n_6; use --type raid6/raid6_n_6)
raid0_meta <-> raid10 (i.e. raid10_near)
raid4 <-> raid5 (i.e. raid5_n; use --type raid5/raid5_n)
raid5_n <-> raid6
raid5_ls <-> raid6_ls_6
raid5_rs <-> raid6_rs_6
raid5_ra <-> raid6_ra_6
raid5_la <-> raid6_la_6
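
A sample takeover sequence walking several rows of the list, assuming a 3-legged striped LV $vg/$RaidLV and waiting for each conversion to finish (e.g. via "lvs -a -o name,segtype,sync_percent $vg") before requesting the next one:

  lvconvert --type raid0_meta $vg/$RaidLV      (striped -> raid0_meta)
  lvconvert --type raid5_n $vg/$RaidLV         (raid0_meta -> raid5, i.e. raid5_n)
  lvconvert --type raid6_n_6 $vg/$RaidLV      (raid5_n -> raid6, i.e. raid6_n_6)
  lvconvert --type raid5_n $vg/$RaidLV         (back down to raid5_n)
  lvconvert --type raid0 $vg/$RaidLV           (raid5_n -> raid0)
  lvconvert --type striped $vg/$RaidLV         (raid0 -> striped)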

Other raid type conversions can be tested as well, but those actually fail,
e.g. striped (> 1 leg) -> raid1 or raid5 <-> raid10.

LV types to convert to/from (reshape layout variations); see the sketch after the list:
raid5,raid5_ls,raid5_rs,raid5_la,raid5_ra,raid5_n into each other
raid6,raid6_zr,raid6_nr,raid6_nc,raid6_ls_6,raid6_rs_6,raid6_la_6,raid6_ra_6 into each other
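
For example, starting from a raid6_n_6 LV $vg/$RaidLV, the layout reshapes could be exercised like this (each reshape has to finish before the next one is requested):

  lvconvert --type raid6_nc $vg/$RaidLV        (raid6_n_6 -> raid6_nc)
  lvconvert --type raid6_zr $vg/$RaidLV        (raid6_nc -> raid6_zr)
  lvconvert --type raid6_nr $vg/$RaidLV        (raid6_zr -> raid6_nr)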

LV types to convert (stripesize variations, i.e. "lvconvert --stripesize N $RaidLV"):
raid4, raid5, raid5_ls, raid5_rs, raid5_la, raid5_ra, raid5_n,
raid6,raid6_zr,raid6_nr,raid6_nc,raid6_ls_6,raid6_rs_6,raid6_la_6,raid6_ra_6

LV types to convert (stripes variations, i.e. "lvconvert --stripes N $RaidLV"):
raid4, raid5, raid5_ls, raid5_rs, raid5_la, raid5_ra, raid5_n,
raid6,raid6_zr,raid6_nr,raid6_nc,raid6_ls_6,raid6_rs_6,raid6_la_6,raid6_ra_6
(Removing stripes requires all previous stripes to remain during the reshape so
 their data can be retrieved, plus a second "lvconvert $RaidLV" call to remove
 them after the reshape has finished freeing them; see the example below.)
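
A minimal sketch of both variations plus the two-step stripe removal, assuming a 3-striped raid5 LV $vg/$RaidLV:

  lvconvert --stripes 4 $vg/$RaidLV            (add a stripe: 3 -> 4)
  lvconvert --stripesize 128k $vg/$RaidLV      (change the stripe size)
  lvconvert --stripes 3 $vg/$RaidLV            (start the stripe-removing reshape)
  [wait for the reshape to finish, e.g. watch "lvs -a -o name,segtype,sync_percent $vg"]
  lvconvert $vg/$RaidLV                        (second call removes the freed stripe images)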

Convert striped/raid0/raid0_meta into raid5/raid6, change stripesize, convert back (see below).
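
For example, as one round trip on a 3-legged striped LV $vg/$RaidLV (each step must complete before the next):

  lvconvert --type raid5_n $vg/$RaidLV
  lvconvert --stripesize 256k $vg/$RaidLV
  lvconvert --type striped $vg/$RaidLV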

Test region size changes on conversion to raid1/raid4/5/6/10.

Test region size changes on raid1/raid4/5/6/10 without level conversion.
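
Assuming lvconvert's --regionsize option (as documented in lvconvert(8)) and placeholder LV names, both cases could look like:

  lvconvert --type raid1 -m 1 --regionsize 1m $vg/$LinearLV   (region size set during takeover)
  lvconvert --regionsize 2m $vg/$RaidLV                       (region size change alone)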

Test failing legs during conversion (up takeover from lower to higher raid level):
- fail the additional leg(s) on up conversion from a lower raid level -> no data loss
- failing the previous legs -> potential data loss:
  o striped/raid0/raid0_meta -> data loss
  o raid4/raid5(_n) -> raid6(_n_6) and one previous leg failed -> no data loss
  o new leg fails -> no data loss; test transient failure; test permanent failure
    (vgreduce --removemissing -f $vg; lvconvert down to the previous layout shall
     succeed; see the sketch below)
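
A sketch of the permanent-failure recovery called out in the last bullet, assuming the newly added leg's PV failed for good during a raid5_n -> raid6_n_6 up takeover:

  vgreduce --removemissing -f $vg              (drop the permanently failed PV from the VG)
  lvconvert --type raid5_n $vg/$RaidLV         (down conversion to the previous layout shall succeed)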

Test failing legs during conversion (down takeover from higher to lower raid level):
- any raid level specific transient/permanent failure tests apply after conversion
- if data legs other than the removed ones fail during conversion -> data loss in case
  the number of failed legs > remaining parity devices;
  e.g. raid5_n -> striped (last dedicated parity leg removed and any of the remaining
  legs failing -> data loss);
  e.g. raid6_n_6 -> raid5_n (last dedicated Q-syndrome leg removed and one remaining
  leg failing -> no data loss)

Comment 5 Heinz Mauelshagen 2017-02-28 17:20:42 UTC
lvm2 upstream commit 34caf8317243 and its prerequisites provide the LV types and conversions listed in comment #4 (and the subset requested in the initial description).

Comment 7 Steven J. Levine 2017-05-05 16:52:22 UTC
Heinz:

I did a little editing of the feature description here (and gave it a title) for the release notes. Does that look ok to you?

Also, I think we only need one release note description for both this BZ and BZ#1191630 (with the description referencing both BZ numbers). Would that be ok?

Steven

Comment 9 Steven J. Levine 2017-05-22 17:18:32 UTC
Adding info to doc text noting that RAID takeover was previously available as Tech Preview.

Comment 11 errata-xmlrpc 2017-08-01 21:47:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222

