Bug 1518121 - multisegment RAID1, allocator uses one disk for both legs
Summary: multisegment RAID1, allocator uses one disk for both legs
Keywords:
Status: NEW
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: 2.02.176
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-28 09:11 UTC by Marian Csontos
Modified: 2018-10-18 12:12 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
rule-engine: lvm-technical-solution?
rule-engine: lvm-test-coverage?



Description Marian Csontos 2017-11-28 09:11:10 UTC
Description of problem:
When creating a RAID1 LV with 2 legs spanning multiple disks, the allocator uses one of the disks for both legs.

Version-Release number of selected component (if applicable):
2.02.176

Affected versions: el7.2, el7.5; other versions not checked.

How reproducible:
100%

Steps to Reproduce:
- Given three 8 GB disks, /dev/sd[abc]:

    vgcreate vg /dev/sd[abc]
    lvcreate -n t1 -L 4G vg /dev/sda
    lvcreate -n t2 -L 4G vg /dev/sdb
    lvcreate -n r1 -m 1 -L 6G vg
    lvs -aoname,devices


Actual results:

# lvs -aoname,devices
  LV            Devices                      
  r1            r1_rimage_0(0),r1_rimage_1(0)
  [r1_rimage_0] /dev/sdc(1)                  <---- sdc is used for both _rimage_0...
  [r1_rimage_0] /dev/sdb(1024)               
  [r1_rimage_1] /dev/sda(1025)               
  [r1_rimage_1] /dev/sdc(1023)               <---- ...as well as for _rimage_1
  [r1_rmeta_0]  /dev/sdc(0)                  
  [r1_rmeta_1]  /dev/sda(1024)               
  t1            /dev/sda(0)                  
  t2            /dev/sdb(0) 

Expected results:

Each RAID1 image (and its metadata LV) is allocated only on PVs not used by the other image, so the mirror can survive a single disk failure.

Additional info:
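
A hedged helper (a sketch only, using the lvs reporting fields and the names from this reproducer) to spot the bad allocation automatically:

    # List the PVs backing each RAID1 image of vg/r1 and flag any PV that
    # shows up under more than one image. Assumes one PV per listed segment row.
    lvs -a --noheadings -o lv_name,devices vg \
      | awk '$1 ~ /rimage/ {
          img = $1; pv = $2; sub(/\(.*/, "", pv);   # strip the "(extent)" suffix
          if (seen[pv] && seen[pv] != img)
            printf "PV %s backs both %s and %s\n", pv, seen[pv], img;
          else
            seen[pv] = img;
        }'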

Comment 1 Marian Csontos 2017-11-28 12:53:58 UTC
In case of a failure, such a device cannot be repaired:

# lvconvert --repair vg/r1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  /dev/vg/r1: read failed after 0 of 4096 at 6442385408: Input/output error
  /dev/vg/r1: read failed after 0 of 4096 at 6442442752: Input/output error
  Couldn't find device with uuid UFK7K0-nGPE-76Rq-F5WC-xGig-UzXP-MnFuDz.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Unable to replace all PVs from vg/r1 at once.
  Failed to replace faulty devices in vg/r1.

TODO: Test with devices in sync.

Workaround: `lvcreate -n r1 -m 1 -L 6G vg /dev/sd[cab]`
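
A minimal sketch (device names as in this reproducer) to confirm that the workaround really keeps the two legs on disjoint PVs:

    # Recreate the mirror with the explicit PV order from the workaround and
    # list the PVs backing each image; no PV should appear under both
    # r1_rimage_0 and r1_rimage_1.
    lvremove -y vg/r1
    lvcreate -n r1 -m 1 -L 6G vg /dev/sd[cab]
    lvs -a --noheadings -o lv_name,devices vg | grep rimage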

Comment 2 Marian Csontos 2017-11-28 14:23:30 UTC
Waited for sync. Even at 100% in sync it is unable to repair. Also, only the first 4 GB can be read; reading from the second segment fails.

Read from first segment:

# dd if=/dev/vg/r1 of=/dev/null skip=4000 count=1 bs=1M
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00195199 s, 537 MB/s

Read from second segment fails:

# dd if=/dev/vg/r1 of=/dev/null skip=5000 count=1 bs=1M
dd: error reading ‘/dev/vg/r1’: Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00201181 s, 0.0 kB/s
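
A quick arithmetic cross-check (hedged, based on the segment sizes visible in the device-mapper tables below): the first linear segment of each image is 8372224 sectors of 512 bytes, i.e. 4088 MiB, so skip=4000 stays inside the first segment while skip=5000 falls into the second one.

    # 8372224 sectors * 512 B/sector = 4,286,578,688 B = 4088 MiB
    echo $(( 8372224 * 512 / 1024 / 1024 ))   # -> 4088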

Surprisingly, the RAID1 device health status reports "DA":

# dmsetup status
vg-r1_rmeta_1: 0 8192 linear 
vg-r1_rimage_1: 0 8372224 linear 
vg-r1_rimage_1: 8372224 4210688 linear 
vg-t2: 0 8388608 linear 
vg-r1_rmeta_0: 0 8192 linear 
vg-r1_rimage_0: 0 8372224 linear 
vg-r1_rimage_0: 8372224 4210688 linear 
vg-t1: 0 8388608 linear 
vg_stacker_OTzj-root: 0 12582912 linear 
vg-r1: 0 12582912 raid raid1 2 DA 12582912/12582912 idle 0 0 -
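
As a hedged aside: in the dm-raid status line the field after the device count is the per-device health string, where "A" means alive/in-sync and "D" means dead/failed, so "DA" here says device 0 (rmeta_0/rimage_0) is considered failed while device 1 is considered healthy.

    # Print just the health characters for vg-r1 (field position assumes the
    # usual dm-raid status layout: start len raid <type> <#devs> <health> ...).
    dmsetup status vg-r1 | awk '{print $6}'   # -> DA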

And, for reference, the segments only:

# lvs --segments -aolv_name,pe_ranges,le_ranges
  WARNING: Not using lvmetad because a repair command was run.
  /dev/vg/r1: read failed after 0 of 4096 at 6442385408: Input/output error
  /dev/vg/r1: read failed after 0 of 4096 at 6442442752: Input/output error
  Couldn't find device with uuid jDRXQI-jGSW-BAOG-LB3h-aLhd-8fPb-kHbPZf.
  LV            PE Ranges                             LE Ranges                                
  r1            r1_rimage_0:0-1535 r1_rimage_1:0-1535 [r1_rimage_0]:0-1535,[r1_rimage_1]:0-1535
  [r1_rimage_0] [unknown]:1-1022                      [unknown]:1-1022                         
  [r1_rimage_0] /dev/sdb:1024-1537                    /dev/sdb:1024-1537                       
  [r1_rimage_1] /dev/sda:1025-2046                    /dev/sda:1025-2046                       
  [r1_rimage_1] [unknown]:1023-1536                   [unknown]:1023-1536                      
  [r1_rmeta_0]  [unknown]:0-0                         [unknown]:0-0                            
  [r1_rmeta_1]  /dev/sda:1024-1024                    /dev/sda:1024-1024
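
For completeness, a hedged way (standard pvs reporting fields) to confirm which PV the unknown UUID from the error messages corresponds to; a missing PV shows up as [unknown] with an "m" in its attributes:

    # Match the UUID reported by lvconvert/lvs against the PVs in the system.
    pvs -o pv_name,pv_uuid,pv_attr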

Comment 3 Steve D 2018-10-18 12:12:26 UTC
I've just been bitten by this for the second time, though I could have sworn I specified the PVs manually. 2.02.176 (-4.1ubuntu3) on Ubuntu 18.04.

I also hit a whole load of scrub errors last night. The data stored in the fs seems fine; it looks like when I extended the RAID1 LV in question, the extended part did not get synced. Still investigating that one - it may have been triggered by my attempts to work around this bug.
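
A hedged sketch (reproducer names; assumes the sync_percent, raid_sync_action and raid_mismatch_count reporting fields and the --syncaction option are available in this lvm2 version) for checking whether a RAID1 LV is really in sync after an extension:

    # Report sync state and mismatches found by the last scrub, then start a
    # fresh "check" scrub on the LV.
    lvs -a -o lv_name,sync_percent,raid_sync_action,raid_mismatch_count vg
    lvchange --syncaction check vg/r1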

Any thoughts / progress?

