Bug 231453 - kickstart raid install fails with ValueError: md2 is already in the mdList
Keywords:
Status: CLOSED DUPLICATE of bug 172648
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: anaconda
Version: 4.4
Hardware: i686
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Anaconda Maintenance Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2007-03-08 14:01 UTC by Dave Botsch
Modified: 2007-11-17 01:14 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-03-28 18:53:17 UTC



Description Dave Botsch 2007-03-08 14:01:48 UTC
Description of problem:

A kickstart installation with Linux software RAID partitions fails when using the
anaconda-ks.cfg generated by the original by-hand install. The error given is
ValueError: md2 is already in the mdList. It occurs during the
partition-formatting stage of the install.

Version-Release number of selected component (if applicable):
rhel4u4

How reproducible:
100% on test system

Steps to Reproduce:
1. Do a normal by-hand install. Make several (say 4) RAID 1 partitions.
2. Grab the generated anaconda-ks.cfg and uncomment the partition sections.
3. Try to reinstall using this ks file. No joy.
  
Actual results:

The error message: ValueError: md2 is already in the mdList

Expected results:

The system installs successfully.

Additional info:

Partition info from the anaconda-ks.cfg file:
clearpart --all
part raid.8 --size=1024 --ondisk=sdb --asprimary
part raid.20 --size=1024 --ondisk=sdc --asprimary
part raid.21 --size=10000 --ondisk=sdc
part raid.9 --size=10000 --ondisk=sdb
part raid.22 --size=1024 --ondisk=sdc
part raid.11 --size=1024 --ondisk=sdb
part swap --size=100 --grow --ondisk=sdc --asprimary
part swap --size=100 --grow --ondisk=sdb --asprimary
part raid.35 --size=100 --grow --ondisk=sde
part raid.32 --size=100 --grow --ondisk=sdd
part /backup --fstype ext3 --size=100 --grow --ondisk=sda
part /vicepa --fstype ext3 --size=100 --ondisk=sda --asprimary
part raid.23 --size=100 --grow --ondisk=sdc
part raid.13 --size=100 --grow --ondisk=sdb
raid /boot --fstype ext3 --level=RAID1 raid.8 raid.20
raid /scratch --fstype ext3 --level=RAID1 raid.32 raid.35
raid /var --fstype ext3 --level=RAID1 raid.9 raid.21
raid /cache --fstype ext3 --level=RAID1 raid.11 raid.22
raid / --fstype ext3 --level=RAID1 raid.13 raid.23

Comment 1 Dave Botsch 2007-03-13 20:50:23 UTC
Kickstart seems to be using its own numbering scheme for the md devices and
somehow renumbering devices it has already numbered, or that I have numbered.

For example, if I specify --device=mdX in each of the raid kickstart commands,
I'll end up with a system that doesn't install: /dev/md4's partitions (which
are /dev/md4 according to /proc/mdstat) may get renumbered after the fact as
/dev/md1 ... clearly this can't work.
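
For reference, the explicit numbering would look like the following (a sketch
based on the raid lines from the ks file above; the --device values here are
illustrative, not what anaconda actually ends up assigning):

raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.8 raid.20
raid /var --fstype ext3 --level=RAID1 --device=md1 raid.9 raid.21
raid /cache --fstype ext3 --level=RAID1 --device=md2 raid.11 raid.22
raid / --fstype ext3 --level=RAID1 --device=md3 raid.13 raid.23
raid /scratch --fstype ext3 --level=RAID1 --device=md4 raid.32 raid.35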

Comment 2 Dave Botsch 2007-03-13 21:23:04 UTC
I seem to have found the solution...

Before attempting to reinstall with kickstart, boot into rescue mode and use
fdisk to clear out all partitions.

So the clearpart command in the kickstart file does not seem to do the right
thing when software RAID partitions are already present on the disk.
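
A scriptable form of that workaround might be a %pre section that zeroes each
RAID disk's partition table before clearpart runs (an untested sketch; the disk
names are the ones from the part commands above, and it assumes dd is available
in the %pre environment, which it normally is):

%pre
# Zero the MBR/partition table on each disk holding RAID members, so stale
# software-RAID partitions are no longer visible to anaconda.
for disk in sdb sdc sdd sde; do
    dd if=/dev/zero of=/dev/$disk bs=512 count=1
done

(RHEL4-era kickstart has no %end; the %pre section simply runs until the next
section header.)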

Comment 3 Chris Lumens 2007-03-28 18:53:17 UTC

*** This bug has been marked as a duplicate of 172648 ***

