Bug 78008

Summary: Anaconda cannot successfully format lots of MDs
Product: [Retired] Red Hat Linux
Reporter: Chris Grijzen <chrisgrijzen>
Component: raidtools
Assignee: Doug Ledford <dledford>
Status: CLOSED WONTFIX
QA Contact: Brock Organ <borgan>
Severity: medium
Priority: medium
Version: 8.0
CC: katzj
Hardware: i386
OS: Linux
Doc Type: Bug Fix
Last Closed: 2004-11-27 23:10:17 UTC

Description Chris Grijzen 2002-11-17 12:35:17 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.0.3705)

Description of problem:
When you define many (say, ten or more) software RAID devices, put filesystems
on them using Disk Druid, and click Next, Anaconda formats some of the
filesystems and then aborts the installation with the message "An error occured
trying to format md1. This problem is serious, and the install cannot continue.
Press <Enter> to reboot your system."

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
In Disk Druid, do the following:
1. Create a software raid partition on device sda of 100M
2. Create a software raid partition on device sdb of 100M
3. Create a software raid1 device (sda1+sdb1) ext3 /boot
4. Create a software raid partition on device sda of 1000M
5. Create a software raid partition on device sdb of 1000M
6. Create a software raid1 device (sda2+sdb2) ext3 /usr
7. Create a software raid partition on device sda of 1000M
8. Create a software raid partition on device sdb of 1000M
9. Create a software raid1 device (sda3+sdb3) ext3 /home
10. Create a software raid partition on device sda of 500M
11. Create a software raid partition on device sdb of 500M
12. Create a software raid1 device (sda4+sdb4) ext3 /chroot
13. Create a software raid partition on device sda of 500M
14. Create a software raid partition on device sdb of 500M
15. Create a software raid1 device (sda5+sdb5) ext3 /cache
16. Create a software raid partition on device sda of 500M
17. Create a software raid partition on device sdb of 500M
18. Create a software raid1 device (sda6+sdb6) ext3 /var
19. Create a software raid partition on device sda of 500M
20. Create a software raid partition on device sdb of 500M
21. Create a software raid1 device (sda7+sdb7) ext3 /var/log
22. Create a software raid partition on device sda of 600M
23. Create a software raid partition on device sdb of 600M
24. Create a software raid1 device (sda8+sdb8) swap
25. Create a software raid partition on device sda of 500M
26. Create a software raid partition on device sdb of 500M
27. Create a software raid1 device (sda9+sdb9) ext3 /tmp
28. Create a software raid partition on device sda of 500M
29. Create a software raid partition on device sdb of 500M
30. Create a software raid1 device (sda10+sdb10) ext3 /
31. Create a software raid partition on device sda of REST
32. Create a software raid partition on device sdb of REST
33. Create a software raid1 device (sda11+sdb11) ext3 /data
34. Have everything marked to be formatted and click Next.

Actual Results:  "An error occured trying to format md1. This problem is 
serious, and the install cannot continue. Press <Enter> to reboot your system."

Expected Results:  Anaconda should be able to successfully create all of the 
filesystems on all of the MDs.

Additional info:

Comment 1 Jeremy Katz 2002-11-26 21:58:26 UTC
I can verify this, but the error message I'm getting from mkraid is that the
array is active and to run raidstop first.  Checking /proc/mdstat, md1 isn't
active although md10 is.  

Looks like a raidtools bug in how it parses /proc/mdstat for the device in question.
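
Note that the layout above yields eleven arrays, md0 through md10, and "md1" is
a prefix of "md10". If raidtools locates a device in /proc/mdstat with a plain
substring search, it would see md1 as active whenever md10 is, which matches the
behavior described here. The C sketch below illustrates that suspected failure
mode; it is an assumption about the mechanism, not code taken from the
raidtools source, and the /proc/mdstat excerpt is illustrative.

/* Suspected failure mode: finding a device in /proc/mdstat with a bare
 * substring match. "md1" matches inside "md10", so md1 looks active
 * whenever md10 is. Illustrative sketch only, not raidtools source. */
#include <stdio.h>
#include <string.h>

/* Illustrative /proc/mdstat excerpt: md10 is active, md1 is not listed. */
static const char *mdstat =
    "Personalities : [raid1]\n"
    "md10 : active raid1 sdb10[1] sda10[0]\n"
    "      104320 blocks [2/2] [UU]\n";

/* Naive check: any occurrence of the name counts as active. */
static int naive_is_active(const char *dev)
{
    return strstr(mdstat, dev) != NULL;  /* "md1" also hits "md10" */
}

/* Stricter check: the name must be followed by the " : " delimiter. */
static int strict_is_active(const char *dev)
{
    char needle[32];
    snprintf(needle, sizeof(needle), "%s : active", dev);
    return strstr(mdstat, needle) != NULL;
}

int main(void)
{
    printf("naive:  md1 active? %d\n", naive_is_active("md1"));   /* 1, wrong */
    printf("strict: md1 active? %d\n", strict_is_active("md1"));  /* 0, right */
    return 0;
}

Anchoring the match so the whole device name is a line-initial token would
avoid the false positive.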