Bug 155319 - Diskdruid does not recognise existing RAID volumes
Summary: Diskdruid does not recognise existing RAID volumes
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 4
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Anaconda Maintenance Team
QA Contact: Mike McLean
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2005-04-19 05:56 UTC by Stig Nielsen
Modified: 2007-11-30 22:11 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-07-18 21:44:00 UTC


Attachments
dd /dev/md1 output (deleted) - 2005-04-27 19:24 UTC, Stig Nielsen
Output of 'lsmod' (deleted) - 2005-06-12 19:00 UTC, Stig Nielsen
Output of 'dmesg' (deleted) - 2005-06-12 19:01 UTC, Stig Nielsen
Output of 'lspci' (deleted) - 2005-06-12 19:01 UTC, Stig Nielsen

Description Stig Nielsen 2005-04-19 05:56:25 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.5) Gecko/20041110 Firefox/1.0

Description of problem:
During installation, when re-using old SCSI RAID volumes (md0, md1, md2, created with RH9 and recognised without problems during the FC3 install), DiskDruid displays their "Type" as "foreign".

Each volume is RAID type "0", combined from 2 partitions, and all use the ext3 file system:
 
/dev/sda2   /dev/sdb2   -> md0 
/dev/sda3   /dev/sdb3   -> md1
/dev/sda4   /dev/sdb4   -> md2
(sda1 and sdb1 are both swap, no raid)

The funny thing is that the devices are now identified as

/dev/sdf2   /dev/sdg2   -> md0 
/dev/sdf3   /dev/sdg3   -> md1
/dev/sdf4   /dev/sdg4   -> md2

Please let me know if I should provide more details.
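
For reference, a rough sketch of how the existing superblocks could be
double-checked from a rescue shell, assuming mdadm is available on the boot
image (the sdf/sdg names follow the naming seen during this install):

# md members are matched by the UUID stored in their superblock, so the
# sda -> sdf renaming should not matter; --examine reads the superblock
# straight from a member partition.
mdadm --examine /dev/sdf2        # should report raid0, 2 devices (member of md0)
mdadm --examine --scan           # prints an ARRAY line for every detected array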



Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Choose "Custom Install"
  

Actual Results:  Can't go any further as md2 partition is /home and cannot be formatted 

Expected Results:  The installer should let me choose the existing mdX volumes.

Additional info:

Comment 1 Jeremy Katz 2005-04-27 05:31:08 UTC
Can you provide a dd of the first meg of one of the raid volumes?

Comment 2 Stig Nielsen 2005-04-27 19:24:47 UTC
Created attachment 113733 [details]
dd /dev/md1 output

Comment 3 Stig Nielsen 2005-04-27 19:37:19 UTC
Thanks Jeremy

Attachment 113733 [details] is the output of 'dd if=/dev/md1 of=md1.dd bs=1M count=8'
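
A quick sanity check on the dump (just a sketch; it assumes the standard ext3
layout with the superblock at byte offset 1024 and the 0xEF53 magic 56 bytes
into it):

file md1.dd                     # usually reports "Linux ext2/ext3 filesystem data"
dd if=md1.dd bs=1 skip=1080 count=2 2>/dev/null | od -tx1   # expect bytes "53 ef"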

Below is part of /etc/fstab as of FC3:

/dev/md0                /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
/dev/md2                /home                   ext3    defaults        1 2
LABEL=/opt              /opt                    ext3    defaults        1 2
/dev/md1                /usr                    ext3    defaults        1 2
/dev/sdb1               swap                    swap    defaults        0 0
/dev/sda1               swap                    swap    defaults        0 0


Comment 4 Jeremy Katz 2005-04-27 20:17:23 UTC
Hrmm, it's definitely showing up as ext3 when I run the same code that's used
for the sniffing.  Is it definitely starting the raid?  If so, can you switch to
tty2 and run

raidstart md2
python -c 'import partedUtils; print partedUtils.sniffFilesystemType("/dev/md2")'

for all of the md*?
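
In other words, something along these lines from the tty2 shell (a sketch only;
it assumes the anaconda Python modules are importable from that shell):

for md in md0 md1 md2; do
    raidstart /dev/$md    # start the array so the md device can be read
    python -c "import partedUtils; print partedUtils.sniffFilesystemType('/dev/$md')"
done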

Comment 5 Stig Nielsen 2005-04-29 04:28:45 UTC
I'm just gonna post what I see (maybe there are some PATH problems?)
raidstart md0 returns 'usage: raidstart /dev/md[minornum]'

raidstart /dev/md0 returns nothing (it seems to accept the command) but does not
seem to correct the problem (going back to X, pressing 'Back' one step and then
going forward again, selecting 'configure with DiskDruid').

python -c 'import partedUtils; print partedUtils.sniffFilesystemType("/dev/md0")'
returns something like 'partedUtils not found' for all of md0, md1 and md2 (I
tried to tee the output of the command, but the file is empty)

find -name partedUtils returns:
/mnt/runtime/usr/lib/anaconda/partedUtils.py
/mnt/runtime/usr/lib/anaconda/partedUtils.pyc
/mnt/runtime/usr/lib/python2.4/site-packages/partedmodule.so
/mnt/runtime/usr/sbin/parted
/mnt/source/Fedora/RPMS/parted-1.6.22-1.i386.rpm
/mnt/source/Fedora/RPMS/parted-devel-1.6.22-1.i386.rpm

Any suggestions? 

Thanks

Comment 6 Chris Lumens 2005-05-09 16:01:25 UTC
Try

PYTHONPATH=/mnt/runtime/usr/lib/anaconda python -c ....

instead.  The problem there is that it just can't find the location of the
anaconda Python modules.
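
So the full command would look something like this (a sketch, using the module
path found with find above):

PYTHONPATH=/mnt/runtime/usr/lib/anaconda \
    python -c 'import partedUtils; print partedUtils.sniffFilesystemType("/dev/md0")'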

Comment 7 Stig Nielsen 2005-06-10 15:44:10 UTC
Thanks for the info. Sorry for the delay....
raidstart md1
PYTHONPATH=/mnt/runtime/usr/lib/anaconda python -c python -c 'import
partedUtils; print partedUtils.sniffFilesystemType("/dev/md1")'

returns:
* Tried to read pagesize for /dev/md1 in sniffFilesystemType and only read 0
None

Comment 8 Stig Nielsen 2005-06-10 17:10:33 UTC
Actually, this was tested on FC4 Test 3 (3.92).

PS: there is a typo in the previous message; the command used was:
PYTHONPATH=/mnt/runtime/usr/lib/anaconda python -c 'import
partedUtils; print partedUtils.sniffFilesystemType("/dev/md1")'

Comment 9 Stig Nielsen 2005-06-12 19:00:23 UTC
Created attachment 115340 [details]
Output of 'lsmod'

Comment 10 Stig Nielsen 2005-06-12 19:01:14 UTC
Created attachment 115341 [details]
Output of 'dmesg'

Comment 11 Stig Nielsen 2005-06-12 19:01:59 UTC
Created attachment 115342 [details]
Output of 'lspci'

Comment 12 Stig Nielsen 2005-07-18 21:44:00 UTC
I tried the FC4 release and that seems to detect the volumes as ext3. 

To make sure that nothing had changed on the hardware, I tried FC4 Test 3, with
the same result as before: it did not detect the RAID volumes.

However, whatever you guys did in the released FC4 corrected the problem.
Please let me know if I need to do further tests. For now, I am changing the
status to resolved.

