Summary:          Installer exits when probing disks if 8TB LUN visible on system
Product:          Red Hat Enterprise Linux 4
Component:        parted
Version:          4.0
Status:           CLOSED CURRENTRELEASE
Fixed In Version: RHEL-4.6
Reporter:         Gary Case <gcase>
Assignee:         David Cantrell <dcantrell>
QA Contact:       Brock Organ <borgan>
CC:               bdonahue, coughlan, rkenna
Doc Type:         Bug Fix
Last Closed:      2008-02-13 22:52:39 UTC
Description Gary Case 2005-05-24 17:22:59 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.7) Gecko/20050416 Red Hat/1.0.3-1.4.1 Firefox/1.0.3

Description of problem:
When an 8TB LUN is advertised to a RHEL4 AS U1 beta (0421.0 build) system, the installer exits after probing for storage devices, leaving the user at a "You may now reboot your computer" type message. The LUN for this test was created on a NetApp NearStore R200. The test computer was an HP DL380, and the FC adapter in the RHEL system was a dual-port QLogic 2312.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Set up an 8TB LUN on the storage device and advertise the LUN to the RHEL system.
2. Begin the install.
3. Proceed through the install to the point where storage is probed.

Actual Results:
The installer exits and waits for me to reboot.

Expected Results:
A normal installation with Disk Druid able to see the 8TB LUN, or a normal installation with the LUN not visible to Disk Druid.

Additional info:
RHEL could be installed and would function normally when the LUN was not visible. If the LUN was set up after installation, the system could successfully mount a filesystem created on the LUN and could be rebooted without incident. During the install sequence, if I went to a shell prompt before the installer reached the storage section, I could use "fdisk -l" and see the LUN. However, when the installer reached the storage section it would exit.
Comment 2 Jeremy Katz 2005-05-24 19:07:06 UTC
Just exits abnormally with no other text? Can you grab both /tmp/syslog and /tmp/anaconda.log from the shell? Also, what happens if you just run parted /dev/sda from the installer shell?
Comment 3 Gary Case 2005-05-26 23:26:25 UTC
Unfortunately, I can't perform any additional testing, as I was offsite at NetApp when I discovered the issue. The installer just exited, leaving me at a screen with multiple lines of text that ended with "you may safely reboot your system". I'm sorry I can't be more specific than that, but I only ran the test as an afterthought as I was leaving to come back to RH. I didn't copy down the exact text, but I was able to repeat the problem in both text and GUI installs. Do we have any way to replicate the problem in-house?
Comment 4 Tom Coughlan 2005-05-27 13:20:58 UTC
> Do we have any way to replicate the problem in-house?

Yes, I will reproduce this in Westford.
Comment 5 Rob Kenna 2005-12-14 14:02:25 UTC
Did this get resolved for U3?
Comment 6 Chris Lumens 2007-03-14 20:21:04 UTC
Tom - were you ever able to reproduce this problem, especially on later update releases?
Comment 7 Tom Coughlan 2007-03-14 21:05:00 UTC
Ryan,

Please test this on the Winchester in the lab. I'd suggest RHEL 4 U4. Try a 32-bit system and a 64-bit system.

Tom
Comment 8 Ryan Powers 2007-03-27 16:35:00 UTC
I just reproduced this on p750.lab (32-bit). I had the exact same behavior using RHEL4 U4 as reported. I'm trying to locate a 64-bit system that I can test this on next.
Comment 9 Ryan Powers 2007-03-28 18:44:49 UTC
I just tried to reproduce this crash on a 64-bit system (hammer7.lab), and it did not have the same issue. It reported that there was a GPT partition table, but that something was wrong with the fake standard partition table (i.e. the protective MBR), which is entirely possible; I'm not sure what's actually on the disk. This is the same drive that caused the 32-bit installer to fail, and it passed on 64-bit.
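A note on why the 32-bit/64-bit split in comments 8 and 9 is suggestive (this is an inference from the comments, not a confirmed root cause): the 512-byte sector count of an 8 TB LUN does not fit in a signed 32-bit integer, so any probe-path code storing sector counts or offsets in a 32-bit type would overflow on this disk while working fine on a 64-bit build. A quick back-of-the-envelope check:

```shell
# Sector count of an 8 TB (decimal) LUN with 512-byte sectors,
# compared against the largest value a signed 32-bit int can hold.
sectors=$(( 8000000000000 / 512 ))
int32_max=$(( (1 << 31) - 1 ))
echo "sectors=${sectors}"
echo "int32_max=${int32_max}"
if [ "$sectors" -gt "$int32_max" ]; then
    echo "sector count does not fit in a signed 32-bit int"
fi
```

The 8 TB figure and the 32-bit-overflow reading are assumptions for illustration; the actual failing code path in parted/anaconda is not identified in this report.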
Comment 10 David Cantrell 2007-07-23 19:07:58 UTC
Can someone test with 4.5 and see if it's still present? If not, I'd like to perform an install using this storage system. I have an idea as to what's happening, but I need a shell to debug it.
Comment 11 RHEL Product and Program Management 2007-11-29 04:27:03 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
Comment 12 Alexander Todorov 2008-01-09 14:57:30 UTC
Tom, Ryan, do we have a reproducer for that? Automation is desired if possible. Thanks.
Comment 13 Tom Coughlan 2008-01-16 23:22:29 UTC
I set up the same system as comment 8 and tried RHEL 4.6. The storage device happens to be > 8TB for this test:

SCSI device sdc: 23385821184 512-byte hdwr sectors (11973540 MB)

sda and sdb are normal-sized IDE drives. The installer discovered all the storage properly. I selected "automatic partitioning" and "remove all partitions on all disks". The "Disk Setup" page came up and showed the proposal for partitions and LVM configuration. For the big disk, it showed:

  8388605 MB  sdc1
  3030253 MB  free

and for the root LV:

  VolGroup00        8865184 MB
  LogVol00 / ext3   8.85936e+06 MB

So it appears to be respecting the 8TB limit in RHEL 4 x86 (I did not check the numbers). When I completed the dialog and the install started, it failed right away with a window saying "An error occurred formatting VolGroup00/LogVol00. The problem is serious. Reboot...". Ctrl-Alt-F5 shows "mke2fs: Filesystem too large. No more than 2**31-1 blocks...". So it appears that anaconda's attempt to trim the big disk didn't quite work. (It would be nice to have a more informative message in the GUI error dialog.) This looks different from the original problem. I can retry with exactly 8TB if you like. This system does not have very good remote access or automation, unfortunately. If there is a better system in the Westford lab for you to debug on, I can probably attach the storage to it.
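The mke2fs failure above is consistent with ext3's block-count limit. As a back-of-the-envelope check (the 4 KiB block size is an assumption; the comment does not state it, though mke2fs normally chooses 4096 for filesystems this large), the 2**31-1 block cap works out to just under 8 TiB, which the ~8865184 MB root LV exceeds:

```shell
# ext3's block count is limited to 2**31 - 1; with an assumed 4 KiB block
# size, the maximum filesystem size is just under 8 TiB.
max_blocks=$(( (1 << 31) - 1 ))
block_size=4096
max_bytes=$(( max_blocks * block_size ))
echo "ext3 max: ${max_bytes} bytes"

# The root LV from comment 13, taking MB as 10^6 bytes:
lv_bytes=$(( 8865184 * 1000000 ))
if [ "$lv_bytes" -gt "$max_bytes" ]; then
    echo "LV exceeds the ext3 limit -> mke2fs: Filesystem too large"
fi
```

This is only a sanity check of the reported numbers, not a statement about where anaconda's trimming logic went wrong.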