Bug 1695911 - --useexisting and --noformat options fail on raid devices [NEEDINFO]
Summary: --useexisting and --noformat options fail on raid devices
Keywords:
Status: NEW
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 30
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On: 1572602
Blocks: 1695913
 
Reported: 2019-04-03 21:35 UTC by Doug Ledford
Modified: 2019-04-05 10:40 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1695913
Environment:
Last Closed:
vponcova: needinfo? (vtrefny)



Description Doug Ledford 2019-04-03 21:35:49 UTC
Description of problem:

When attempting to install onto existing raid devices, anaconda fails to save the drive configuration, and manual configuration is not possible.

Version-Release number of selected component (if applicable):

Fedora 30 Server Beta and also Fedora 29 Server

How reproducible:

100%

Steps to Reproduce:
1. Create any raid array of your choice
2. Add part entries for raid array in kickstart
3. Add raid entry in kickstart with --useexisting or --noformat
4. Start install in Graphical mode using kickstart
5. Graphical mode will eventually come up and show an error on drive selection
6. Go into the drive pane and read the error listed; it varies depending on what the raid device was being used for
7. Attempts to use blivet-gui to fix the issue also fail: every raid device specified with either --useexisting or --noformat is present, but shows a size of 0 bytes and cannot be selected for use as a filesystem or LVM device
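A minimal kickstart fragment that triggers the failure might look like this (device and array names are illustrative; it assumes a pre-existing raid1 array /dev/md/Root built from sda1 and sdb1 before the install):

```text
clearpart --none --initlabel
part raid.sda1 --fstype="mdmember" --onpart=/dev/sda1 --noformat
part raid.sdb1 --fstype="mdmember" --onpart=/dev/sdb1 --noformat
raid / --device=Root --fstype="ext4" --level=RAID1 --useexisting
```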

Actual results:

Unable to complete the install, period.  It won't complete in an automated fashion, and it also won't allow you to resolve the issue manually and move forward that way.

Expected results:

I expected the devices to be used as directed and the install to proceed in an automated fashion.

Additional info:

This is happening on a home-built NAS device I have. It uses a combination of 2 NVMe drives and 4 SATA disks. Because of BIOS limitations, the NVMe devices cannot be the boot devices. The system also boots via UEFI. In order to make the system perform well as a NAS, and to be able to boot, there are two raid devices devoted to booting: a 6-disk raid1 at /boot and a 6-disk raid1 at /boot/efi. Then there is a 4-disk raid5 on the SATA disks used as an LVM PV. Then there are three additional raid1 arrays on the NVMe devices: a root array, a swap array, and an LVM PV intended to be used as a writeback cache for the LVM logical volumes on the 4-disk raid5 SATA PV.

In addition, since I don't want the /boot or /boot/efi raid devices wasting time waiting on reads from the SATA disks, those 6-disk raid1 arrays mark the SATA disks as write-mostly. Recreating this complex md setup is not possible using the raid command, and I don't want the big raid5 array to have to resync after each install, so a combination of --useexisting and --noformat is preferred. The resulting kickstart looks like this for the disk section:

# Disk partitioning information
ignoredisk --only-use=mmcblk1,nvme0n1,nvme1n1,sda,sdb,sdc,sdd
bootloader --location=mbr --boot-drive=sda
clearpart --none --initlabel
part raid.nvme0n1p1 --fstype="mdmember" --onpart=/dev/nvme0n1p1 --noformat
part raid.nvme0n1p2 --fstype="mdmember" --onpart=/dev/nvme0n1p2 --noformat
part raid.nvme0n1p3 --fstype="mdmember" --onpart=/dev/nvme0n1p3 --noformat
part raid.nvme0n1p4 --fstype="mdmember" --onpart=/dev/nvme0n1p4 --noformat
part raid.nvme0n1p5 --fstype="mdmember" --onpart=/dev/nvme0n1p5 --noformat
part raid.nvme1n1p1 --fstype="mdmember" --onpart=/dev/nvme1n1p1 --noformat
part raid.nvme1n1p2 --fstype="mdmember" --onpart=/dev/nvme1n1p2 --noformat
part raid.nvme1n1p3 --fstype="mdmember" --onpart=/dev/nvme1n1p3 --noformat
part raid.nvme1n1p4 --fstype="mdmember" --onpart=/dev/nvme1n1p4 --noformat
part raid.nvme1n1p5 --fstype="mdmember" --onpart=/dev/nvme1n1p5 --noformat
part raid.sda1 --fstype="mdmember" --onpart=/dev/sda1 --noformat
part raid.sda2 --fstype="mdmember" --onpart=/dev/sda2 --noformat
part raid.sda3 --fstype="mdmember" --onpart=/dev/sda3 --noformat
part raid.sdb1 --fstype="mdmember" --onpart=/dev/sdb1 --noformat
part raid.sdb2 --fstype="mdmember" --onpart=/dev/sdb2 --noformat
part raid.sdb3 --fstype="mdmember" --onpart=/dev/sdb3 --noformat
part raid.sdc1 --fstype="mdmember" --onpart=/dev/sdc1 --noformat
part raid.sdc2 --fstype="mdmember" --onpart=/dev/sdc2 --noformat
part raid.sdc3 --fstype="mdmember" --onpart=/dev/sdc3 --noformat
part raid.sdd1 --fstype="mdmember" --onpart=/dev/sdd1 --noformat
part raid.sdd2 --fstype="mdmember" --onpart=/dev/sdd2 --noformat
part raid.sdd3 --fstype="mdmember" --onpart=/dev/sdd3 --noformat
raid / --device=Root --fstype="ext4" --level=RAID1 --label=Root --useexisting
raid swap --device=Swap --fstype="swap" --level=RAID1 --label=Swap --useexisting
raid /boot --device=Boot --fstype="ext4" --level=RAID1 --label=Boot --useexisting
raid /boot/efi --device=EFI --fstype="efi" --level=RAID1 --fsoptions="umask=0077,shortname=winnt" --label=EFI --useexisting
part /boot/Fedora29-Install --fstype="ext2" --onpart=/dev/mmcblk1p7 --noformat
part /boot/Fedora30-Install --fstype="ext2" --onpart=/dev/mmcblk1p8 --noformat
raid pv.Srv_PV --device=Srv_PV --fstype="lvmpv" --level=RAID5 --label=Srv_PV --useexisting
raid pv.Cache_PV --device=Cache_PV --fstype="lvmpv" --level=RAID1 --useexisting
volgroup srv_vg pv.Srv_PV pv.Cache_PV
logvol /home --fstype="ext4" --size=2097152 --label="home" --name=home --vgname=srv_vg --cachepvs=pv.Cache_PV --cachesize=95620 --cachemode=writeback
logvol /srv/NetBackup --fstype="ext4" --size=1048576 --label="NetBackup" --name=NetBackup --vgname=srv_vg --cachepvs=pv.Cache_PV --cachesize=47810 --cachemode=writeback
logvol /srv/TimeMachine --fstype="ext4" --size=1048576 --label="TimeMachine" --name=TimeMachine --vgname=srv_vg --cachepvs=pv.Cache_PV --cachesize=47810 --cachemode=writeback

This completely fails.  However, if I comment everything out and then go into blivet-gui, I can essentially do the same thing as far as the raid devices and regular mountpoint filesystems are concerned.  Of course, I am not able to create cached LVs in blivet-gui.
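For reference, the write-mostly layout described above is exactly what the kickstart raid command cannot express, but it could have been created ahead of time with mdadm along these lines (a sketch only; the array names match the kickstart, while the member partition numbers and counts are assumptions based on the description):

```shell
# 6-disk raid1 for /boot: both NVMe partitions plus the four SATA
# partitions, with the SATA members marked write-mostly so reads
# prefer the NVMe devices (partition numbers are assumptions).
mdadm --create /dev/md/Boot --level=1 --raid-devices=6 \
    /dev/nvme0n1p2 /dev/nvme1n1p2 \
    --write-mostly /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# 4-disk raid5 on the SATA disks, later used as an LVM PV.
mdadm --create /dev/md/Srv_PV --level=5 --raid-devices=4 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
```

In mdadm, --write-mostly applies to the member devices listed after it, which is how only the SATA halves of the mirrors get the flag.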

Comment 1 Vendula Poncova 2019-04-04 09:38:23 UTC
Please attach logs from the installation. You can find them during the installation in /tmp/*log.

Comment 2 Doug Ledford 2019-04-04 15:22:08 UTC
Ok, so I have to backtrack on this a bit.  This works if I work around two other issues:

1) The /boot/efi raid partition must use --useexisting and not --noformat (bug #1695913)
2) The logvol command won't work when trying to create a cache LV with a raid PV as the cache PV (bug #1572602 is the F29 version of this bug; I don't know of an F30 version, and I suspect the F29 bug should be moved to F30 and fixed there)

If I work around those two issues, the only remaining problems with this kickstart are:

1) A harmless error in the logs caused by the fact that we can't test the filesystem on /dev/mmcblk1p8.  This is caused by my setup: I have a persistent install partition for both F29 and F30 on this machine, F29 on mmcblk1p7 and F30 on mmcblk1p8.  A custom.cfg grub file at /boot/efi/EFI/fedora/custom.cfg has stanzas that allow me to boot into the installer from the hard disk boot menu.  When I do that, whichever installer I booted into is already mounted on /run/install/repo, so the storage subsystem can't mount and examine the filesystem on that partition, even though I have part entries in the kickstart so that the mounts come back after reinstall.  If anaconda were smart enough to know that a device it wants to scan is already mounted somewhere, it could get the info from the mounted device.

2) Anaconda incorrectly (IMO) warns that /boot/Fedora29-Installer on mmcblk1p7 and /boot/Fedora30-Installer on mmcblk1p8 should be reformatted.  This is not desirable.  As these are mounts and paths that the system doesn't typically use, there is no reason to warn that they should be reformatted.
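The already-mounted check suggested in item 1 of comment 2 could be sketched roughly as follows (a hypothetical helper, not anaconda's actual API; it parses /proc/mounts-style text to find where a device is mounted):

```python
def find_mountpoint(device, mounts_text):
    """Return the mountpoint of `device` from /proc/mounts-style text,
    or None if the device is not mounted.

    Octal escapes in mountpoints (e.g. \\040 for a space) are left
    undecoded for simplicity.
    """
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == device:
            return fields[1]
    return None

# Example against a snapshot of /proc/mounts from a running installer,
# where the boot partition is already mounted as the install repo:
sample = (
    "/dev/mmcblk1p8 /run/install/repo ext2 ro,relatime 0 0\n"
    "/dev/mapper/live-rw / ext4 rw,relatime 0 0\n"
)
print(find_mountpoint("/dev/mmcblk1p8", sample))  # → /run/install/repo
```

With a check like this, the storage scan could read filesystem info from the existing mountpoint instead of failing to mount the device a second time.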

Comment 3 Vendula Poncova 2019-04-05 10:40:54 UTC
Vojta, could you take a look at this bug, please?

