Bug 234745 - mdadm fails to start raid array
Summary: mdadm fails to start raid array
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: rawhide
Hardware: All
OS: Linux
Target Milestone: ---
Assignee: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
Depends On:
Reported: 2007-04-01 12:29 UTC by Bart Vanbrabant
Modified: 2007-11-30 22:12 UTC
CC: 3 users

Fixed In Version: Current
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2007-07-16 18:35:51 UTC

Attachments: none

Description Bart Vanbrabant 2007-04-01 12:29:35 UTC
Description of problem:
I've reinstalled an FC6 machine with the latest rawhide version. The initscripts
failed to start my RAID array. I found this denial in the audit log during the
boot process:

type=AVC msg=audit(1175417139.101:171): avc:  denied  { read } for  pid=2230
comm="mdadm" name="md0" dev=tmpfs ino=158224
scontext=system_u:system_r:mdadm_t:s0 tcontext=root:object_r:device_t:s0
type=SYSCALL msg=audit(1175417139.101:171): arch=40000003 syscall=5 success=no
exit=-13 a0=97c5190 a1=0 a2=0 a3=46 items=0 ppid=1 pid=2230 auid=4294967295
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) comm="mdadm"
exe="/sbin/mdadm" subj=system_u:system_r:mdadm_t:s0 key=(null)

When I reboot with SELinux in permissive mode, my RAID array is started correctly.
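Audit records like the two above use a flat key=value layout, so the interesting fields can be pulled out mechanically. A minimal sketch (parse_audit_record is a hypothetical helper, not part of any Fedora tooling):

```python
import re

def parse_audit_record(line):
    """Pull key=value fields out of one audit log line (quotes stripped)."""
    return {k: v.strip('"') for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}

avc = ('type=AVC msg=audit(1175417139.101:171): avc:  denied  { read } '
       'for  pid=2230 comm="mdadm" name="md0" dev=tmpfs ino=158224 '
       'scontext=system_u:system_r:mdadm_t:s0 '
       'tcontext=root:object_r:device_t:s0')

fields = parse_audit_record(avc)
# mdadm, running in the mdadm_t domain, was denied read access to a
# node labeled device_t rather than the expected fixed_disk_device_t.
print(fields["comm"], fields["scontext"], "->", fields["tcontext"])
```

The source context (scontext) names the domain of the process; the target context (tcontext) names the label of the object it touched, which is what the rest of this bug is about.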

Version-Release number of selected component (if applicable):

Do you need any more information?

Comment 1 Daniel Walsh 2007-04-02 17:26:43 UTC
This looks like a labeling problem.  /dev/md0 should be labeled
system_u:object_r:fixed_disk_device_t, but on your machine it is labeled
system_u:object_r:device_t.  Is this device being created by udev?  If not, then
its labeling needs to be fixed before mdadm uses it.  restorecon /dev/md0 will
fix its labeling.
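The label Dan is comparing lives in the node's security.selinux extended attribute, so it can also be inspected programmatically rather than with ls -Z. A minimal sketch (get_selinux_context is a hypothetical helper; it returns None where the path, the xattr, or SELinux itself is unavailable):

```python
import os

def get_selinux_context(path):
    """Read the SELinux label from the security.selinux xattr, or None."""
    try:
        raw = os.getxattr(path, "security.selinux")
    except OSError:
        return None  # missing path, no label, or xattrs unsupported
    return raw.rstrip(b"\x00").decode()

# On the reporter's machine this would show an object_r:device_t label on
# /dev/md0 instead of the fixed_disk_device_t the policy expects.
print(get_selinux_context("/dev/md0"))
```

restorecon consults the file-contexts database in the loaded policy and rewrites this attribute to the value the policy says the path should carry.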

Comment 2 Bart Vanbrabant 2007-04-02 18:19:50 UTC
This was an FC6 system which used to work fine with that RAID array, on which
/home was mounted. The hard disk with / on it was dying, so I installed FC7t3 on
a new disk, but I didn't let anaconda mount the RAID array as /home. When I first
booted the fresh install, the RAID array wasn't assembled and the AVC listed
above was in the logs. After putting SELinux in permissive mode, the RAID array
got assembled correctly and the LVM volumes on it were activated.

Comment 3 Daniel Walsh 2007-04-02 19:47:51 UTC
Jeremy, do you have any idea why the labeling would be wrong on /dev/md0?

Comment 4 Jeremy Katz 2007-04-03 19:38:40 UTC
Possibly mdadm is creating/recreating the node and not doing so with the right
context.

RAID is annoying in that you have to create the device node to start the array
before the kernel has the block device that would let udev do the device node
creation.  We did recently move to a new upstream bugfix version of mdadm.

Comment 5 Daniel Walsh 2007-07-13 15:00:09 UTC
mdadm needs to either use udev to create its nodes or add SELinux awareness to
create them with the correct context.

Comment 6 Jeremy Katz 2007-07-16 18:05:37 UTC
(In reply to comment #5)
> mdadm needs to either use udev to create its nodes

This can't be done -- as above, the node has to be created and then you do an
ioctl on the node to actually create the device.  After that, udev could kick
in, but not before :-)

> or add SELinux awareness to
> create them with the correct context.

This is probably the correct (though slightly distasteful) answer, much like
what we had to do for device-mapper.
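The "SELinux awareness" option the thread settles on is the pattern device-mapper adopted: ask the policy what label the path should carry (matchpathcon) and set the file-creation context (setfscreatecon) before making the node. A rough Python sketch of the idea via ctypes (mknod_with_matching_context is hypothetical; mdadm itself does this in C, and the sketch falls back to a plain mknod when libselinux is absent):

```python
import ctypes
import ctypes.util
import os

def mknod_with_matching_context(path, mode, device):
    """Create a device node carrying the label the SELinux policy expects."""
    libname = ctypes.util.find_library("selinux")
    if libname is None:
        os.mknod(path, mode, device)  # no libselinux: plain mknod
        return
    sel = ctypes.CDLL(libname)
    con = ctypes.c_char_p()
    # matchpathcon() looks up the label the policy assigns to this path.
    if sel.matchpathcon(path.encode(), mode, ctypes.byref(con)) == 0:
        sel.setfscreatecon(con)       # label applies to the next file created
    try:
        os.mknod(path, mode, device)
    finally:
        sel.setfscreatecon(None)      # restore the default creation context
```

With this, the node comes into existence already labeled fixed_disk_device_t, and the mknod-then-ioctl ordering from comment #6 no longer races against a later relabel.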

Comment 7 Daniel Walsh 2007-07-16 18:35:51 UTC
This one looks like it was fixed a long time ago.  mdadm should be creating
devices as fixed_disk_device_t now.  Looks correct in RHEL5/FC6 and beyond.
