Bug 1354131 - systemd reports "device appeared twice with different sysfs paths" when using btrfs RAID
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: systemd
Version: 24
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: systemd-maint
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-09 22:44 UTC by Kristian McColm
Modified: 2017-08-08 15:29 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-08 15:29:43 UTC



Description Kristian McColm 2016-07-09 22:44:10 UTC
Description of problem:
When using btrfs with RAID, which causes the same UUID to be assigned to multiple physical devices, the following is reported by systemd:

Jul  9 16:59:19 gw systemd: dev-disk-by\x2dlabel-btrfs_gw_data.device: Dev dev-disk-by\x2dlabel-btrfs_gw_data.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sdb and /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdc
Jul  9 16:59:19 gw systemd: dev-disk-by\x2dlabel-btrfs_gw_data.device: Dev dev-disk-by\x2dlabel-btrfs_gw_data.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sdb and /sys/devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdc

Additionally, the block devices are removed from the 'by-uuid' tree under /dev/disk.

Version-Release number of selected component (if applicable):

systemd-229-8.fc24.x86_64
systemd-libs-229-8.fc24.x86_64
systemd-compat-libs-229-8.fc24.x86_64
systemd-udev-229-8.fc24.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a btrfs RAID1 device with the command:

mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

2. Run:

btrfs device scan

3. Check the system log for the error message and /dev/disk/by-uuid for missing block device symlinks.
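The check in step 3 can be done with a few standard commands; a sketch, assuming the device names /dev/sdb and /dev/sdc from step 1:

```shell
# Look for systemd's duplicate-device warning in the current boot's log
journalctl -b | grep "appeared twice"
# The by-uuid symlink for the btrfs filesystem may be missing here
ls -l /dev/disk/by-uuid/
# Both members of a btrfs RAID report the same filesystem UUID
blkid /dev/sdb /dev/sdc
```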

Actual results:
The log message above is reported, and the block devices are not symlinked under 'by-uuid'.

Expected results:
systemd should permit duplicate UUIDs and allow mapping a UUID to more than one block device.

Additional info:

Comment 1 Dominic Robinson 2016-07-25 10:29:41 UTC
I can confirm this behaviour. There is also an additional message from systemd about the devices having the same partition label, another side effect of btrfs RAID.

Comment 2 Dominic Robinson 2016-07-26 08:21:31 UTC
I had initially thought this was a benign issue, but as I've further configured my server I've noticed one or two additional side effects which could be considered fairly severe.

1)  On occasion this causes the boot process to be interrupted and drop to the emergency shell. In the syslog there are messages along the lines of "Device dev-disk-by\x2duuid timed out."

2)  Because of the duplicate UUIDs and systemd choking on them, fstab mounts are delayed somewhat, which causes dependent services to start up mid-mount. An example of this is tuned, where filesystem mounts complete mid start-up. This causes tuned to crash with the message "RuntimeError: Set changed size during iteration."

    i). To treat the symptom here (not the cause) you can manually add a RequiresMountsFor= directive into individual unit files.
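The RequiresMountsFor= workaround in (i) can be applied as a drop-in rather than by editing the packaged unit file. A minimal sketch, using tuned and the /some-partition mount point from the steps below; the drop-in filename is illustrative:

```ini
# /etc/systemd/system/tuned.service.d/order-after-mounts.conf (illustrative path)
[Unit]
RequiresMountsFor=/some-partition
```

After creating the drop-in, run systemctl daemon-reload so the directive takes effect.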

Point 2 is 100% reproducible; steps to reproduce:

1) Install Fedora 24 minimal across 3 disks, selecting btrfs partitioning: /home, /boot, and / on disk 1; /some-partition on disks 2 and 3, selecting RAID 1.

2) dnf -y install tuned && systemctl start tuned && systemctl enable tuned

3) tuned-adm profile throughput-performance && reboot

4) systemctl status tuned (yields "RuntimeError: Set changed size during iteration.")
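The "RuntimeError: Set changed size during iteration" in step 4 is CPython's generic error for mutating a set while iterating over it, consistent with tuned walking a collection of mounts that changes during its start-up. A minimal illustration of the error itself (unrelated to tuned's actual code):

```python
# Mutating a set during iteration raises RuntimeError in CPython,
# matching the message in the tuned traceback above.
s = {1, 2, 3}
try:
    for item in s:
        s.add(item + 10)  # the set grows mid-iteration
except RuntimeError as e:
    print(e)  # Set changed size during iteration
```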

Comment 3 Davide Repetto 2016-08-19 15:41:02 UTC
I have the same problem. I also have RAID, but with Linux md:

#####
[root@dave ~]# mdadm --detail --scan
ARRAY /dev/md/md2 metadata=1.2 name=dave.idp.it:md2 UUID=57e640dc:3e40c7c1:781592ea:9a568a9e
ARRAY /dev/md/md1 metadata=1.2 name=dave.idp.it:md1 UUID=4c917409:f9c90143:c333d638:da4f480b
ARRAY /dev/md/md3 metadata=1.2 name=dave.idp.it:md3 UUID=7fa4b4c8:6e5339a8:b02f26fe:6fce4233
ARRAY /dev/md/md0 metadata=1.0 name=dave.idp.it:md0 UUID=f11e0175:77342542:c9cbc839:2239f9c9

#####
[root@dave ~]# cat /proc/mdstat 
Personalities : [raid10] [raid1]
md0 : active raid1 sdc2[2] sda2[3]
      528064 blocks super 1.0 [2/2] [UU]
      
md3 : active raid10 sdc4[2] sda4[3]
      4190208 blocks super 1.2 256K chunks 2 far-copies [2/2] [UU]
      
md1 : active raid10 sdc3[2] sda3[3]
      33526784 blocks super 1.2 256K chunks 2 far-copies [2/2] [UU]
      
md2 : active raid10 sdc5[2] sda5[0]
      160733184 blocks super 1.2 256K chunks 2 far-copies [2/2] [UU]
      bitmap: 1/2 pages [4KB], 65536KB chunk

#####
[root@dave ~]# journalctl | grep "appeared twice"
ago 19 16:30:16 dave.idp.it systemd[1]: dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device: Dev dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:0/12:0:0:0/block/sr0 and /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:1/12:0:1:0/block/sdc
ago 19 16:30:22 dave.idp.it systemd[1]: dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device: Dev dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:0/12:0:0:0/block/sr0 and /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:1/12:0:1:0/block/sdc
ago 19 16:34:16 dave.idp.it systemd[1]: dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device: Dev dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:0/12:0:0:0/block/sr0 and /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:1/12:0:1:0/block/sdc
ago 19 16:34:23 dave.idp.it systemd[1]: dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device: Dev dev-disk-by\x2dpath-pci\x2d0000:05:05.0\x2data\x2d3.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:0/12:0:0:0/block/sr0 and /sys/devices/pci0000:00/0000:00:14.4/0000:05:05.0/ata13/host12/target12:0:1/12:0:1:0/block/sdc

#####
[root@dave ~]# blkid
/dev/sda1: PARTUUID="d072df5e-d65d-4bdf-84cb-8a3d10067027"
/dev/sda2: UUID="f11e0175-7734-2542-c9cb-c8392239f9c9" UUID_SUB="772bf440-645b-a8ca-1ee0-98c37fff4e6b" LABEL="dave.idp.it:md0" TYPE="linux_raid_member" PARTUUID="1546d220-877b-4b34-91ac-948e02a8e87a"
/dev/sda3: UUID="4c917409-f9c9-0143-c333-d638da4f480b" UUID_SUB="f85f038f-98bd-cfc3-96de-d97924de9811" LABEL="dave.idp.it:md1" TYPE="linux_raid_member" PARTUUID="32d430fc-7d23-41d9-812e-a3e4e05fe92c"
/dev/sda4: UUID="7fa4b4c8-6e53-39a8-b02f-26fe6fce4233" UUID_SUB="464fd818-9cce-283d-ba86-fbbbeb88e956" LABEL="dave.idp.it:md3" TYPE="linux_raid_member" PARTUUID="29fa94be-4b67-4d66-864d-48df8a35d5f8"
/dev/sda5: UUID="57e640dc-3e40-c7c1-7815-92ea9a568a9e" UUID_SUB="31d94494-d197-55a4-9b3b-aa9bf55591c0" LABEL="dave.idp.it:md2" TYPE="linux_raid_member" PARTUUID="b610b624-062c-4fb9-9ecf-07d1a732297a"
/dev/sdb1: PARTLABEL="BIOS boot partition" PARTUUID="cba8897b-4dbd-4aa9-8749-e2bd5c368a5c"
/dev/sdb2: LABEL="davide-swap-hde" UUID="817cf39b-743f-449f-bc38-104d6a5b632e" TYPE="swap" PARTUUID="b1f44955-89c0-4dd9-801f-fa07c4460d07"
/dev/sdb3: LABEL="davide-storage" UUID="968df47b-39ee-47d3-8c90-0a4b1bfc6b2a" TYPE="ext4" PARTUUID="edafe8ce-112d-40b7-9d1d-2a8e410efb0b"
/dev/sdb4: LABEL="backup-bootfs" UUID="d666cac2-bc00-414e-9c28-b8b77481cd75" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="8ea5622d-d5e7-49a7-aa88-60ab614d4c97"
/dev/sdb5: LABEL="backup-rootfs" UUID="550e211c-7beb-4ef1-95fd-0f9f5d773889" TYPE="ext4" PARTUUID="5c936c1c-72ac-9d4d-896d-48acd3def33e"
/dev/md2: LABEL="davide-home" UUID="ee43009c-75f1-49f0-83e9-40bd79c2c973" TYPE="ext4"
/dev/md1: LABEL="davide-root" UUID="3dcfef20-dc3c-4172-b661-1a999f26e61c" UUID_SUB="1d4c15e7-142a-48ba-8277-d620698f731d" TYPE="btrfs"
/dev/md3: LABEL="davide-swap" UUID="92fcf421-9183-47c9-bd40-90f8e3c3c400" TYPE="swap"
/dev/md0: LABEL="davide-boot" UUID="49df9fc5-1439-48a2-9775-8a14dc0f1cf7" TYPE="ext4"
/dev/sdc1: PARTUUID="cf35cef9-586c-40eb-b38a-811a10f6460b"
/dev/sdc2: UUID="f11e0175-7734-2542-c9cb-c8392239f9c9" UUID_SUB="6f67c4de-2aeb-78a4-9c04-c236f9f13183" LABEL="dave.idp.it:md0" TYPE="linux_raid_member" PARTUUID="dc310fd0-ef19-4e78-a017-a94ee65cec10"
/dev/sdc3: UUID="4c917409-f9c9-0143-c333-d638da4f480b" UUID_SUB="c10c4f33-1b96-c3d6-2b40-396daebc99cf" LABEL="dave.idp.it:md1" TYPE="linux_raid_member" PARTUUID="12b5e301-e614-495f-a1db-d35d352a0667"
/dev/sdc4: UUID="7fa4b4c8-6e53-39a8-b02f-26fe6fce4233" UUID_SUB="ce3a9056-0300-33c9-a7ad-3ad3401729ee" LABEL="dave.idp.it:md3" TYPE="linux_raid_member" PARTUUID="aa2bb06a-34a1-450e-9529-958f59b397fe"
/dev/sdc5: UUID="57e640dc-3e40-c7c1-7815-92ea9a568a9e" UUID_SUB="33a6eb1f-b183-eb4d-9c1b-7a24df32fbb9" LABEL="dave.idp.it:md2" TYPE="linux_raid_member" PARTUUID="77d13615-b5d8-423f-a857-72145691338c"

Comment 4 Dominic Robinson 2016-08-27 10:04:13 UTC
Can someone please acknowledge this issue? On average it prevents 1 in 6 boots from completing due to systemd timeouts.

Comment 5 Fedora End Of Life 2017-07-25 21:42:38 UTC
This message is a reminder that Fedora 24 is nearing its end of life.
Approximately 2 (two) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 24. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '24'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 24 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 6 Fedora End Of Life 2017-08-08 15:29:43 UTC
Fedora 24 changed to end-of-life (EOL) status on 2017-08-08. Fedora 24 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

