Bug 1364775 - [RHEL 7.7] raid4 write journal raid create failed on ppc64 and S390x
Summary: [RHEL 7.7] raid4 write journal raid create failed on ppc64 and S390x
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: mdadm
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 7.7
Assignee: Nigel Croxon
QA Contact: guazhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-07 09:35 UTC by Zhang Yi
Modified: 2018-09-10 17:26 UTC
CC List: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-10 17:26:01 UTC


Attachments

Comment 3 Nigel Croxon 2017-06-21 16:42:19 UTC
Since write journal is disabled in RHEL 7.3 and 7.4, I want to close this BZ.
It has been open almost a year and will not be addressed until the next release, 7.5.

-Nigel

Comment 5 guazhang@redhat.com 2018-05-24 03:08:04 UTC
Hello

please check the log in https://bugzilla.redhat.com/show_bug.cgi?id=1358592#c16

Comment 6 Nigel Croxon 2018-07-09 15:47:23 UTC

RHEL7.6 MD test kernel.

http://file.bos.redhat.com/ncroxon/MD-MDADM/kernel-3.10.0-916.el7mdTst76v3.x86_64.rpm

Comment 7 Nigel Croxon 2018-07-17 11:33:37 UTC
Hello Guangwu,

Your comment 5 does not map to this problem.
This is a --write-journal issue.

Comment 8 Nigel Croxon 2018-07-17 11:57:26 UTC
# mdadm -CR /dev/md0 -l4 -n 7 /dev/loop[1-7] --write-journal /dev/loop0 -v
mdadm: chunk size defaults to 512K
mdadm: /dev/loop1 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop2 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop3 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop4 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop5 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop6 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop7 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: /dev/loop0 appears to be part of a raid array:
       level=raid1 devices=8 ctime=Tue Jul 17 07:20:29 2018
mdadm: size set to 247808K
mdadm: creation continuing despite oddities due to --run
mdadm: Defaulting to version 1.2 metadata
[57507.479536] async_tx: api initialized (async)
[57507.510598] xor: automatically using best checksumming function:
[57507.547465]    avx       : 16332.000 MB/sec
[57507.608471] raid6: sse2x1   gen()  6238 MB/s
[57507.644464] raid6: sse2x2   gen()  7664 MB/s
[57507.680466] raid6: sse2x4   gen()  8839 MB/s
[57507.699735] raid6: using algorithm sse2x4 gen() (8839 MB/s)
[57507.724698] raid6: using ssse3x2 recovery algorithm
[57507.777983] md/raid:md0: device loop6 operational as raid disk 5
[57507.804961] md/raid:md0: device loop5 operational as raid disk 4
[57507.831706] md/raid:md0: device loop4 operational as raid disk 3
[57507.858615] md/raid:md0: device loop3 operational as raid disk 2
[57507.885628] md/raid:md0: device loop2 operational as raid disk 1
[57507.913082] md/raid:md0: device loop1 operational as raid disk 0
[57507.946729] md/raid:md0: raid level 4 active with 6 out of 7 devices, algorithm 0
[57507.980358] md/raid456: discard support disabled due to uncertainty.
[57508.008845] Set raid456.devices_handle_discard_safely=Y to override.
[57508.038298] md0: detected capacity change from 0 to 1522532352
[57508.064691] md: recovery of RAID array md0
mdadm: array /dev/md0 started.


# [57509.400578] md: md0: recovery done.

# cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] 
md0 : active raid4 loop7[8] loop0[7](J) loop6[5] loop5[4] loop4[3] loop3[2] loop2[1] loop1[0]
      1486848 blocks super 1.2 level 4, 512k chunk, algorithm 0 [7/7] [UUUUUUU]
      
unused devices: <none>
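
(For anyone reproducing this setup from scratch, a minimal sketch of the loop device preparation is below. The backing file path and the ~256 MiB size are assumptions, chosen to roughly match the "size set to 247808K" line above; adjust as needed.)

# create 8 zero-filled backing files and attach them as /dev/loop0 .. /dev/loop7
# (loop1-7 become the raid4 data legs, loop0 the --write-journal device)
for i in $(seq 0 7); do
    dd if=/dev/zero of=/var/tmp/wj_test_$i.img bs=1M count=256
    losetup /dev/loop$i /var/tmp/wj_test_$i.img
done
# then create the array with the mdadm -CR command shown at the top of this comment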

Comment 9 Nigel Croxon 2018-07-17 12:08:52 UTC
yizhan,

Can you retest on PPC64 and S390 ?

http://download-node-02.eng.bos.redhat.com/nightly/RHEL-7.6-20180716.n.0/compose/Server/

Comment 10 guazhang@redhat.com 2018-07-18 02:18:35 UTC
(In reply to Nigel Croxon from comment #9)
> yizhan,
> 
> Can you retest on PPC64 and S390 ?
> 
> http://download-node-02.eng.bos.redhat.com/nightly/RHEL-7.6-20180716.n.0/
> compose/Server/

Hi

Which mdadm package should be used? The default one is mdadm-4.0-13, which doesn't support --write-journal.
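
(For reference, one way to check which mdadm build is installed and whether it documents the option; a minimal sketch, nothing here is specific to this bug:)

# rpm -q mdadm
# mdadm --version
# man mdadm | grep -i -- '--write-journal'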

Comment 12 guazhang@redhat.com 2018-07-19 01:39:20 UTC
Hello
I have tested the journal function on ppc64, but the md state stays at "clean, degraded" and does not reach a finished state within 12 hours on the server.
Please check whether this state is expected.

[root@ibm-p8-kvm-04-guest-10 bitmap]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul 18 06:13:23 2018
        Raid Level : raid4
        Array Size : 18413568 (17.56 GiB 18.86 GB)
     Used Dev Size : 3068928 (2.93 GiB 3.14 GB)
      Raid Devices : 7
     Total Devices : 8
       Persistence : Superblock is persistent

       Update Time : Wed Jul 18 06:13:23 2018
             State : clean, degraded 
    Active Devices : 6
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 2

        Chunk Size : 512K

Consistency Policy : journal

              Name : 0
              UUID : dc2e26e0:785501ec:d29719af:4f09238c
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       7        1        0      active sync   /dev/loop1
       1       7        2        1      active sync   /dev/loop2
       2       7        3        2      active sync   /dev/loop3
       3       7        4        3      active sync   /dev/loop4
       4       7        5        4      active sync   /dev/loop5
       5       7        6        5      active sync   /dev/loop6
       7       7        0        6      spare rebuilding   /dev/loop0

       8       7        7        -      spare   /dev/loop7
[root@ibm-p8-kvm-04-guest-10 bitmap]#
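
(To see whether the spare rebuild in the output above is actually making progress, one can poll the md sysfs files; a minimal sketch, with /dev/md0 taken from the output above and an arbitrary refresh interval:)

# cat /sys/block/md0/md/sync_action
# cat /sys/block/md0/md/sync_completed
# watch -n 5 'cat /proc/mdstat; cat /sys/block/md0/md/sync_action /sys/block/md0/md/sync_completed'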

Comment 14 Nigel Croxon 2018-09-10 17:26:42 UTC
--write-journal is still not supported.

Closing.

