Bug 1364365 - Bricks don't come online after reboot [Brick Full]
Summary: Bricks don't come online after reboot [Brick Full]
Alias: None
Product: GlusterFS
Classification: Community
Component: posix
Version: 3.8.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
Depends On: 1336764 1360679
Reported: 2016-08-05 07:54 UTC by Ashish Pandey
Modified: 2016-08-12 09:48 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1360679
Last Closed: 2016-08-12 09:48:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Comment 1 Ashish Pandey 2016-08-05 07:56:52 UTC
Description of problem:
Rebooted brick2 and started renaming files on brick1, which is full. Brick2 did not come online after the reboot, and errors were seen in the brick logs:
"Creation of unlink directory failed"

sosreport kept at <bugid>

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create replica 3 volume and mount the volume on client using fuse.
2. Create files using:
for (( i=1; i<=50; i++ )); do
    dd if=/dev/zero of=file$i count=1000 bs=5M status=progress
done

3. After the creation is done, reboot the node hosting the second brick.
4. While it is down, start renaming the files file$i to test$i.
5. When the second brick comes back up, it fails with the errors below.

[2016-05-05 14:37:45.826772] E [MSGID: 113096] [posix.c:6443:posix_create_unlink_dir] 0-arbiter-posix: Creating directory /rhs/brick1/arbiter/.glusterfs/unlink failed [No space left on device]
[2016-05-05 14:37:45.826856] E [MSGID: 113096] [posix.c:6866:init] 0-arbiter-posix: Creation of unlink directory failed
[2016-05-05 14:37:45.826880] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-arbiter-posix: Initialization of volume 'arbiter-posix' failed, review your volfile again
[2016-05-05 14:37:45.826925] E [graph.c:322:glusterfs_graph_init] 0-arbiter-posix: initializing translator failed
[2016-05-05 14:37:45.826943] E [gr
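Steps 2 and 4 above can be dry-run locally with a sketch like the following. A scratch directory (/tmp/repro_mnt, an assumption, not the original setup) stands in for the fuse mount, and file sizes are shrunk; on the real cluster, step 2 is run until the bricks are full, and that ENOSPC state is what later prevents the rebooted brick from recreating .glusterfs/unlink.

```shell
# Hypothetical stand-in for the fuse mount point; sizes shrunk for a dry run.
MNT=${MNT:-/tmp/repro_mnt}
rm -rf "$MNT" && mkdir -p "$MNT"

# Step 2: create the files (the original used count=1000 bs=5M to fill the brick)
for (( i=1; i<=50; i++ )); do
    dd if=/dev/zero of="$MNT/file$i" count=1 bs=4K status=none
done

# Step 4: rename the files while the second brick is down
for (( i=1; i<=50; i++ )); do
    mv "$MNT/file$i" "$MNT/test$i"
done
```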

Comment 2 Vijay Bellur 2016-08-05 08:02:45 UTC
REVIEW: (posix: Do not move and recreate .glusterfs/unlink directory) posted (#1) for review on release-3.8 by Ashish Pandey

Comment 3 Vijay Bellur 2016-08-10 09:31:54 UTC
COMMIT: committed in release-3.8 by Pranith Kumar Karampuri
commit d5976e2ca90f16074216a32e267e2652acd32bd9
Author: Ashish Pandey <>
Date:   Wed Jul 27 15:49:25 2016 +0530

    posix: Do not move and recreate .glusterfs/unlink directory
    At the start of a volume, we check whether .glusterfs/unlink
    exists. If it does, we move it to landfill and recreate the
    unlink directory. If a volume is mounted and data is written to
    it until ENOSPC is hit, restarting that volume fails because the
    unlink directory cannot be recreated: the mkdir fails with
    ENOSPC, which prevents the volume from restarting.
    Instead, if the .glusterfs/unlink directory exists, do not move
    it to landfill; delete all the entries inside it.
    master -
    master -
    Change-Id: Icde3fb36012f2f01aeb119a2da042f761203c11f
    BUG: 1364365
    Signed-off-by: Ashish Pandey <>
    Smoke: Gluster Build System <>
    CentOS-regression: Gluster Build System <>
    NetBSD-regression: NetBSD Build System <>
    Reviewed-by: Pranith Kumar Karampuri <>
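The effect of the patch can be sketched in shell (paths below are hypothetical, for illustration only; the actual change is in posix_create_unlink_dir() in posix.c): instead of moving .glusterfs/unlink to landfill and recreating it with mkdir, which fails with ENOSPC on a full brick, the existing directory is emptied in place, which needs no new allocation.

```shell
# Hypothetical brick layout for illustration only.
UNLINK_DIR=${UNLINK_DIR:-/tmp/demo_brick/.glusterfs/unlink}
mkdir -p "$UNLINK_DIR"
touch "$UNLINK_DIR/stale-gfid-1" "$UNLINK_DIR/stale-gfid-2"

# Old behaviour (fails on a full brick, since mkdir needs free space):
#   mv "$UNLINK_DIR" "$BRICK/.glusterfs/landfill/" && mkdir "$UNLINK_DIR"

# New behaviour: delete the stale entries but keep the directory itself.
find "$UNLINK_DIR" -mindepth 1 -delete
```

Because the directory is never removed, brick start no longer depends on an allocation that a full filesystem cannot satisfy.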

Comment 4 Niels de Vos 2016-08-12 09:48:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

