Bug 1516206 - EC DISCARD doesn't punch hole properly
Summary: EC DISCARD doesn't punch hole properly
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Sunil Kumar Acharya
QA Contact:
Depends On:
Blocks: 1518255 1518257 1518260
Reported: 2017-11-22 09:33 UTC by Sunil Kumar Acharya
Modified: 2018-03-15 11:21 UTC

Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1518255 1518257 1518260
Last Closed: 2018-03-15 11:21:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Description Sunil Kumar Acharya 2017-11-22 09:33:25 UTC
Description of problem:
The DISCARD operation on an EC volume does not punch a hole of the requested size in some cases.

How reproducible:


Steps to Reproduce:
1. Create a 4+2 EC volume.

2. Create a file:
dd if=/dev/urandom of=/mnt/file bs=1024 count=8

3. Punch a hole:
fallocate -p -o 1500 -l 3000 /mnt/file

4. Check the hole size; it is smaller than the specified size (see the verification commands below).
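
One way to check the result is to compare the reported file size with the allocated blocks and inspect the extent map. This is a minimal sketch using the path from the steps above; exact numbers depend on the brick filesystem, and FIEMAP may not be available through a FUSE mount, in which case the backend files on the bricks can be checked instead:

# Size in bytes vs. allocated 512-byte blocks; the block count
# should drop after a successful punch.
stat -c 'size=%s bytes, allocated=%b blocks' /mnt/file
# Extent map; the punched range should appear as a gap between extents.
filefrag -v /mnt/file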
Actual results:

The hole punched is smaller than the specified size.

Expected results:

DISCARD should punch a hole of the specified size.

Comment 1 Worker Ant 2017-11-22 09:57:37 UTC
REVIEW: (cluster/ec: EC DISCARD doesn't punch hole properly) posted (#1) for review on master by Sunil Kumar Acharya

Comment 2 Worker Ant 2017-11-28 09:35:06 UTC
COMMIT: committed in master by "Sunil Kumar Acharya" <> with a commit message- cluster/ec: EC DISCARD doesn't punch hole properly

The DISCARD operation on an EC volume was punching a hole smaller
than the specified size in some cases.

EC was not handling the punch hole for the tail part of the range
in some cases. Updated the code to handle it appropriately.

BUG: 1516206
Change-Id: If3e69e417c3e5034afee04e78f5f78855e65f932
Signed-off-by: Sunil Kumar Acharya <>
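
For context, the punch range in the reproducer (offset 1500, length 3000) deliberately ends partway through an EC stripe, so the tail cannot be dropped as a whole fragment and has to be zeroed explicitly. A minimal sketch for exercising that path (offsets taken from the reproducer; stripe geometry depends on the volume configuration):

# Recreate the file from the reproducer.
dd if=/dev/urandom of=/mnt/file bs=1024 count=8
# Punch an unaligned range; the tail past the last fully covered
# stripe must be zero-filled rather than deallocated.
fallocate -p -o 1500 -l 3000 /mnt/file
# Read the punched range back; with the fix it is all zeroes.
dd if=/mnt/file bs=1 skip=1500 count=3000 2>/dev/null | od -An -tx1 | sort -u

With the fix, the read-back yields only zero bytes; before it, the tail of the punched range could still contain the original data, consistent with the reported smaller-than-expected hole.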

Comment 3 Shyamsundar 2018-03-15 11:21:35 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

