Bug 1355639 - [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
Summary: [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
Alias: None
Product: GlusterFS
Classification: Community
Component: bitrot
Version: 3.8.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
Depends On: 1337444 1352871 1355635
Reported: 2016-07-12 07:00 UTC by Kotresh HR
Modified: 2016-09-20 05:13 UTC
CC: 4 users

Fixed In Version: glusterfs-3.8.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1355635
Last Closed: 2016-08-12 09:47:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Description Kotresh HR 2016-07-12 07:00:52 UTC
+++ This bug was initially created as a clone of Bug #1355635 +++

+++ This bug was initially created as a clone of Bug #1352871 +++

+++ This bug was initially created as a clone of Bug #1337444 +++

Description of problem:

In scenarios where the scrubber takes a long time to finish scrubbing files, the 'scrub status' output issued while scrubbing is in progress displays the previous run's information for 'last completed scrub time' and 'duration of last scrub'. This gives the user the incorrect impression that scrubbing has completed. Ideally, the field 'State of scrub' should be set to 'In progress' and the two fields mentioned above set to '-'. The other two fields (files scrubbed, files skipped) correctly show the current run's details.
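
A hypothetical illustration of the inconsistency (field names are from this report; the values are made up, and the exact layout may differ from the real 'scrub status' output):

    State of scrub:            Active            <-- current run
    Number of scrubbed files:  2                 <-- current run
    Number of skipped files:   0                 <-- current run
    Last completed scrub time: 2016-05-19 10:02  <-- previous run
    Duration of last scrub:    00:12:40          <-- previous run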

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

1. Set up a 4-node cluster with a distributed-replicate volume and sharding enabled. Set the scrub frequency to 'hourly'.
2. Create 100 files of 1 MB each and wait for the scrubber to finish its run.
3. Check the scrub status output and verify the validity of the fields it shows.
4. Create 5 files of 4 GB each and wait for the next scrub run to start.
5. While scrubbing is in progress (as seen in scrub.log), issue 'gluster volume bitrot <volname> scrub status'.
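
The steps above can be sketched with the gluster CLI. This is a sketch under assumptions: the volume name (testvol), brick paths, and mount point are hypothetical, and the file-creation steps are assumed to run on a client mount.

```shell
# Assumed 4-node trusted storage pool (node1..node4); names are illustrative.
gluster volume create testvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1
gluster volume set testvol features.shard on
gluster volume start testvol
gluster volume bitrot testvol enable
gluster volume bitrot testvol scrub-frequency hourly

# On a client mount, create the small files (step 2):
for i in $(seq 1 100); do
    dd if=/dev/urandom of=/mnt/testvol/small.$i bs=1M count=1
done

# After the first scrub completes, create the large files (step 4),
# then while scrub.log shows an active run (step 5):
gluster volume bitrot testvol scrub status
```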

Actual results:
Scrub status shows four fields for every node. Two fields are updated as per the current run, and two as per the previous run.

Expected results:
Either all four fields must reflect the current run, or all four must reflect the previous run. Preferably, the user should be informed that scrubbing is in progress, with the fields updated accordingly.

--- Additional comment from Vijay Bellur on 2016-07-12 02:57:06 EDT ---

REVIEW: (feature/bitrot: Show whether scrub is in progress/idle) posted (#1) for review on release-3.7 by Kotresh HR

Comment 1 Vijay Bellur 2016-07-12 07:02:23 UTC
REVIEW: (feature/bitrot: Show whether scrub is in progress/idle) posted (#1) for review on release-3.8 by Kotresh HR

Comment 2 Vijay Bellur 2016-07-18 10:53:54 UTC
COMMIT: committed in release-3.8 by Jeff Darcy
commit 24310c41a6ce7a218dca8ca8545ba4d82834497f
Author: Kotresh HR <>
Date:   Mon Jul 4 17:25:57 2016 +0530

    feature/bitrot: Show whether scrub is in progress/idle
    Backport of
    Bitrot scrub status shows whether the scrub is paused
    or active. It doesn't show whether the scrubber is
    actually scrubbing or waiting in the timer wheel
    for the next schedule. This patch shows this status
    with "In Progress" and "Idle" respectively.
    Change-Id: I995d8553d1ff166503ae1e7b46282fc3ba961f0b
    BUG: 1355639
    Signed-off-by: Kotresh HR <>
    (cherry picked from commit f4757d256e3e00132ef204c01ed61f78f705ad6b)
    Smoke: Gluster Build System <>
    NetBSD-regression: NetBSD Build System <>
    CentOS-regression: Gluster Build System <>
    Reviewed-by: Jeff Darcy <>

Comment 3 Niels de Vos 2016-08-12 09:47:17 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

