Bug 1515860 - [RFE] snapshot bricks must be healed too
Summary: [RFE] snapshot bricks must be healed too
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Sunny Kumar
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-21 14:00 UTC by Raghavendra Talur
Modified: 2018-11-19 06:37 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 06:37:14 UTC



Description Raghavendra Talur 2017-11-21 14:00:29 UTC
Description of problem:
When a node is lost, we can bring up a new node and, using replace-brick commands, heal all the replica volumes onto it.

However, the snapshots that lived on the lost node's devices are gone forever. We should instead create new snapshot bricks on the new node and heal them too.

How reproducible:
100%

Steps to Reproduce:
1. Take a snapshot of a replica 3 volume.
2. Shut down one node permanently.
3. Use replace-brick to recreate the lost brick on a new node.
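The steps above correspond roughly to the following gluster CLI sequence (a sketch only: the volume, snapshot, and brick names are placeholders, and the syntax shown is the 3.x command line; an actual run needs a live trusted storage pool, so the script skips itself when no gluster CLI is present):

```shell
#!/bin/sh
# Sketch of the reproduction steps. Requires a running Gluster trusted
# storage pool; no-ops cleanly when the gluster CLI is not installed.
command -v gluster >/dev/null 2>&1 || { echo "gluster CLI not found; skipping"; exit 0; }

VOL=testvol                    # placeholder replica 3 volume name
SNAP=snap1                     # placeholder snapshot name
OLD_BRICK=node2:/bricks/b1     # brick on the node that will be lost
NEW_BRICK=node4:/bricks/b1     # replacement brick on the new node

# 1. Take a snapshot of the replica 3 volume.
gluster snapshot create "$SNAP" "$VOL" no-timestamp

# 2. (The node hosting $OLD_BRICK is now shut down permanently.)

# 3. Replace the lost brick with one on the new node; self-heal then
#    repairs the volume's data, but not the snapshot's bricks.
gluster volume replace-brick "$VOL" "$OLD_BRICK" "$NEW_BRICK" commit force

# The snapshot still has no brick on the new node, which is this bug.
gluster snapshot info "$SNAP"
```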


Actual results:
The new node does not have the snapshot bricks.


Expected results:
The snapshot bricks should be healed onto the new node as well.


Additional info:

Comment 4 Mohammed Rafi KC 2018-11-19 06:37:14 UTC
This will be fixed in glusterd2. We are tracking the issue upstream
in [1]. There is also a similar issue in gd2 for replace-brick [2].


So I am closing this issue, as we are not planning this work for the 3.x series.

[1]: https://github.com/gluster/glusterfs/issues/358
[2]: https://github.com/gluster/glusterd2/issues/1099

