
Bug 1515860

Summary: [RFE] snapshot bricks must be healed too
Product: Red Hat Gluster Storage
Component: snapshot
Version: rhgs-3.4
Status: CLOSED UPSTREAM
Severity: medium
Priority: medium
Reporter: Raghavendra Talur <rtalur>
Assignee: Sunny Kumar <sunkumar>
QA Contact: Rahul Hinduja <rhinduja>
CC: rhs-bugs, rkavunga, storage-qa-internal
Keywords: FutureFeature, ZStream
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2018-11-19 06:37:14 UTC

Description Raghavendra Talur 2017-11-21 14:00:29 UTC
Description of problem:
When a node is lost, we can bring up a new node and use the replace-brick command to heal all the replica volumes.

However, the snapshots that lived on the devices of the lost node are gone forever. Instead, we should create new snapshot bricks on the new node and heal them as well.

How reproducible:

Steps to Reproduce:
1. Take a snapshot of a replica 3 volume.
2. Shut down one node permanently.
3. Use replace-brick to bring the lost brick up on a new node.
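The steps above can be sketched with the standard gluster CLI. This is a minimal illustration, not an exact reproduction script from the report; the host names (n1-n4), brick paths, volume name, and snapshot name are placeholders.

```shell
# Create a replica 3 volume across three nodes (placeholder hosts/paths).
gluster volume create vol1 replica 3 n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/b1
gluster volume start vol1

# 1. Take a snapshot of the replica 3 volume.
gluster snapshot create snap1 vol1 no-timestamp

# 2. Shut down one node (e.g. n3) permanently, outside of gluster.

# 3. Replace the lost brick with a brick on a new node (n4).
gluster volume replace-brick vol1 n3:/bricks/b1 n4:/bricks/b1 commit force

# The volume itself heals onto n4 ...
gluster volume heal vol1 full

# ... but the snapshot bricks that existed on n3 are not recreated on n4:
gluster snapshot status snap1
```

The last command shows the snapshot brick for the replaced node missing, which is the gap this RFE asks to close.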

Actual results:
The new node does not have the snapshot bricks.

Expected results:
The snapshot bricks should be recreated and healed on the new node.

Additional info:

Comment 4 Mohammed Rafi KC 2018-11-19 06:37:14 UTC
This will be fixed in glusterd2. We are tracking the issue upstream in [1]. There is also a similar issue in gd2 for replace brick [2].

Closing this issue, as the fix is not planned for the 3.x series.