Bug 1597662 - Stale entries of snapshots need to be removed from /var/run/gluster/snaps
Summary: Stale entries of snapshots need to be removed from /var/run/gluster/snaps
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Sunny Kumar
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1547903
 
Reported: 2018-07-03 11:42 UTC by Sunny Kumar
Modified: 2018-10-23 15:12 UTC (History)
10 users

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1547903
Environment:
Last Closed: 2018-10-23 15:12:54 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Sunny Kumar 2018-07-03 11:42:28 UTC
+++ This bug was initially created as a clone of Bug #1547903 +++

Description of problem:
=-=-=-=-=-=-=-=-=-=-=-=

When a snapshot is created, an entry for it is created under /var/run/gluster/snaps/, but when the snapshot is deleted that entry is not removed.


[root@dhcp42-222 ~]# gluster v info
 
Volume Name: disperse
Type: Distributed-Disperse
Volume ID: ae9c0e11-bb59-45ce-a4ac-4030ea54c259
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.222:/bricks/brick0/disperse-b1
Brick2: 10.70.42.193:/bricks/brick0/disperse-b2
Brick3: 10.70.42.207:/bricks/brick0/disperse-b3
Brick4: 10.70.42.32:/bricks/brick0/disperse-b4
Brick5: 10.70.42.178:/bricks/brick0/disperse-b5
Brick6: 10.70.42.141:/bricks/brick0/disperse-b6
Brick7: 10.70.42.222:/bricks/brick1/disperse-b7
Brick8: 10.70.42.193:/bricks/brick1/disperse-b8
Brick9: 10.70.42.207:/bricks/brick1/disperse-b9
Brick10: 10.70.42.32:/bricks/brick1/disperse-b10
Brick11: 10.70.42.178:/bricks/brick1/disperse-b11
Brick12: 10.70.42.141:/bricks/brick1/disperse-b12
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
disperse.eager-lock: off
disperse.optimistic-change-log: off
disperse.parallel-writes: off
disperse.shd-max-threads: 64
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.uss: enable
features.barrier: disable
cluster.brick-multiplex: enable
[root@dhcp42-222 ~]# gluster v status
Status of volume: disperse
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.222:/bricks/brick0/disperse-b1    49152     0          Y       30293
Brick 10.70.42.193:/bricks/brick0/disperse-b2    49152     0          Y       28575
Brick 10.70.42.207:/bricks/brick0/disperse-b3    49152     0          Y       16336
Brick 10.70.42.32:/bricks/brick0/disperse-b4     49152     0          Y       21967
Brick 10.70.42.178:/bricks/brick0/disperse-b5    49152     0          Y       25767
Brick 10.70.42.141:/bricks/brick0/disperse-b6    49152     0          Y       6653
Brick 10.70.42.222:/bricks/brick1/disperse-b7    49152     0          Y       30293
Brick 10.70.42.193:/bricks/brick1/disperse-b8    49152     0          Y       28575
Brick 10.70.42.207:/bricks/brick1/disperse-b9    49152     0          Y       16336
Brick 10.70.42.32:/bricks/brick1/disperse-b10    49152     0          Y       21967
Brick 10.70.42.178:/bricks/brick1/disperse-b11   49152     0          Y       25767
Brick 10.70.42.141:/bricks/brick1/disperse-b12   49152     0          Y       6653
Snapshot Daemon on localhost                49153     0          Y       11365
Self-heal Daemon on localhost               N/A       N/A        Y       14086
Quota Daemon on localhost                   N/A       N/A        Y       14095
Snapshot Daemon on 10.70.42.193             49153     0          Y       6266 
Self-heal Daemon on 10.70.42.193            N/A       N/A        Y       28566
Quota Daemon on 10.70.42.193                N/A       N/A        Y       6198 
Snapshot Daemon on 10.70.42.178             49153     0          Y       4445 
Self-heal Daemon on 10.70.42.178            N/A       N/A        Y       25758
Quota Daemon on 10.70.42.178                N/A       N/A        Y       4374 
Snapshot Daemon on 10.70.42.32              49153     0          Y       490  
Self-heal Daemon on 10.70.42.32             N/A       N/A        Y       21958
Quota Daemon on 10.70.42.32                 N/A       N/A        Y       413  
Snapshot Daemon on 10.70.42.141             49153     0          Y       17489
Self-heal Daemon on 10.70.42.141            N/A       N/A        Y       6644 
Quota Daemon on 10.70.42.141                N/A       N/A        Y       17403
Snapshot Daemon on 10.70.42.207             49153     0          Y       27743
Self-heal Daemon on 10.70.42.207            N/A       N/A        Y       16327
Quota Daemon on 10.70.42.207                N/A       N/A        Y       27675
 
Task Status of Volume disperse
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp42-222 ~]# gluster snap list
No snapshots present
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     12      12     221
[root@dhcp42-222 ~]# gluster snap create s1 disperse
snapshot create: success: Snap s1_GMT-2018.02.22-08.39.52 created successfully
[root@dhcp42-222 ~]# gluster snap list
s1_GMT-2018.02.22-08.39.52
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     13      13     248
[root@dhcp42-222 ~]# cd /var/run/gluster/snaps/s
s1_GMT-2018.02.22-08.39.52/    snap-disperse-1/               snap-disperse-3/               snap-disperse-6/               snap-disperse-9/               
snap1_GMT-2018.02.21-03.43.33/ snap-disperse-10/              snap-disperse-4/               snap-disperse-7/               
snap1_GMT-2018.02.21-03.45.11/ snap-disperse-2/               snap-disperse-5/               snap-disperse-8/               
[root@dhcp42-222 ~]# cd /var/run/gluster/snaps/s1_GMT-2018.02.22-08.39.52/
[root@dhcp42-222 s1_GMT-2018.02.22-08.39.52]# ls
638748025a6a433dbfee5e78343461e7
[root@dhcp42-222 s1_GMT-2018.02.22-08.39.52]# cd
[root@dhcp42-222 ~]# gluster snap delete s1_GMT-2018.02.22-08.39.52
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: s1_GMT-2018.02.22-08.39.52: snap removed successfully
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     13      13     248
[root@dhcp42-222 ~]# gluster snap list
No snapshots present
[root@dhcp42-222 ~]# ls /var/run/gluster/snaps/ | wc
     13      13     248
[root@dhcp42-222 ~]# 


As seen above, after the snapshot is deleted, its entry is not removed from /var/run/gluster/snaps/.
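A quick way to spot such stale entries is to compare the snapshots reported by `gluster snap list` against the directory contents of /var/run/gluster/snaps/. The helper below is only a sketch in Python (the function name `stale_snap_entries` is ours, not part of any gluster tooling); feed it the command output and the directory listing:

```python
def stale_snap_entries(snap_list_output, snaps_dir_entries):
    """Return entries under /var/run/gluster/snaps/ that do not match
    any snapshot currently reported by `gluster snap list`."""
    # `gluster snap list` prints one snapshot name per line, or the
    # literal text "No snapshots present" when there are none.
    active = {
        line.strip()
        for line in snap_list_output.splitlines()
        if line.strip() and line.strip() != "No snapshots present"
    }
    return sorted(e for e in snaps_dir_entries if e not in active)
```

In the transcript above, `gluster snap list` reports "No snapshots present" while the directory still holds 13 entries, so all 13 would be reported as stale.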



Version-Release number of selected component (if applicable):
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



How reproducible:
=-=-=-=-=-=-=-=-=

Always


Steps to Reproduce:
=-=-=-=-=-=-=-=-=-=

1. Create a snapshot
2. Verify /var/run/gluster/snaps/ for the newly created snapshot entry
3. Delete the snapshot
4. Verify /var/run/gluster/snaps/ for the deletion of the newly created snapshot entry


Actual results:
=-=-=-=-=-=-=-=

Entries are not deleted from /var/run/gluster/snaps/ after snapshot delete.


Expected results:
=-=-=-=-=-=-=-=-=

The entry for a deleted snapshot must no longer be present under /var/run/gluster/snaps/.

Comment 1 Worker Ant 2018-07-03 11:49:02 UTC
REVIEW: https://review.gluster.org/20454 (snapshot : remove stale entry) posted (#1) for review on master by Sunny Kumar

Comment 2 Worker Ant 2018-07-12 04:36:29 UTC
COMMIT: https://review.gluster.org/20454 committed in master by "Amar Tumballi" <amarts@redhat.com> with a commit message- snapshot : remove stale entry

        During snap delete after removing brick-path we should remove
        snap-path too i.e. /var/run/gluster/snaps/<snap-name>.

        During snap deactivate also we should remove snap-path.

Change-Id: Ib80b5d8844d6479d31beafa732e5671b0322248b
fixes: bz#1597662
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>

Comment 4 Shyamsundar 2018-10-23 15:12:54 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

