Bug 1059771 - Calculating lvm thin pool snapshot space requirements
Summary: Calculating lvm thin pool snapshot space requirements
Keywords:
Status: CLOSED DUPLICATE of bug 1119839
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On: 1119839
Blocks: 1044717
 
Reported: 2014-01-30 15:49 UTC by Dave Sullivan
Modified: 2018-12-05 17:06 UTC
17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 948001
Environment:
Last Closed: 2014-10-29 23:34:36 UTC



Comment 3 Marian Csontos 2014-02-04 16:20:31 UTC
The shrink_slab issue is bug 1056647, fixed by the kernel in the latest nightly.

Comment 14 Jonathan Earl Brassow 2014-07-15 01:07:45 UTC
Calculating the amount of space required for a snapshot depends on how much the origin (or the snapshot!) will change.  If you plan to overwrite every block after every snapshot, then the amount of space you need is (1 + #snapshots)*size.  If nothing is ever written, then very little space is consumed for each snapshot (although there would be no reason for a snapshot then).
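The worst-case arithmetic above can be sketched as a quick calculation (a hypothetical helper for illustration, not part of lvm2):

```python
def worst_case_pool_size(origin_size_gib, num_snapshots):
    """Worst-case thin-pool space needed if every block of the origin is
    overwritten after every snapshot: (1 + #snapshots) * origin size.
    The best case (nothing ever written) consumes close to just the
    origin's size, since snapshots share all unchanged blocks."""
    return (1 + num_snapshots) * origin_size_gib

# A 100 GiB origin with 3 snapshots, fully rewritten after each one:
print(worst_case_pool_size(100, 3))  # 400
```

Real usage falls somewhere between these two extremes, which is exactly why the required space cannot be computed up front.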

We cannot deliver what you are asking for.  We can only do our best to educate users on how to proceed and handle failures as they come WRT this problem.  There are two ways to handle out-of-space conditions: wait for more space to be added or spit-out errors.  The user will have to decide the type of behavior they want for when these things happen.  (Someone else needs to weigh-in if both of these options are available when space is exhausted for the thin data LV.)

The behavior described in lvmthin.7 under the following sections is:
* Data space exhaustion (aka thin data LV)
  Writes block until more space is provided.
* Metadata space exhaustion (aka thin metadata LV)
  Errors are returned.

If you want a solution in which errors are returned when data space is exhausted, that will need to be requested.
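To see how close a pool is to either limit before it blocks or errors, the standard lvs reporting fields can be checked (the VG/LV names here are placeholders; this is an illustrative command, not output from this bug):

```shell
# Report data and metadata usage for a thin pool.
# "vg0/pool0" is an example name; substitute your own VG/pool.
lvs -o lv_name,data_percent,metadata_percent vg0/pool0
```

Monitoring these two percentages is the only way to act before hitting the exhaustion behaviors listed above.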

Comment 16 Dusty Mabe 2014-07-15 03:17:33 UTC
(In reply to Jonathan Earl Brassow from comment #14)
> Calculating the amount of space required for a snapshot depends on how much
> the origin (or the snapshot!) will change.
<snip>  

Disclaimer: This may be oversimplifying things so please forgive me if I am wrong.  

What I was saying in comment #11 is that, for a read-only snapshot, the size of the snapshot (its allocation % within the thin pool) does not change. You are right that the pool's overall usage will grow as files are created/deleted in the origin, but the blocks used by the read-only snapshot stay the same (i.e. it is the origin growing, not the snapshot). Assuming the pool had enough space for the origin before you took the snapshot, taking a read-only snapshot and increasing the size of the pool by the size of the snapshot (at the time of creation) should leave the origin with the same amount of free space to grow into as it had before the snapshot was taken.
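The manual workaround described here can be sketched as follows (a hedged sketch with placeholder VG/LV names and sizes, not a recommendation from the lvm2 team):

```shell
# Placeholder names: VG "vg0", thin pool "pool0", 100G thin origin "origin0".
# Grow the pool by the origin's current size, then snapshot read-only.
lvextend -L +100G vg0/pool0
lvcreate -s -n origin0-snap vg0/origin0       # thin snapshot of a thin origin
lvchange --permission r vg0/origin0-snap      # mark the snapshot read-only
```

This reserves enough room in the pool for the origin to be fully rewritten without the snapshot's blocks being starved of space.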

I think what Dan was requesting in comment #12 was a way to automatically increase the pool by the size of the origin at the time the snapshot is created. I'm not saying this should or shouldn't be done, as there are quite a few assumptions you have to make: 1) the snapshot will be read-only, 2) there is enough space within the VG to extend the pool, etc. It would also be pretty simple for the user to extend the pool themselves immediately before taking the snapshot.
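For what it's worth, lvm.conf already supports automatic pool extension when dmeventd monitoring is enabled; it triggers on a usage threshold rather than at snapshot creation, so it only partially addresses the request (the values below are illustrative, not defaults to rely on):

```
# /etc/lvm/lvm.conf (activation section) -- illustrative values
activation {
    # When pool usage crosses 70%, autoextend it by 20% of its size,
    # provided the VG has free space.
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
```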

> 
> We cannot deliver what you are asking for.  We can only do our best to
> educate users on how to proceed and handle failures as they come WRT this
> problem.  There are two ways to handle out-of-space conditions: wait for
> more space to be added or spit-out errors.  The user will have to decide the
> type of behavior they want for when these things happen.  (Someone else
> needs to weigh-in if both of these options are available when space is
> exhausted for the thin data LV.)

The current behavior is to block, right? The original title of this bug was "lvm thin pool will hang system when full". I think hanging/blocking for a full thin pool is new behavior when it comes to LVM in general. In the past (old snapshots), if you filled up a snapshot it was no longer usable and you would get errors. Similarly, if you filled up a normal LV (by using dd) you would end up with errors too as you were trying to write past the end of the device. I guess that is the behavior I would expect when a thin pool fills up. Maybe at least make it configurable so the user can choose what they prefer (block or error). 

Thoughts?

Comment 18 Jonathan Earl Brassow 2014-07-15 15:49:59 UTC
> The current behavior is to block, right? The original title of this bug was
> "lvm thin pool will hang system when full". I think hanging/blocking for a
> full thin pool is new behavior when it comes to LVM in general. In the past
> (old snapshots), if you filled up a snapshot it was no longer usable and you
> would get errors. Similarly, if you filled up a normal LV (by using dd) you
> would end up with errors too as you were trying to write past the end of the
> device. I guess that is the behavior I would expect when a thin pool fills
> up. Maybe at least make it configurable so the user can choose what they
> prefer (block or error). 
> 
> Thoughts?

The current behavior is to block.

I don't think you will ever get parity of behavior with the old snapshots. You could always write to the (fully-provisioned) origin before, regardless of whether the snapshots ran out of space, and only the snapshots that ran out of space would be invalidated. If you choose to receive errors with the new snapshots, you will also receive errors from the origin when writing new blocks.

I've added bug 1119839 to address the ability to set thin volumes to error when they run out of space.  If there are no other issues to discuss on this bug, perhaps I will close that one as a duplicate of this bug and change the subject of this bug.

Comment 21 Dave Sullivan 2014-09-22 14:15:03 UTC
I think this is probably good enough.

https://bugzilla.redhat.com/show_bug.cgi?id=1119839

I'm not sure I have a good enough understanding of things technically to articulate what I'm after.

But the error behavior in the above BZ sounds like an improvement.

Thx.

Comment 22 Jonathan Earl Brassow 2014-10-29 23:34:36 UTC
Ok, then I am going to close this bug as a duplicate of bug 1119839.  We shall proceed with that solution.

*** This bug has been marked as a duplicate of bug 1119839 ***

