Bug 1519377 - Filesystem gets corrupted when VDO is filled [NEEDINFO]
Summary: Filesystem gets corrupted when VDO is filled
Keywords:
Status: CLOSED DUPLICATE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kmod-kvdo
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Bryan Gurney
QA Contact: Jakub Krysl
URL:
Whiteboard:
Depends On:
Blocks: 1517911
Reported: 2017-11-30 16:25 UTC by Jakub Krysl
Modified: 2019-04-11 14:15 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-11 14:15:19 UTC
msakai: needinfo? (ldelouw)


Attachments
EXT4 on thin LV /var/log/messages (deleted)
2017-11-30 16:25 UTC, Jakub Krysl


Links
Red Hat Knowledge Base (Article) 3966841 (last updated 2019-03-27 17:54:07 UTC)

Internal Links: 1647562

Description Jakub Krysl 2017-11-30 16:25:39 UTC
Created attachment 1360989 [details]
EXT4 on thin LV /var/log/messages

Description of problem:
When VDO is filled, the filesystem on top of it gets corrupted. This is a continuation of BZ 1517911. Here are the results for the XFS and EXT4 filesystems I tested:

# vdo create --name vdo --device /dev/mapper/rhel_storageqe--74-lv 
# mkfs.xfs -K /dev/mapper/vdo
# mount /dev/mapper/vdo vdo
# dd if=/dev/urandom of=vdo/random bs=4K status=progress
6393659392 bytes (6.4 GB) copied, 72.054950 s, 88.7 MB/s
dd: error writing ‘vdo/random’: No space left on device
1562052+0 records in
1562051+0 records out
6398160896 bytes (6.4 GB) copied, 72.3204 s, 88.5 MB/s
/var/log/messages:
[  293.791577] Buffer I/O error on dev dm-3, logical block 524319, lost async page write
[  293.822456] Buffer I/O error on dev dm-3, logical block 530463, lost async page write
[  293.822460] Buffer I/O error on dev dm-3, logical block 530464, lost async page write
[  293.822461] Buffer I/O error on dev dm-3, logical block 530465, lost async page write
[  293.822464] Buffer I/O error on dev dm-3, logical block 530466, lost async page write
[  293.822465] Buffer I/O error on dev dm-3, logical block 530467, lost async page write
[  293.822467] Buffer I/O error on dev dm-3, logical block 530468, lost async page write
[  293.822468] Buffer I/O error on dev dm-3, logical block 530469, lost async page write
[  293.822470] Buffer I/O error on dev dm-3, logical block 530470, lost async page write
[  293.822471] Buffer I/O error on dev dm-3, logical block 530471, lost async page write
[  302.531495] buffer_io_error: 661455 callbacks suppressed
[  302.559073] Buffer I/O error on dev dm-3, logical block 1212443, lost async page write
--snip--
[  302.675832] Buffer I/O error on dev dm-3, logical block 1215169, lost async page write
[  308.524202] buffer_io_error: 172992 callbacks suppressed
[  308.551992] Buffer I/O error on dev dm-3, logical block 1385499, lost async page write
--snip--
[  308.591889] Buffer I/O error on dev dm-3, logical block 1409042, lost async page write
[  310.821256] XFS (dm-3): metadata I/O error: block 0x600058 ("xlog_iodone") error 28 numblks 64
[  310.828972] XFS (dm-3): metadata I/O error: block 0x8 ("xfs_buf_iodone_callback_error") error 28 numblks 8
[  310.913386] XFS (dm-3): Failing async write on buffer block 0x20. Retrying async write.
[  310.950975] XFS (dm-3): Failing async write on buffer block 0x28. Retrying async write.
[  310.988773] XFS (dm-3): Failing async write on buffer block 0x8. Retrying async write.
[  311.025896] XFS (dm-3): Log I/O Error Detected.  Shutting down filesystem
[  311.057590] XFS (dm-3): Please umount the filesystem and rectify the problem(s)

At this point, unmounting the filesystem and mounting it again produces this:
# mount /dev/mapper/vdo vdo
mount: mount /dev/mapper/vdo on /root/vdo failed: No space left on device

Direct I/O with a newly created VDO:
# dd if=/dev/urandom oflag=direct of=vdo/direct bs=4k status=progress
2133049344 bytes (2.1 GB) copied, 359.150537 s, 5.9 MB/s
dd: error writing ‘vdo/direct’: Input/output error
522555+0 records in
522554+0 records out
2140381184 bytes (2.1 GB) copied, 360.112 s, 5.9 MB/s
and /var/log/messages:
(snipping all I/O errors, they are the same as in the previous case)
[12878.030516] XFS (dm-3): metadata I/O error: block 0x6000f8 ("xlog_iodone") error 28 numblks 64        
[12878.039201] XFS (dm-3): Log I/O Error Detected.  Shutting down filesystem                             
[12878.045990] XFS (dm-3): Please umount the filesystem and rectify the problem(s) 

This issue is not specific to XFS on VDO; here is EXT4 on VDO:
# dd if=/dev/urandom of=vdo/random bs=4K status=progress
4491812864 bytes (4.5 GB) copied, 56.090496 s, 80.1 MB/s
dd: error writing ‘vdo/random’: Read-only file system
1098935+0 records in
1098934+0 records out
4501233664 bytes (4.5 GB) copied, 56.3436 s, 79.9 MB/s
/var/log/messages:
[ 1002.161502] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3108864 starting block 631529)
[ 1002.228204] Buffer I/O error on device dm-3, logical block 631529
[ 1002.255681] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631530)
[ 1002.317968] Buffer I/O error on device dm-3, logical block 631530
[ 1002.345288] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631532)
[ 1002.405415] Buffer I/O error on device dm-3, logical block 631532
[ 1002.432844] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631534)
[ 1002.493037] Buffer I/O error on device dm-3, logical block 631534
[ 1002.520389] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631536)
[ 1002.580549] Buffer I/O error on device dm-3, logical block 631536
[ 1002.607781] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631538)
[ 1002.667760] Buffer I/O error on device dm-3, logical block 631538
[ 1002.698018] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631540)
[ 1002.760896] Buffer I/O error on device dm-3, logical block 631540
[ 1002.788175] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631542)
[ 1002.848365] Buffer I/O error on device dm-3, logical block 631542
[ 1002.875679] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631544)
[ 1002.935862] Buffer I/O error on device dm-3, logical block 631544
[ 1002.963237] EXT4-fs warning (device dm-3): ext4_end_bio:316: I/O error -28 writing to inode 12 (offset 2139095040 size 3547136 starting block 631546)
[ 1003.023103] Buffer I/O error on device dm-3, logical block 631546
[ 1003.070344] JBD2: Detected IO errors while flushing file data on dm-3-8
[ 1003.100238] Aborting journal on device dm-3-8.
[ 1003.101997] EXT4-fs (dm-3): ext4_writepages: jbd2_start: 5120 pages, ino 12; err -30
[ 1003.102001] EXT4-fs (dm-3): ext4_writepages: jbd2_start: 6144 pages, ino 12; err -30
[ 1003.191280] buffer_io_error: 209161 callbacks suppressed
[ 1003.219068] Buffer I/O error on dev dm-3, logical block 557056, lost sync page write
[ 1003.254574] JBD2: Error -5 detected when updating journal superblock for dm-3-8.
[ 1003.802202] JBD2: Detected IO errors while flushing file data on dm-3-8
[ 1008.105584] Buffer I/O error on dev dm-3, logical block 0, lost sync page write
[ 1008.145586] EXT4-fs error (device dm-3): ext4_journal_check_start:56: Detected aborted journal
[ 1008.187529] EXT4-fs (dm-3): Remounting filesystem read-only
[ 1008.212965] EXT4-fs (dm-3): previous I/O error to superblock detected
[ 1008.242183] Buffer I/O error on dev dm-3, logical block 0, lost sync page write
[ 1008.275000] EXT4-fs (dm-3): ext4_writepages: jbd2_start: 6144 pages, ino 12; err -30

# umount vdo
/var/log/messages:
[ 1113.065132] EXT4-fs (dm-3): previous I/O error to superblock detected
[ 1113.099377] Buffer I/O error on dev dm-3, logical block 0, lost sync page write
[ 1113.133335] VFS: Dirty inode writeback failed for block device dm-3 (err=-5).

# mount /dev/mapper/vdo vdo
/var/log/messages:
[ 1115.016817] Buffer I/O error on dev dm-3, logical block 557056, lost sync page write
[ 1115.054903] JBD2: Error -5 detected when updating journal superblock for dm-3-8.
[ 1115.092471] Buffer I/O error on dev dm-3, logical block 0, lost sync page write
[ 1115.125654] Buffer I/O error on dev dm-3, logical block 557056, lost sync page write
[ 1115.160022] JBD2: Error -5 detected when updating journal superblock for dm-3-8.
[ 1115.193170] Aborting journal on device dm-3-8.
[ 1115.213229] Buffer I/O error on dev dm-3, logical block 557056, lost sync page write
[ 1115.248008] JBD2: Error -5 detected when updating journal superblock for dm-3-8.



I could not reproduce this on a thinly provisioned LV (which is similar to VDO):
# lvcreate -n pool -T -L 2G vg                                                       
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "pool" created.
# lvcreate -n lv -T -V 6G vg/pool                                                    
  Using default stripesize 64.00 KiB.
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool vg/pool (2.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv" created.
# mkfs.xfs -K /dev/mapper/vg-lv
# mount /dev/mapper/vg-lv lv
# dd if=/dev/urandom of=lv/direct oflag=direct bs=4K status=progress
2134970368 bytes (2.1 GB) copied, 390.701373 s, 5.5 MB/s
dd: error writing ‘lv/direct’: Input/output error
521553+0 records in
521552+0 records out
2136276992 bytes (2.1 GB) copied, 451.014 s, 4.7 MB/s
Nothing in /var/log/messages.

(recreated LV at this point)
# dd if=/dev/urandom of=lv/normal bs=4K status=progress
6383173632 bytes (6.4 GB) copied, 100.772492 s, 63.3 MB/s
dd: error writing ‘lv/direct’: No space left on device
1562008+0 records in
1562007+0 records out
6397980672 bytes (6.4 GB) copied, 100.855 s, 63.4 MB/s
/var/log/messages:
[87430.594163] buffer_io_error: 1048566 callbacks suppressed
[87430.599570] Buffer I/O error on dev dm-6, logical block 632960, lost async page write
--snip--
[87430.670056] Buffer I/O error on dev dm-6, logical block 632969, lost async page write

EXT4 on thin LV:
# dd if=/dev/urandom of=lv/random bs=4K status=progress
5163442176 bytes (5.2 GB) copied, 312.814163 s, 16.5 MB/s^C
1260614+0 records in
1260614+0 records out
5163474944 bytes (5.2 GB) copied, 313.264 s, 16.5 MB/s
I had to interrupt it at this point as the actual speed of dd had dropped to KB/s. Attaching /var/log/messages as it is too long.

Version-Release number of selected component (if applicable):
kmod-kvdo-6.1.0.55-10
vdo-6.1.0.55-9
xfsprogs-4.5.0-13.el7
e2fsprogs-1.42.9-10.el7

How reproducible:
100%

Steps to Reproduce:
1. vdo create
2. create filesystem on vdo
3. mount vdo
4. fill the mounted vdo with some data (a consolidated command sequence is sketched below)
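
For convenience, a consolidated sequence based on the commands in the description above (device path taken from this report; the mkdir is added only so the mount point exists):

# vdo create --name vdo --device /dev/mapper/rhel_storageqe--74-lv
# mkfs.xfs -K /dev/mapper/vdo
# mkdir vdo
# mount /dev/mapper/vdo vdo
# dd if=/dev/urandom of=vdo/random bs=4K status=progress   # runs until VDO's physical space is exhausted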

Actual results:
I/O errors followed by the filesystem crashing and getting corrupted

Expected results:
I/O errors without any filesystem crash or corruption

Additional info:

Comment 2 Andy Walsh 2017-11-30 18:56:22 UTC
When you run into the "VDO full" condition, do you end up trying to "Grow Physical", or running an fsck against the volume before mounting it (or after the first failed attempt at mounting it)?

I wonder if the issue isn't 'corruption', but rather that we have no space to work with, and therefore the system just can't complete the mount at that point.
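
For reference, a rough sketch of the checks being asked about (device name taken from the description; the -n flags make the filesystem checks read-only, so they report problems without changing anything):

# vdo growPhysical -n vdo          # only helps if the storage underneath the VDO has been enlarged first
# xfs_repair -n /dev/mapper/vdo    # read-only check of an XFS filesystem
# e2fsck -fn /dev/mapper/vdo       # read-only check if the filesystem is EXT4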

Comment 3 Sweet Tea Dorminy 2017-11-30 21:40:21 UTC
We've seen behavior like this regularly when one of our tests with a filesystem atop VDO accidentally runs out of space.

When VDO receives a write request to some logical address, it allocates a free block and will write the data at that location if it neither dedupes nor compresses. If that logical address was mapped to some physical address before this write happened, that old physical address might be freed after the logical address has been updated (if no other logical address maps there). 

XFS, and other filesystems, have a journal storing pending writes. The journal is in a fixed location, so from VDO's point of view the same logical addresses are being written over and over with unique data. 

In order for a filesystem to recover, it has to do some writes, possibly to the journal region. 

I suspect (I have not confirmed) that dm-thin overwrites data in place --- once a logical address is mapped to a particular physical address, later writes to that logical address will overwrite that particular physical address. This means a filesystem that ran out of space on dm-thin can recover as long as it's only overwriting data and not writing to new addresses. 

Filesystems on VDO can't write at all, even overwrites, when VDO is full, because we never overwrite old data before the new data is on disk.
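
A rough, untested sketch of the difference described above, reusing the files and mount points from the description: an in-place overwrite of an already-allocated block is expected to still succeed on a full thin LV, but to fail on a full VDO, because VDO needs a free physical block for the new data before it can release the old one.

# dd if=/dev/urandom of=lv/random bs=4K count=1 conv=notrunc,fsync    # overwrite on the full thin LV: expected to succeed
# dd if=/dev/urandom of=vdo/random bs=4K count=1 conv=notrunc,fsync   # same overwrite on the full VDO: expected to fail with an I/O error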

Comment 4 Jakub Krysl 2017-12-01 16:04:41 UTC
I tested increasing the physical size and the filesystem is mountable again, so it is not corrupted. But this is just a workaround, as increasing the physical size might not be possible at the moment, or even at all.
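
For reference, a sketch of the sequence I mean (assuming the VDO was created on the LV from the description and that its volume group still has free extents):

# lvextend -L +10G /dev/mapper/rhel_storageqe--74-lv   # enlarge the storage underneath the VDO first
# vdo growPhysical -n vdo                              # let VDO claim the new physical space
# mount /dev/mapper/vdo vdo                            # with free blocks available, the filesystem mounts again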

So the preferred fix is to mimic thinp behaviour as closely as possible, with the goal of giving the user access to their data. This might even mean locking the VDO, disabling dedupe and setting it read-only... as long as the user can access their data.

If that is not possible at all, this behaviour should be documented really well along with the workaround. Early warnings (RFE BZ 1519307) telling the user to prepare more physical space ASAP are probably also part of this solution.

The reason for preferring a fix is that some filesystems might not handle this very gracefully; one example is EXT4, which basically gets stuck writing at very slow speed (in B/s) and spams error messages for every byte it cannot write.

Comment 5 Sweet Tea Dorminy 2017-12-01 17:36:18 UTC
We have considered fixing this before for similar reasons -- XFS's previous behavior (now tunable and no longer the default) was to retry failed writes indefinitely, which for obvious reasons caused problems on a VDO that ran out of space.

Comment 6 Louis Imershein 2017-12-01 18:24:36 UTC
Out of curiosity:

1. Does mounting readonly work?
2. Does relocating the XFS journal to a non-VDO block device allow us to mount the filesystem on a full VDO volume?

I note that it's not enough to mount read-only; you need to be able to mount as a writeable filesystem with discards enabled to clean up space. Either we need to figure out how to combine optimized and non-optimized storage to allow us to get mounted and be able to do discards, or we need to suggest that users keep some storage in reserve when provisioning VDO - or maybe even do it by default so that support has a safety valve. (Rough commands for both questions are sketched below.)
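
For reference, rough commands for the two questions (untested sketch; /dev/sdX stands in for a hypothetical non-VDO device, and the external journal has to be chosen at mkfs time, before the volume fills up):

# mount -o ro,norecovery /dev/mapper/vdo vdo               # question 1: read-only mount, skipping XFS log recovery
# mkfs.xfs -K -l logdev=/dev/sdX,size=64m /dev/mapper/vdo  # question 2: XFS with its journal on a non-VDO device...
# mount -o logdev=/dev/sdX /dev/mapper/vdo vdo             # ...which must also be passed at mount time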

Comment 7 Andy Walsh 2017-12-01 18:31:05 UTC
The problem with putting in a reserve is that you can only tap into it until you use it all. How many times is it acceptable to run into that situation? Once we've tapped out the reserves, the same problem applies.

Documenting that we recommend using a storage medium that can be expanded, and telling the user to keep a reserve of their own making, is the approach I've been trying to state at this point, but that's not a solution other than "You're doing it wrong" when they run into the issue.

Comment 8 Sweet Tea Dorminy 2017-12-01 18:40:25 UTC
One way to have a reserve from the user's perspective is to partition the VDO logical space into two logical volumes, say 'reserve' and 'actual', fill 'reserve' with /dev/urandom, and then only use 'actual'. If you run out of space on 'actual', you can overwrite 'reserve' with /dev/zero to free some VDO space, recover the filesystem on 'actual' and delete some stuff, then fill 'reserve' with random data again to get the reserve back.
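
A sketch of that layout (hypothetical names; LVM is layered on top of the VDO device purely to carve out the two regions):

# pvcreate /dev/mapper/vdo
# vgcreate vdo_vg /dev/mapper/vdo
# lvcreate -n reserve -L 2G vdo_vg
# lvcreate -n actual -l 100%FREE vdo_vg
# dd if=/dev/urandom of=/dev/vdo_vg/reserve bs=1M oflag=direct   # random data neither dedupes nor compresses, so it pins real physical space
Later, when 'actual' runs out of space:
# dd if=/dev/zero of=/dev/vdo_vg/reserve bs=1M oflag=direct      # VDO does not store zero blocks, so this releases the pinned space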

Comment 9 Luc de Louw 2018-01-26 21:01:21 UTC
growPhysical is failing as well.

[root@rhel75beta ~]# vdo growPhysical -n vdo1 
vdo: ERROR - Cannot grow physical on VDO vdo1; device-mapper: message ioctl on vdo1  failed: Invalid argument
vdo: ERROR - device-mapper: message ioctl on vdo1  failed: Invalid argument

In the end, it seems to be an unrecoverable error.

Comment 10 Sweet Tea Dorminy 2018-01-26 21:07:55 UTC
Hi; can you check in journalctl for messages from VDO about what the invalid argument was?

Comment 11 Luc de Louw 2018-01-26 21:13:10 UTC
(In reply to Luc de Louw from comment #9)
> growPhysical is failing as well.
> 
> [root@rhel75beta ~]# vdo growPhysical -n vdo1 
> vdo: ERROR - Cannot grow physical on VDO vdo1; device-mapper: message ioctl
> on vdo1  failed: Invalid argument
> vdo: ERROR - device-mapper: message ioctl on vdo1  failed: Invalid argument
> 
> At the end, it seems to be a unrecoverable error

Hi there,

journalctl does not provide any information:

Jan 26 21:55:15 rhel75beta.example.com vdo[1450]: ERROR - device-mapper: message ioctl on vdo1  failed: Invalid argument
Jan 26 22:00:00 rhel75beta.example.com kernel: kvdo0:dmsetup: Preparing to resize physical to 28835840
Jan 26 22:00:00 rhel75beta.example.com kernel: kvdo0:dmsetup: Done preparing to resize physical
Jan 26 22:00:00 rhel75beta.example.com kernel: kvdo0:dmsetup: suspending device 'vdo1'
Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: device 'vdo1' suspended
Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: Requested physical block count 28835840 not greater than 28835840
Jan 26 22:00:01 rhel75beta.example.com vdo[1802]: ERROR - Cannot grow physical on VDO vdo1; device-mapper: message ioctl on vdo1  failed: Invalid argument
Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: resuming device 'vdo1'
Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: device 'vdo1' resumed
Jan 26 22:00:01 rhel75beta.example.com vdo[1802]: ERROR - device-mapper: message ioctl on vdo1  failed: Invalid argument

Comment 12 Sweet Tea Dorminy 2018-01-26 21:17:19 UTC
Hi Luc;
> Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: Requested
> physical block count 28835840 not greater than 28835840

This indicates the storage under VDO hasn't expanded, so VDO can't expand into new space. Without new space, VDO still doesn't have any more free blocks and so cannot accept more writes.
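
A quick way to confirm whether the storage underneath the VDO has actually grown (sketch, assuming the VDO sits directly on /dev/vdb as in your setup):

# lsblk -b /dev/vdb               # does the guest see the new, larger size?
# blockdev --getsize64 /dev/vdb   # backing device size in bytes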

Comment 13 Luc de Louw 2018-01-26 21:21:49 UTC
(In reply to Sweet Tea Dorminy from comment #12)
> Hi Luc;
> >Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: Requested
> physical block count 28835840 not greater than 28835840
> 
> This indicates the storage under VDO hasn't expanded, so VDO can't expand
> into new space. Without new space, VDO still doesn't have any more free
> blocks so cannot accept more writes.

That is strange....

hypervisor:/vm-images# qemu-img resize rhel75beta-vdo-disk.qcow2 +10G
Image resized.

[root@rhel75beta ~]# fdisk -l /dev/vdb

Disk /dev/vdb: 118.1 GB, 118111600640 bytes, 230686720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rhel75beta ~]# 

There should be 10G available on the (virtual) physical disk

Comment 15 Luc de Louw 2018-01-26 21:29:44 UTC
(In reply to Sweet Tea Dorminy from comment #12)
> Hi Luc;
> >Jan 26 22:00:01 rhel75beta.example.com kernel: kvdo0:dmsetup: Requested
> physical block count 28835840 not greater than 28835840
> 
> This indicates the storage under VDO hasn't expanded, so VDO can't expand
> into new space. Without new space, VDO still doesn't have any more free
> blocks so cannot accept more writes.

I have given the disk an additional 100GB, so it's more than double the original size, and tried again. Same result, different numbers:

[   14.958349] kvdo0:dmsetup: Preparing to resize physical to 57671680
[   14.961344] kvdo0:dmsetup: Done preparing to resize physical
[   14.963874] kvdo0:dmsetup: suspending device 'vdo1'
[   15.182655] kvdo0:dmsetup: device 'vdo1' suspended
[   15.186369] kvdo0:dmsetup: Requested physical block count 57671680 not greater than 57671680

Comment 16 Matthew Sakai 2018-01-27 01:57:31 UTC
This is so odd. There is nothing inherent about a grow physical operation that would cause it to fail, even if the existing storage is completely full. Could I suggest you move this issue to a new BZ so that we can try to work it out independently of the out-of-space issue?

Some more information might help as well. Can you get stats out of this VDO volume, to see how much space it thinks it's using? Also, a longer log of the recent operations related to this VDO might help us figure out what's happening here.
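
For example (a sketch; /dev/mapper/vdo1 is the device node for the volume named vdo1 above):

# vdostats --human-readable /dev/mapper/vdo1   # used vs. available physical space as VDO sees it
# vdo status -n vdo1                           # configuration and statistics for the volume
# journalctl -k | grep kvdo                    # recent kernel messages from the kvdo module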

As an example, one way to get this error is to do a successful grow physical operation, and then immediately launch another. The second grow physical would fail in this way.

> [   14.958349] kvdo0:dmsetup: Preparing to resize physical to 57671680
> [   14.961344] kvdo0:dmsetup: Done preparing to resize physical
> [   14.963874] kvdo0:dmsetup: suspending device 'vdo1'
> [   15.182655] kvdo0:dmsetup: device 'vdo1' suspended
> [   15.186369] kvdo0:dmsetup: Requested physical block count 57671680 not
> greater than 57671680

Based on these messages, it appears that the VDO volume believes it is already using all 220G of your device.

Comment 18 Matthew Sakai 2018-06-13 18:07:34 UTC
Looking at this again after several months, the issue with the grow physical operation may well be an instance of bug 1582647.

Comment 20 Magnus Glantz 2018-12-05 13:46:30 UTC
Please note that I see this in Red Hat Enterprise Linux 8 Snapshot 1 (the latest at the time of writing).

Comment 24 Bryan Gurney 2019-03-15 16:55:52 UTC
The KCS article "Managing Thin Provisioning with Virtual Data Optimizer" has been published at https://access.redhat.com/articles/3966841

Comment 26 Bryan Gurney 2019-04-11 14:15:19 UTC

*** This bug has been marked as a duplicate of bug 1657152 ***

