Bug 1512127 - Creating a VDO volume over the metadata of a previously removed VDO volume fails
Summary: Creating a VDO volume over the metadata of a previously removed VDO volume fails
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: vdo
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Ken Raeburn
QA Contact: Jakub Krysl
Depends On:
Blocks: 1510558
Reported: 2017-11-10 22:37 UTC by Bryan Gurney
Modified: 2019-03-06 01:11 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-04-10 15:48:32 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0871 None None None 2018-04-10 15:49:01 UTC

Description Bryan Gurney 2017-11-10 22:37:02 UTC
Description of problem:
Attempting to create a VDO volume on the same device as a previously existing VDO volume results in the error "vdo: ERROR - vdoformat: Cannot format device already containing a valid VDO!"

This is a follow-up to BZ 1510558, which seems to have been partially resolved with kmod-kvdo- and vdo-.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a VDO volume on a spare device.
2. Remove the VDO volume created in step 1.
3. Create a VDO volume on the same spare device as before.
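The steps above amount to something like the following sketch. The device path and volume name are placeholders, not taken from this report; the `command -v` guard makes the script a no-op on machines without the vdo CLI:

```shell
# Reproduction sketch for the steps above. /dev/sdX is a placeholder
# for a spare device; testvdo is an illustrative volume name.
DEV=/dev/sdX
if command -v vdo >/dev/null 2>&1; then
    vdo create --device="$DEV" --name=testvdo    # step 1
    vdo remove --name=testvdo                    # step 2
    vdo create --device="$DEV" --name=testvdo    # step 3: fails with
    # "vdo: ERROR - vdoformat: Cannot format device already containing a valid VDO!"
else
    echo "vdo CLI not installed; skipping"
fi
```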

Actual results:
The "vdo create" command fails with "vdo: ERROR - vdoformat: Cannot format device already containing a valid VDO!"

Expected results:
The "vdo create" command succeeds.

Additional info:
It is possible to work around this by zeroing at least the VDO superblock, if not the entire

Comment 2 Bryan Gurney 2017-11-13 12:58:02 UTC
I forgot to finish the statement in the "Additional Info" section:

It is possible to work around this by zeroing at least the VDO superblock, if not the entire geometry block, index, and VDO superblock.
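Concretely, the zeroing workaround can be sketched as follows. A scratch file stands in for the block device here so the snippet is safe to run anywhere; on a live system the `of=` target would be the device node itself (e.g. /dev/mapper/mpatha) with `oflag=direct` added:

```shell
# Workaround sketch: zero the first 4 KiB of the device, where the VDO
# geometry block and superblock signature live, so that a later
# "vdo create" no longer sees a valid VDO. A scratch file stands in
# for the real block device.
DEV=$(mktemp /tmp/fake-vdo-dev.XXXXXX)

# Seed the "device" with non-zero data to mimic leftover VDO metadata,
# then grow it to 1 MiB.
printf 'dmvdo-leftover-metadata' > "$DEV"
truncate -s 1M "$DEV"

# The actual workaround: overwrite the first 4096-byte block with zeros.
# conv=notrunc keeps dd from truncating the file; on a real device node
# it is unnecessary, and oflag=direct should be added instead.
dd if=/dev/zero of="$DEV" bs=4096 count=1 conv=notrunc 2>/dev/null

# Verify: the first 4 KiB now read back as all zero bytes (prints 0).
head -c 4096 "$DEV" | tr -d '\0' | wc -c
```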

Comment 3 Jakub Krysl 2017-11-14 12:59:21 UTC
Setting this to 'Test Blocker' to elevate priority. This prevents us from creating another VDO volume. A workaround exists (zeroing the superblock), but it is more of a hack, and without it the VDO volume cannot be created again.

Comment 5 bjohnsto 2017-11-14 18:45:42 UTC
vdo remove should probably dd the first 4k block of the device.

Comment 7 Jakub Krysl 2017-12-04 13:19:39 UTC

vdo remove now runs direct dd over the index superblock:
# vdo remove --name vdo --verbose
Removing VDO vdo
Stopping VDO vdo
    dmsetup status vdo
    udevadm settle
    dmsetup remove vdo
    dd if=/dev/zero of=/dev/mapper/mpatha oflag=direct bs=4096 count=1

Because of this, no superblock is left behind and the next VDO volume is created successfully:
# vdo create --device=/dev/mapper/mpatha --name=vdo --verbose
Creating VDO vdo
    pvcreate -qq --test /dev/mapper/mpatha
    modprobe kvdo
    vdoformat --uds-checkpoint-frequency=0 --uds-memory-size=0.25 /dev/mapper/mpatha
    vdodumpconfig /dev/mapper/mpatha
Starting VDO vdo
    dmsetup status vdo
    modprobe kvdo
    vdodumpconfig /dev/mapper/mpatha
    dmsetup create vdo --uuid VDO-b3d10bf0-209b-4ea3-a24e-b70c35de5871 --table '0 209715200 dedupe /dev/mapper/mpatha 4096 disabled 0 32768 16380 on sync vdo ack=1,bio=4,bioRotationInterval=64,cpu=2,hash=1,logical=1,physical=1'
    dmsetup status vdo
Starting compression on VDO vdo
    dmsetup message vdo 0 compression on
    dmsetup status vdo
VDO instance 1 volume is ready at /dev/mapper/vdo

[ 1498.607669] kvdo1:dmsetup: starting device 'vdo' device instantiation 1 (ti=ffffb2dd81031040) write policy sync
[ 1498.657060] kvdo1:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
[ 1498.805539] kvdo1:dmsetup: uds: kvdo1:dedupeQ: loading or rebuilding index: dev=/dev/mapper/mpatha offset=4096 size=2781704192
[ 1498.805903] uds: kvdo1:dedupeQ: Failed loading or rebuilding index: UDS Error: No index found (1061)
[ 1498.805910] kvdo1:dedupeQ: Error opening index dev=/dev/mapper/mpatha offset=4096 size=2781704192: UDS Error: No index found (1061)
uds: kvdo1:dedupeQ: creating index: dev=/dev/mapper/mpatha offset=4096 size=2781704192
uds: kvdo1:dedupeQ: Using 6 indexing zones for concurrency.

[ 1499.018973] Setting UDS index target state to online
[ 1499.042948] kvdo1:dmsetup: device 'vdo' started
[ 1499.064259] kvdo1:dmsetup: resuming device 'vdo'
[ 1499.085649] kvdo1:dmsetup: device 'vdo' resumed
[ 1499.124396] kvdo1:packerQ: compression is enabled

Comment 10 errata-xmlrpc 2018-04-10 15:48:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
