Bug 1055487 - glusterfs backend does not support discard
Summary: glusterfs backend does not support discard
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jeff Cody
QA Contact: FuXiangChun
URL:
Whiteboard:
Depends On: 1037503
Blocks: GlusterThinProvisioning 1103845 1136534
 
Reported: 2014-01-20 11:39 UTC by Paolo Bonzini
Modified: 2016-08-04 07:26 UTC
CC List: 21 users

Fixed In Version: qemu-kvm-rhev-2.1.2-1.el7.x86_64
Doc Type: Enhancement
Doc Text:
Clone Of: 1037503
: GlusterThinProvisioning 1136534
Environment:
Last Closed: 2016-08-04 07:26:28 UTC


Attachments

Comment 1 Tushar Katarki 2014-01-28 19:01:50 UTC
For additional background on this, please see the following blog: 

http://raobharata.wordpress.com/2013/08/07/unmapdiscard-support-in-qemu-glusterfs/
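
As I understand it, the moving parts are: the guest file system issues TRIM/UNMAP, the emulated SCSI/virtio disk passes it on (only when the -drive has discard=on or discard=unmap), QEMU's gluster block driver turns it into a libgfapi discard call, and the brick file system punches the corresponding hole. A minimal illustrative invocation (host name, volume and image name below are placeholders, not taken from this bug):

# qemu-img create -f raw gluster://gluster-host/testvol/test.raw 1G
# /usr/libexec/qemu-kvm ... \
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=gluster://gluster-host/testvol/test.raw,if=none,id=drive-disk1,format=raw,cache=none,discard=on \
    -device scsi-hd,bus=scsi0.0,drive=drive-disk1,id=disk1

Then an fstrim (or mount -o discard) inside the guest exercises the whole path.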

Comment 3 juzhang 2014-08-15 02:11:56 UTC
Hi Hai and Jeff,

If this bz is only fixed in the qemu-kvm-rhev build, could you update the bz component to qemu-kvm-rhev?

Best Regards,
Junyi

Comment 5 Sibiao Luo 2014-10-11 09:52:24 UTC
Tested this issue on qemu-kvm-rhev-2.1.2-1.el7.x86_64.

host info:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-123.9.2.el7.x86_64
qemu-kvm-rhev-2.1.2-1.el7.x86_64

Steps:
1. Create a 1G raw/qcow2 image on a gluster volume (brick backed by an XFS/ext4 file system).
# qemu-img create -f raw gluster://10.66.106.35/volume_sluo/sluo.raw 1G
Formatting 'gluster://10.66.106.35/volume_sluo/sluo.raw', fmt=raw size=1073741824

2. Start qemu with a command line like the following (a quick way to confirm discard reached the guest disk is noted after step 9):
e.g. /usr/libexec/qemu-kvm ... -device virtio-scsi-pci,id=scsi2,indirect_desc=off,event_idx=off,bus=pci.0,addr=0x8 -drive file=gluster://10.66.106.35/volume_sluo/sluo.raw,if=none,id=drive-hd-disk,media=disk,format=raw,cache=none,werror=stop,rerror=stop,discard=on -device scsi-hd,drive=drive-hd-disk,id=scsi_disk

3. Check the block count of the image on the host.
# mount -t glusterfs 10.66.106.35:volume_sluo /mnt/
# stat /mnt/sluo.raw 

4. Make a file system on the disk in the guest.
# mkfs.ext4 /dev/sdb

5. Check the block count on the host again.
# stat /mnt/sluo.raw

6. On the guest, mount the file system and write a test file.
# mount /dev/sdb /mnt/test
# dd if=/dev/zero of=/mnt/test/file bs=1M count=500

7. Check the block count on the host.
# stat /mnt/sluo.raw 

8. Remove the file and fstrim the file system in the guest.
# rm /mnt/test/file
# fstrim /mnt/test

9. Check the block count on the host.
# stat /mnt/sluo.raw 
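
(Not in the original steps, but a useful sanity check before reading the stat numbers: confirm from inside the guest that the virtual disk actually advertises discard, assuming the data disk shows up as /dev/sdb as above.)

# lsblk -D /dev/sdb
# cat /sys/block/sdb/queue/discard_max_bytes

If DISC-MAX / discard_max_bytes is 0, the UNMAP requests never reach QEMU and the fstrim in step 8 will simply be rejected as unsupported.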

Results:
1. After step 3:
# stat /mnt/sluo.raw
  File: ‘/mnt/sluo.raw’
  Size: 1073741824	Blocks: 8          IO Block: 131072 regular file

2. After step 5:
# stat /mnt/sluo.raw
  File: ‘/mnt/sluo.raw’
  Size: 1073741824	Blocks: 66888      IO Block: 131072 regular file

3. After step 7:
# stat /mnt/sluo.raw
  File: ‘/mnt/sluo.raw’
  Size: 1073741824	Blocks: 971032     IO Block: 131072 regular file

4. After step 9, check whether the blocks roll back:
# stat /mnt/sluo.raw
  File: ‘/mnt/sluo.raw’
  Size: 1073741824	Blocks: 1123656    IO Block: 131072 regular file

Based on the above, the blocks fail to roll back (Blocks: 971032 ----> 1123656), so this bug has not been fixed correctly.
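
To put those numbers in perspective: stat reports Blocks in 512-byte units, so (assuming the usual 512-byte accounting) the image grows from roughly 474 MB allocated after step 7 to roughly 548 MB after the fstrim, instead of dropping back toward the ~32 MB it had right after mkfs (step 5):

# echo "$((971032 * 512 / 1024 / 1024))M -> $((1123656 * 512 / 1024 / 1024))M (expected: back toward $((66888 * 512 / 1024 / 1024))M)"
474M -> 548M (expected: back toward 32M)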

Best Regards,
sluo

Comment 10 juzhang 2015-01-16 09:05:30 UTC
Hi Paolo,

According to https://bugzilla.redhat.com/show_bug.cgi?id=1136534#c5, it seems we need to move this bz back to ASSIGNED status, right?

Best Regards,
Junyi

Comment 11 Paolo Bonzini 2015-01-19 12:04:41 UTC
glusterfs _should_ support discard with the gluster POSIX backend, so marking as FailedQA.
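
(As far as I can tell, the brick-side POSIX translator implements discard by punching a hole in the brick file, so when this fails it is worth ruling out the brick file system itself. A hypothetical spot check run directly on a brick; the path below is a placeholder, not from this bug:)

# fallocate --punch-hole --keep-size --offset 0 --length 1M /bricks/testvol/brick1/some-image.raw
# stat /bricks/testvol/brick1/some-image.raw

If fallocate returns "Operation not supported", the brick file system cannot honor discards no matter what the qemu side does.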

Comment 12 Paolo Bonzini 2015-01-20 11:04:05 UTC
Please try qemu-kvm-2.1.2-20.el7

Comment 13 Sibiao Luo 2015-01-21 07:54:33 UTC
(In reply to Paolo Bonzini from comment #12)
> Please try qemu-kvm-2.1.2-20.el7
Still hitting this issue; the blocks did not roll back (Blocks: 2597432 ----> 2646584) with the same test as in comment #5.

host info:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-222.el7.x86_64
qemu-kvm-rhev-2.1.2-20.el7.x86_64
guest info:
# uname -r
3.10.0-222.el7.x86_64

Results:
1. After step 3:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 0          IO Block: 131072 regular file

2. After step 5:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 270872     IO Block: 131072 regular file

3. After step 7:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 2597432    IO Block: 131072 regular file

4. After step 9, check whether the blocks roll back:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 2646584    IO Block: 131072 regular file

Best Regards,
sluo

Comment 14 Sibiao Luo 2015-01-26 06:36:47 UTC
(In reply to Sibiao Luo from comment #13)
> (In reply to Paolo Bonzini from comment #12)
> > Please try qemu-kvm-2.1.2-20.el7
> Still hitting this issue; the blocks did not roll back (Blocks:
> 2597432 ----> 2646584) with the same test as in comment #5.
> 
> host info:
> # uname -r && rpm -q qemu-kvm-rhev
> 3.10.0-222.el7.x86_64
> qemu-kvm-rhev-2.1.2-20.el7.x86_64
> guest info:
> # uname -r
> 3.10.0-222.el7.x86_64
> 
Forgot to include the glusterfs version; I used the latest packages from brewweb.
# rpm -qa | grep gluster
glusterfs-cli-3.6.0.42-1.el7rhs.x86_64
glusterfs-libs-3.6.0.42-1.el7rhs.x86_64
glusterfs-api-devel-3.6.0.42-1.el7rhs.x86_64
glusterfs-devel-3.6.0.42-1.el7rhs.x86_64
glusterfs-3.6.0.42-1.el7rhs.x86_64
glusterfs-server-3.6.0.42-1.el7rhs.x86_64
glusterfs-debuginfo-3.6.0.42-1.el7rhs.x86_64
glusterfs-api-3.6.0.42-1.el7rhs.x86_64
glusterfs-rdma-3.6.0.42-1.el7rhs.x86_64
glusterfs-fuse-3.6.0.42-1.el7rhs.x86_64
glusterfs-geo-replication-3.6.0.42-1.el7rhs.x86_64

Comment 18 Jeff Cody 2016-07-27 18:46:01 UTC
I've tested this on RHEL-7.2, with the following package versions:

glusterfs.x86_64                               3.7.1-16.el7
glusterfs-api.x86_64                           3.7.1-16.el7
glusterfs-api-devel.x86_64                     3.7.1-16.el7
glusterfs-client-xlators.x86_64                3.7.1-16.el7
glusterfs-devel.x86_64                         3.7.1-16.el7
glusterfs-fuse.x86_64                          3.7.1-16.el7
glusterfs-libs.x86_64                          3.7.1-16.el7
qemu-img-rhev.x86_64                           10:2.3.0-31.el7_2.16
qemu-kvm-common-rhev.x86_64                    10:2.3.0-31.el7_2.16
qemu-kvm-rhev.x86_64                           10:2.3.0-31.el7_2.16

Discard is working fine in my testing.

When testing, I recommend a "sync" in step 8, after removing the file and before the fstrim.
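
In other words, something like this for step 8 in the guest (paths as in the earlier steps), so the deleted file's blocks are actually released before the trim is issued:

# rm /mnt/test/file
# sync
# fstrim -v /mnt/test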


Test results:

Prior to creating the test file in the guest:
$ stat test.raw
  File: ‘test.raw’
  Size: 10737418240     Blocks: 595288     IO Block: 131072 regular file

$ du -sh test.raw
291M    test.raw


After 'dd if=/dev/zero of=/mnt/test/junk.bin bs=1M count=128' in the guest:
$ stat test.raw
  File: ‘test.raw’
  Size: 10737418240     Blocks: 857432     IO Block: 131072 regular file

$ du -sh test.raw
419M    test.raw


After 'rm -f /mnt/test/junk.bin; sync; fstrim -v /mnt' in the guest:
$ stat test.raw
  File: ‘test.raw’
  Size: 10737418240     Blocks: 595288     IO Block: 131072 regular file

$ du -sh test.raw
291M    test.raw


Moving to MODIFIED, so that it still gets tested by QE, but this should be closed.

Comment 19 weliao 2016-08-01 07:35:45 UTC
QE reproduced this issue with the following versions:
Host:
qemu-kvm-rhev-1.5.3-60.el7_0.12.x86_64
3.10.0-229.el7.x86_64
glusterfs-3.7.9-2.el7rhgs.x86_64
Guest:
3.10.0-456.el7.x86_64
Steps:
1. Create a 1G raw/qcow2 image on a gluster volume (brick backed by an XFS/ext4 file system).
# qemu-img create -f raw gluster://10.66.9.230/test-volume/weliao.raw 1G

2. Start qemu with a command line like the following:
# /usr/libexec/qemu-kvm -name rhel7.3 -M pc -cpu SandyBridge -m 4096 -realtime mlock=off -nodefaults -smp 4 -drive file=/home/RHEL-Server-7.3-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:22:33:44:55,bus=pci.0,addr=0x3 -vga qxl -spice port=5900,disable-ticketing -monitor stdio -boot menu=on -qmp tcp:0:4444,server,nowait -device virtio-scsi-pci,id=scsi2,indirect_desc=off,event_idx=off,bus=pci.0,addr=0x8 -drive file=gluster://10.66.9.230/test-volume/weliao.raw,if=none,id=drive-virtio-disk1,format=raw,media=disk,cache=none,werror=stop,rerror=stop,discard=on -device scsi-hd,bus=scsi2.0,drive=drive-virtio-disk1,id=virtio-disk1
3. Check the block count of the image on the host.
# mount -t glusterfs 10.66.9.230:test-volume /mnt/
# stat /mnt/weliao.raw 

4. Make a file system on the disk in the guest.
# mkfs.ext4 /dev/sdb

5. Check the block count on the host again.
# stat /mnt/weliao.raw 

6. On the guest, mount the file system and write a test file.
# mount /dev/sdb /mnt/test
# dd if=/dev/zero of=/mnt/test/file bs=1M count=500

7. Check the block count on the host.
# stat /mnt/weliao.raw 

8. Remove the file and fstrim the file system in the guest.
# rm /mnt/test/file
# fstrim /mnt/test

9. Check the block count on the host.
# stat /mnt/weliao.raw 
Results:
1. After step 3:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 8          IO Block: 131072 regular file

2. After step 5:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 66888      IO Block: 131072 regular file

3. After step 7:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 525520     IO Block: 131072 regular file

4. After step 9, check whether the blocks roll back:
# stat weliao.raw 
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 1123536    IO Block: 131072 regular file

So the issue can be reproduced with these versions.

--------------------------------------------------------------
Verified with the following versions:
Host:
qemu-kvm-rhev-2.6.0-17.el7.x86_64
3.10.0-478.el7.x86_64
glusterfs-3.7.9-2.el7rhgs.x86_64
Guest:
3.10.0-456.el7.x86_64

The same test steps were used.
Results:
1. After step 3:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 8          IO Block: 131072 regular file

2. After step 5:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 66888      IO Block: 131072 regular file

3. After step 7:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 525632     IO Block: 131072 regular file

4. After step 9, check whether the blocks roll back:
# stat weliao.raw
  File: ‘weliao.raw’
  Size: 1073741824	Blocks: 99528      IO Block: 131072 regular file
The blocks roll back correctly (Blocks: 525632 ----> 99528), so this bug is fixed.

