Bug 450922 - very sub-optimal default readahead settings on lvm device
Summary: very sub-optimal default readahead settings on lvm device
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: rawhide
Hardware: All
OS: Linux
Target Milestone: ---
Assignee: Milan Broz
QA Contact: Fedora Extras Quality Assurance
Depends On:
Reported: 2008-06-11 18:35 UTC by John Ellson
Modified: 2013-03-01 04:06 UTC
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2008-07-01 12:10:51 UTC


Description John Ellson 2008-06-11 18:35:20 UTC
Description of problem:
I saw some references recently to performance penalties of 20-30% for LVM, and
wondered what I was getting from running LVM over 4-way RAID-0. I was
horrified to discover a 50% penalty! Googling around turned up this known fix,
which restores just about all of the performance (as measured by hdparm -t):

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. hdparm -t /dev/mapper/VolGroup00-LogVol01
2. blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01
Actual results:

$ blockdev --getra /dev/mapper/VolGroup00-LogVol00

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
 Timing buffered disk reads:  374 MB in  3.01 seconds = 124.21 MB/sec

$ blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
 Timing buffered disk reads:  734 MB in  3.00 seconds = 244.57 MB/sec
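For reference on the value chosen above: blockdev --setra and --getra work in
units of 512-byte sectors, so the 8192 passed to --setra corresponds to 4 MiB
of readahead. A quick sanity check of that arithmetic:

```shell
# Readahead is counted in 512-byte sectors, so 8192 sectors = 4 MiB.
ra_sectors=8192
echo "$(( ra_sectors * 512 / 1024 / 1024 )) MiB"
```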

Expected results:
No big performance penalties for LVM, at least not without big red flags to the
user.

Additional info:
Now I'm wondering why I'm using LVM over RAID-0 at all. There is no way I'm
going to extend the logical partition onto another device (it already spans 4
disks). Perhaps the right solution for me is to drop LVM and run my file system
directly on /dev/md0.

Comment 1 John Ellson 2008-06-11 18:43:13 UTC
Perhaps hdparm is misleading? I get essentially no increase in performance
for my make jobs with this readahead change.

Comment 2 Milan Broz 2008-07-01 12:10:51 UTC
- hdparm runs just synchronous reads of 2MB blocks; basically it should return
values similar to
blockdev --flushbufs $DEV ; dd iflag=sync if=$DEV of=/dev/null bs=2048k count=100
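The dd command above needs root and a real block device. As a rough
unprivileged illustration of the same sequential-read pattern, the sketch below
substitutes a scratch file for $DEV (the file-backed stand-in is an assumption
for demonstration only, not part of the original measurement):

```shell
# Create a 32 MiB scratch file and read it back in 2 MiB blocks,
# mirroring the bs=2048k access pattern used against $DEV above.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=32 status=none
dd if="$tmp" of=/dev/null bs=2048k status=none
size=$(stat -c %s "$tmp")    # bytes actually read back
echo "read $size bytes"
rm -f "$tmp"
```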

- readahead setting is now properly set for striped LVs (RAID0) in lvm2
(values should be similar to the MD subsystem)

# lvcreate -i4 -L 100G -n lv_s2 vg_test
  Using default stripesize 64.00 KB
  Logical volume "lv_s2" created
# lvs -o +devices
  LV    VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv_s2 vg_test -wi-a- 100.00G                                      

# blockdev --getra /dev/sdb
# blockdev --getra /dev/vg_test/lv_s2

- there are still some issues if stacking devices (lvm over md), see bug 232843
