Bug 158687 - vgcreate unable to create large volume groups
Summary: vgcreate unable to create large volume groups
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Version: 4.0
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Alasdair Kergon
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 158692
 
Reported: 2005-05-24 20:30 UTC by David Milburn
Modified: 2007-11-30 22:07 UTC
CC List: 4 users

Fixed In Version: U2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2006-03-07 19:36:45 UTC
Target Upstream Version:



Description David Milburn 2005-05-24 20:30:12 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.6) Gecko/20050302 Firefox/1.0.1 Fedora/1.0.1-1.3.2

Description of problem:
After creating 506 PVs using 3.9TB of capacity,
vgcreate fails with the following message:

VG vg_4tb_low metadata writing failed

Customer can always reproduce.

Version-Release number of selected component (if applicable):
lvm2-2.00.31-1.0.RHEL4

How reproducible:
Always

Steps to Reproduce:
1. # pvcreate /dev/sdk1 ... /dev/sdtd1 
2. # vgcreate vg_4tb_low /dev/sdk1 ... /dev/sdtd1
3.
  

Actual Results:  vgcreate fails with the following error message:

VG vg_4gb_low metadata writing failed

Expected Results:  Volume group "vg_4tb_low" should be successfully created.

Additional info:

Hardware info:
 The disk configuration consists of two targets on a single path.
 Target: 230000004c7f0761
       LUN0  1.9TB (LUN0+LUN1 exceeds 2TB)
       LUN1  130GB
       LUN2  4GB
       ...
       LUN253  4GB

 Target: 210000004c7f0761
       LUN0  4GB
       ...
       LUN251  4GB

The issue is that *large* volume groups cannot be created by vgcreate.
When it occurs, the message "VG vg_4gb_low metadata writing failed" is displayed. I think this message comes from the following code in _vg_write_raw() of format-text.c:

       if (!(mdac->rlocn.size = text_vg_export_raw(vg, "", buf, sizeof(buf)))) {
               log_error("VG %s metadata writing failed", vg->name);
               goto out;
       }

NEC thinks that the cause of this issue is the fixed buffer size in _vg_write_raw() of format-text.c:

       /* FIXME Essential fix! Make dynamic (realloc? pool?) */
       char buf[65536];
       int found = 0;

Comment 1 Alasdair Kergon 2005-05-24 20:56:11 UTC
Notes for large VGs:

Firstly, as per pvcreate man page, don't store VG metadata on every PV or you'll
find the tools get very slow - and you really don't need 505 backup copies,
every one of which has to be checked every time you access the VG!  4 or 5
copies should be more than enough.  For the others, use --metadatacopies 0.  The
only thing to be careful about is that, if you were to use vgsplit in the future,
every VG must contain at least one PV with non-zero metadatacopies.
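
A minimal sketch of what that might look like, reusing the illustrative device
names from the reproduction steps above (the "..." stands for the remaining
devices; exactly how many PVs keep a copy is up to you):

  # pvcreate --metadatacopies 1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1
  # pvcreate --metadatacopies 0 /dev/sdp1 ... /dev/sdtd1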

Secondly, you need to use a larger metadata area.  You've plenty of disk space,
so set it to 1 MB or more.  In a quick test here, each additional PV used up 260
bytes, so 506 would bring you up to the 128KB limit.
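
Continuing the sketch above (the size is illustrative), the PVs that keep a
metadata copy would also get the larger metadata area at pvcreate time:

  # pvcreate --metadatacopies 1 --metadatasize 1m /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1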

Thirdly, yes that buffer needs increasing.  It's not a trivial change to do
properly because there are dependencies on other settings - the default values
of reserved_stack and reserved_memory in lvm.conf.


Comment 2 Alasdair Kergon 2005-05-24 20:58:21 UTC
The easy way out for now would be for me to make the 65536 another tunable
parameter in the config file, say "max_metadatasize".

Comment 5 Alasdair Kergon 2005-06-08 14:03:10 UTC
I've changed the code to use malloced memory (instead of the stack) and to
extend the allocation as required.

Fix is in lvm2 CVS.
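
For reference, a minimal sketch of that pattern - not the actual lvm2 CVS
change; export_with_growing_buffer() and the export_fn callback are
hypothetical stand-ins for the real text_vg_export_raw() code path:

#include <stdlib.h>

/* Returns the number of bytes written, or 0 if buf is too small. */
typedef size_t (*export_fn)(void *vg, char *buf, size_t buflen);

/*
 * Illustrative sketch only, not the actual lvm2 CVS change: replace the
 * fixed on-stack buffer with a heap allocation and keep growing it
 * until the metadata export fits.
 */
static char *export_with_growing_buffer(void *vg, export_fn export,
                                        size_t *len_out)
{
        size_t buflen = 65536;          /* old fixed size as starting point */
        char *buf = NULL;

        for (;;) {
                char *newbuf = realloc(buf, buflen);

                if (!newbuf) {
                        free(buf);      /* allocation failed */
                        return NULL;
                }
                buf = newbuf;

                *len_out = export(vg, buf, buflen);
                if (*len_out)
                        return buf;     /* success: caller frees buf */

                buflen *= 2;            /* too small: double and retry */
        }
}

Doubling keeps the number of retries logarithmic in the final metadata size.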

