Bug 1059710 - virt-v2v fails with guestfsd[363]: segfault at 0 ip 000000354b281451 sp 00007fff1f040608 error 4 in libc-2.12.so[354b200000+18b000]
Summary: virt-v2v fails with guestfsd[363]: segfault at 0 ip 000000354b281451 sp 0000...
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libguestfs
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-30 12:54 UTC by Roman Hodain
Modified: 2018-12-04 17:14 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-24 17:30:06 UTC


Attachments: none


Links:
  Red Hat Knowledge Base (Solution) 733163 (Priority: None, Status: None, Summary: None, Last Updated: Never)

Description Roman Hodain 2014-01-30 12:54:28 UTC
Description of problem:
   v2v migration fails with the following error message:

   1792 guestfsd: main_loop: new request, len 0x34
   1793 [    7.466142] guestfsd[363]: segfault at 0 ip 000000354b281451 sp 00007fff1f040608 error 4 in libc-2.12.so[354b200000+18b000]
   1794 /init: line 158:   363 Segmentation fault      $vg guestfsd
   1795 [    7.831430] sd 2:0:0:0: [sdc] Synchronizing SCSI cache
   1796 [    7.832306] sd 0:0:1:0: [sdb] Synchronizing SCSI cache
   1797 [    7.833561] sd 0:0:0:0: [sda] Synchronizing SCSI cache
   1798 [    7.934446] Restarting system.
   1799 [    7.934895] machine restart
   1800 libguestfs: child_cleanup: 0x253e690: child process died
   1801 libguestfs: sending SIGTERM to process 16674
   1802 libguestfs: trace: aug_init = -1 (error)
   1803 libguestfs: trace: aug_match "/augeas/files//error"
   1804 libguestfs: trace: aug_match = NULL (error)
   1805 libguestfs: trace: umount "/transferDUvDL1"
   1806 libguestfs: trace: umount = -1 (error)
   1807 umount: umount: call launch before using this function\n(in guestfish, don't forget to use the 'run' command) at /usr/share/perl5/vendor_perl/Sys/VirtConvert/GuestfsHandle.pm line 201.
   1808  at /usr/share/perl5/vendor_perl/Sys/VirtConvert/Config.pm line 272
   1809 virt-v2v: Trasferimento del volume di storage nagiosHP.afis-clone.img: 21474836480 bytes
   1810 libguestfs: trace: close
   1811 libguestfs: closing guestfs handle 0x253e690 (state 0)
   1812 libguestfs: command: run: rm
   1813 libguestfs: command: run: \ -rf /tmp/libguestfsq18ttH
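
The kernel line above gives both the faulting instruction pointer (ip 000000354b281451) and the libc load address (354b200000), so the crash is at offset 0x81451 inside libc-2.12.so. A minimal sketch of resolving that offset to a libc function on a host with the same glibc build (the gdb step below is added for illustration and is not part of the original report):

    # 0x354b281451 (ip) - 0x354b200000 (load base) = 0x81451
    # assumes the same glibc-2.12-1.132.el6 build installed at the standard path
    gdb -batch -ex 'info symbol 0x81451' /lib64/libc-2.12.so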

Version-Release number of selected component (if applicable):
	virt-v2v-0.9.1-5.el6_5.x86_64

How reproducible:
	100%

Steps to Reproduce:
	Not clear yet
Actual results:
	Process fails

Expected results:
	VM is migrated to the export domain

Additional info:
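For reference, the libguestfs trace/debug output quoted above (including the guestfsd segfault line from the appliance console) can be captured by setting the standard libguestfs debug environment variables before running virt-v2v. A minimal sketch; the exact virt-v2v arguments used in this case are not shown in the report, so the invocation is left as a placeholder:

    export LIBGUESTFS_TRACE=1   # log every libguestfs API call
    export LIBGUESTFS_DEBUG=1   # log the appliance console, where the segfault message appears
    # virt-v2v <original arguments> 2>&1 | tee /tmp/virt-v2v-debug.log   (placeholder invocation)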

Comment 9 Richard W.M. Jones 2014-02-11 22:03:07 UTC
I tried to recreate the /etc directory of the guest from
the sosreport.  However I can't reproduce the Augeas failure.
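
A minimal sketch of the kind of check that can be run against such a recreated tree with Augeas directly, outside the libguestfs appliance; /tmp/guest-root is a hypothetical path holding the /etc files copied from the sosreport:

    # print any parse errors Augeas records for files under the recreated root
    augtool --root /tmp/guest-root print '/augeas//error'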

Since this is likely a data-driven bug, are you able to
get me a disk image of the guest system?

If you can connect to the ESX server (eg. over https using
a browser) then you should be able to download a virtual
file called something like

<name-of-guest>-flat.vmdk

which (despite the extension) is really a raw-format disk image.
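
A minimal sketch of fetching that file over the ESX datastore HTTP interface with curl; the host, datacenter, datastore and path below are placeholders, not values taken from this report:

    # all names in the URL are placeholders; -k skips certificate checks, -u prompts for the password
    curl -k -u root -o nagios-flat.vmdk \
      'https://esx.example.com/folder/nagios/nagios-flat.vmdk?dcPath=ha-datacenter&dsName=datastore1'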

Using the disk image, I would be able to run a command such as:

guestfish -xv --ro -a nagios-flat.vmdk -m /dev/mapper/vg_nagios-lv_root <<EOF
aug-init / 1
EOF

to see if I am able to reproduce the bug.

Comment 10 Richard W.M. Jones 2014-02-11 22:08:02 UTC
For my own reference, installed packages on the virt-v2v conversion host:

libguestfs-1.20.11-2.el6.x86_64
glibc-2.12-1.132.el6.x86_64
augeas-libs-1.0.0-5.el6.x86_64

All are up to date.
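
A one-line sketch for collecting the same version information on another conversion host for comparison:

    rpm -q virt-v2v libguestfs glibc augeas-libs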

Comment 12 Richard W.M. Jones 2014-02-24 17:30:06 UTC
Closing as insufficient data.  However, there still appears to be a bug here.

