Bug 1517893 - [ALT-7.5][blivet][x86_64]CRIT anaconda: Traceback
Summary: [ALT-7.5][blivet][x86_64]CRIT anaconda: Traceback
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: python-blivet
Version: 7.5-Alt
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Blivet Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-27 16:25 UTC by PaulB
Modified: 2018-11-02 02:00 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:


Attachments: none

Description PaulB 2017-11-27 16:25:34 UTC
Description of problem:
Target system listed in comment #1 intermittently fails to install due to:
 CRIT anaconda: Traceback (most recent call last):

Version-Release number of selected component (if applicable):
 distro: RHEL-ALT-7.5-20171115.n.0 Server x86_64
 kernel-alt: 4.14.0-3.el7a
 anaconda: 21.48.22.127-1

How reproducible:
 Intermittent


Steps to Reproduce:
1. Install target system listed in comment #1 with RHEL-ALT-7.5-20171115.n.0

Actual results:
https://beaker.engineering.redhat.com/recipes/4489959
http://beaker-archive.app.eng.bos.redhat.com/beaker-logs/2017/11/21556/2155694/4489959/anaconda.log
---<-snip->---
15:27:06,926 CRIT anaconda: Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/install.py", line 204, in doInstall
    turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall, callbacks=callbacks_reg)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 230, in turnOnFilesystems
    storage.doIt(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 380, in doIt
    self.devicetree.processActions(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 373, in processActions
    action.execute(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 344, in execute
    self.device.destroy()
  File "/usr/lib/python2.7/site-packages/blivet/devices/storage.py", line 525, in destroy
    self._destroy()
  File "/usr/lib/python2.7/site-packages/blivet/devices/partition.py", line 687, in _destroy
    self.disk.originalFormat.commit()
  File "/usr/lib/python2.7/site-packages/blivet/formats/disklabel.py", line 290, in commit
    self.partedDisk.commit()
  File "/usr/lib64/python2.7/site-packages/parted/decorators.py", line 41, in new
    ret = fn(*args, **kwds)
  File "/usr/lib64/python2.7/site-packages/parted/disk.py", line 213, in commit
    return self.__disk.commit()
IOException: Partition(s) 2 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.

15:27:08,376 DEBUG anaconda: Gtk cannot be initialized
15:27:08,377 DEBUG anaconda: In a non-main thread, sending a message with exception data
15:27:08,377 INFO anaconda: Thread Done: AnaInstallThread (140499749877504)
15:27:09,338 DEBUG anaconda: running handleException
15:27:09,338 CRIT anaconda: Traceback (most recent call last):
---<-snip->---

Expected results:
 successful installation of target system


Additional info:
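The IOException above means parted wrote the updated partition table to /dev/sda, but the kernel refused to re-read it because something still holds a partition on the disk open. A minimal diagnostic sketch, assuming a shell on the installer's tty2; device names are taken from the traceback, and availability of fuser in the install image is an assumption:

  lsblk /dev/sda         # what the kernel currently sees on the disk
  cat /proc/mounts       # anything mounted from /dev/sda?
  swapon -s              # any active swap devices
  dmsetup ls             # device-mapper/LVM devices that may hold partitions open
  fuser -vm /dev/sda2    # processes using the partition, if fuser is available
  partprobe /dev/sda     # ask the kernel to re-read the table once it is free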

Comment 3 Jiri Konecny 2017-11-28 09:28:06 UTC
Looks like a storage-related issue.
Changing components.

Comment 6 David Lehman 2018-02-14 18:06:35 UTC
Hmm, this is something I have never seen before. The swap volume is active even though one of the two physical volumes is missing. The simple workaround is to manually remove the stale LVM metadata using vgremove and pvremove, then try again. Moving drives around without taking care to remove the metadata causes lots of problems. Yes, it's more work, but it should be done.
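A minimal sketch of that cleanup, run from a rescue shell before retrying the install. The volume group name and PV device below are placeholders; list the real names with vgs and pvs first, and double-check the device before wiping it:

  swapoff -a                    # deactivate swap so the LV stops holding the partition
  vgchange -an STALE_VG         # deactivate all LVs in the stale volume group (placeholder name)
  vgremove --force STALE_VG     # remove the stale VG metadata
  pvremove --force /dev/sda2    # wipe the PV label from the partition (placeholder device)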

