Bug 1364040 - The same update can be installed multiple times
Summary: The same update can be installed multiple times
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: imgbased
Version: 4.0.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-4.1.1
Target Release: ---
Assignee: Ryan Barry
QA Contact: Huijuan Zhao
URL:
Whiteboard:
Duplicates: 1359050 1372365 (view as bug list)
Depends On: 1427088
Blocks: 1422476
 
Reported: 2016-08-04 11:18 UTC by Huijuan Zhao
Modified: 2017-08-07 06:30 UTC
CC List: 20 users

Fixed In Version: redhat-virtualization-host-4.1-20170308.1
Doc Type: Bug Fix
Doc Text:
Previously, the Red Hat Virtualization Host (RHVH) may have repeatedly prompted for upgrades, even when running the most recent version. This was caused by a placeholder package in the image, which was obsoleted in order to perform the upgrade; however, the package that was used to upgrade was not propagated to the RPM database on the new image. This release adds the missing update package to the RPM database on the new image when upgrading RHVH.
Clone Of:
Clones: 1422476 (view as bug list)
Environment:
Last Closed: 2017-04-20 18:58:21 UTC
oVirt Team: Node


Attachments
screenshot in rhevm side (deleted) - 2016-08-04 11:18 UTC, Huijuan Zhao
All logs in rhvh (deleted) - 2016-08-04 11:19 UTC, Huijuan Zhao
log in rhevm side (deleted) - 2016-08-04 11:20 UTC, Huijuan Zhao


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 67712 master MERGED core: fix `imgbase layer` 2016-12-01 19:56:08 UTC
oVirt gerrit 67713 ovirt-4.1 MERGED core: fix `imgbase layer` 2016-12-01 21:30:15 UTC
oVirt gerrit 67714 ovirt-4.0 MERGED core: fix `imgbase layer` 2016-12-01 21:30:01 UTC
oVirt gerrit 67716 master MERGED update: add image-update to the rpmdb on the new image 2017-01-11 17:41:27 UTC
Red Hat Knowledge Base (Solution) 2969761 None None None 2017-03-15 23:28:45 UTC
Red Hat Product Errata RHEA-2017:1114 normal SHIPPED_LIVE redhat-virtualization-host bug fix and enhancement update 2017-04-20 22:57:46 UTC
oVirt gerrit 70044 ovirt-4.1-pre MERGED update: add image-update to the rpmdb on the new image 2017-01-11 17:41:44 UTC

Description Huijuan Zhao 2016-08-04 11:18:40 UTC
Created attachment 1187438 [details]
screenshot in rhevm side

Description of problem:
After upgrading RHVH to the latest build from the rhevm side, rhevm still shows that an upgrade is available; clicking "Upgrade" then fails.
No upgrade should be shown as available on the rhevm side after upgrading to the latest build.

Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.0-20160803.3
imgbased-0.7.4-0.1.el7ev.noarch
cockpit-0.114-2.el7.x86_64
cockpit-ovirt-dashboard-0.10.6-1.3.4.el7ev.noarch
redhat-virtualization-host-image-update-placeholder-4.0-0.26.el7.noarch


How reproducible:
100%

Steps to Reproduce:
1. Install redhat-virtualization-host-4.0-20160727.1
2. Add RHVH to rhevm
3. Log in to RHVH and set up local repos
4. Log in to rhevm and install redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch.rpm:
   # rpm -ivh redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch.rpm
5. Log in to the rhevm UI, go to the "Hosts" page, and wait 10+ minutes; when an upgrade shows as available, click "Upgrade"
6. Reboot RHVH and log in to the new build redhat-virtualization-host-4.0-20160803.3
7. Log in to the rhevm UI, go to the "Hosts" page, wait 10+ minutes, and check whether an upgrade is shown as available


Actual results:
1. After step 7, an upgrade is still shown as available on the rhevm side; clicking "Upgrade" fails.


Expected results:
1. After step 7, no upgrade should be shown as available, since the latest build is already installed.


Additional info:

Comment 1 Huijuan Zhao 2016-08-04 11:19:45 UTC
Created attachment 1187439 [details]
All logs in rhvh

Comment 2 Huijuan Zhao 2016-08-04 11:20:34 UTC
Created attachment 1187440 [details]
log in rhevm side

Comment 3 Huijuan Zhao 2016-08-04 11:21:59 UTC
Updated test versions:
vdsm-4.18.10-1.el7ev.x86_64
Red Hat Virtualization Manager Version: 4.0.2.3-0.1.el7ev

Comment 4 Fabian Deutsch 2016-08-05 13:15:45 UTC
Martin, do you have an idea on this issue?

Comment 5 Martin Perina 2016-08-05 13:23:13 UTC
Ravi, could you please take a look?

Comment 6 Ravi Nori 2016-08-10 16:53:29 UTC
otopi is detecting that there are packages available for update even when ovirt-node has been previously upgraded and booted to the new version.

1. Installed ovirt-node-ng-installer-ovirt-4.0-2016062412
2. Rhevm detected packages 4.0.2-2 are available for upgrade
3. Invoke upgrade from webadmin, upgrade succeeds and node is rebooted to 4.0.2-2
4. rhevm checks for upgrades and otopi incorrectly reports back to engine that upgrade packages 4.0.2-2 are available

Comment 7 Martin Perina 2016-08-10 18:07:13 UTC
Ravi, if you connect to the host using SSH after the upgrade & restart performed via webadmin, can you detect the upgrade using 'yum check-update'?

Comment 8 Ravi Nori 2016-08-10 18:29:07 UTC
yum check-update does not detect any upgrades

Comment 9 Martin Perina 2016-08-10 18:46:11 UTC
Didi, could you please take a look at why the otopi miniyum implementation detects an update which is not detected by 'yum check-update'?

Comment 10 Yedidyah Bar David 2016-08-11 06:31:16 UTC
Did this ever work?

Is this reproducible upstream? If not, please move to a downstream bug.

(In reply to Ravi Nori from comment #6)
> otopi is detecting that there are packages available for update even when
> ovirt-node has been previously upgraded and booted to the new version.
> 
> 1. Installed ovirt-node-ng-installer-ovirt-4.0-2016062412

This is an upstream package. Is it supposed to be able to be used, and upgraded, with downstream?

> 2. Rhevm detected packages 4.0.2-2 are available for upgrade
> 3. Invoke upgrade from webadmin, upgrade succeeds and node is rebooted to
> 4.0.2-2
> 4. rhevm checks for upgrades and otopi incorrectly reports back to engine
> that upgrade packages 4.0.2-2 are available

Can't find "4.0.2-2" in attached host-deploy log. Didn't check other logs.

Not sure how downstream was designed/supposed to work. In this log:

2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND       **%QEnd: OMGMT_PACKAGES/packages
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:RECEIVE    ovirt-node-ng-image-update

- Meaning, the engine asks the host to check for updates to 'ovirt-node-ng-image-update'

2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum queue package ovirt-node-ng-image-update for install/update
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum processing package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch for install/update
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch queued
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum processing package ovirt-node-ng-image-update-4.0-20160727.1.el7.noarch for install/update
Package ovirt-node-ng-image-update is obsoleted by redhat-virtualization-host-image-update, trying to install redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch instead
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum package ovirt-node-ng-image-update-4.0-20160727.1.el7.noarch queued

- Makes sense to me, but again - not sure how it was designed to work

Also, later on, perhaps unrelated to this bug:

2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Script sink: warning: %post(redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch) scriptlet failed, exit status 1

Please check also this.

I do see the following in downstream git, in redhat-virtualization-host.spec.tmpl (in spin-kickstarts, which was used for the reported packages; it later moved to dist-git, which I didn't check):

Obsoletes:  ovirt-node-ng-image-update-placeholder < %{version}-%{release}
Provides:   ovirt-node-ng-image-update-placeholder = %{version}-%{release}

Obsoletes:  ovirt-node-ng-image-update < %{version}-%{release}
Provides:   ovirt-node-ng-image-update = %{version}-%{release}
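
As a side note, a quick way to check which package actually satisfies that virtual provide (a sketch; repoquery comes from yum-utils and may not be installed on RHVH):

# Which installed package provides it:
rpm -q --whatprovides ovirt-node-ng-image-update
# Which package in the enabled repos provides it:
repoquery --whatprovides ovirt-node-ng-image-update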

So, did you indeed try to upgrade upstream to downstream? Is it supposed to work?

Comment 11 Martin Perina 2016-08-11 07:15:41 UTC
Didi, on upstream we check for upgrades of ovirt-node-ng-image-update, which is the standard package name. On downstream we check for the same package name, but it's only provided (via RPM Provides) by the redhat-virtualization-host-image-update packages. More info can be found at https://bugzilla.redhat.com/show_bug.cgi?id=1360677#c12

So the question is why the two flows differ:

1. Command line - works fine
    yum check-update -> reports update available
    yum update       -> performs this update
    reboot
    yum check-update -> no more updates available

2. webadmin - doesn't work, reports update is available although it's installed
    Check for upgrades -> reports update available
    Upgrade host       -> performs update and reboot host
    Check for upgrades -> detects the same upgrade we have just installed

Comment 12 Yedidyah Bar David 2016-08-16 12:51:04 UTC
Seems like the reason is:

2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Script sink: warning: %post(redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch) scriptlet failed, exit status 1

Later on:

2016-08-04 06:07:20 ERROR otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum Non-fatal POSTIN scriptlet failure in rpm package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum erase: 2/2: redhat-virtualization-host-image-update-placeholder
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-placeholder-4.0-0.26.el7.noarch
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Verify: 1/2: redhat-virtualization-host-image-update.noarch 0:4.0-20160803.3.el7_2 - u
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Verify: 2/2: redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-0.26.el7 - od
2016-08-04 06:07:21 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Transaction processed
2016-08-04 06:07:21 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-mYTS8ESPdc/pythonlib/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/tmp/ovirt-mYTS8ESPdc/otopi-plugins/otopi/packagers/yumpackager.py", line 261, in _packages
    self._miniyum.processTransaction()
  File "/tmp/ovirt-mYTS8ESPdc/pythonlib/otopi/miniyum.py", line 1049, in processTransaction
    _('One or more elements within Yum transaction failed')
RuntimeError: One or more elements within Yum transaction failed
2016-08-04 06:07:21 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Package installation': One or more elements within Yum transaction failed
2016-08-04 06:07:21 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2016-08-04 06:07:21 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback

So bottom line, the transaction was rolled back.

Comment 13 Yamakasi 2016-08-22 21:09:59 UTC
Didi, I need to check the logs, but what I see is that installing the packages from the GUI goes well; afterwards yum update no longer shows any update.

It would be nice if we could just do a full system upgrade, i.e. a yum update, from the GUI. This would save logging in to the server itself.

Also a "reboot" button would be nice then.

Comment 14 Douglas Schilling Landgraf 2016-09-06 20:35:07 UTC
*** Bug 1372365 has been marked as a duplicate of this bug. ***

Comment 15 Douglas Schilling Landgraf 2016-09-07 03:45:18 UTC
Hi,

Added a validation based on the NVR datetime to downstream. The next build for 4.0.4 should resolve this report. Moving to POST.

commit 2dada2104241d315c217adc6a12f4a17bdff056c
Author: Douglas Schilling Landgraf <dougsland@redhat.com>
Date:   Tue Sep 6 22:51:18 2016 -0400

    Use timestamp for redhat-virtualization-host-image-update-placeholder
    
    Without the timestamp check, the package will always upgrade as
    there is no real comparison via NVR.

For the record:
My test was: scratch-build redhat-release-virtualization-host with the above change, create a yum repo with the rpms, and build redhat-virtualization-host with this repo added.

Test 1:
- Installed the generated squashfs 
- Added the repo into /etc/yum.repos.d/local.repo
- # yum update
  No updates available since I am on the latest available build. [OK]

Test 2:
- Increased the date, regenerated the rpms, and added them to the repo

# rpm -qa | grep -i update
redhat-virtualization-host-image-update-placeholder-4.0-20160906.el7.noarch

# yum update
Loaded plugins: imgbased-warning, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Warning: yum operations are not persisted across upgrades!
Resolving Dependencies
--> Running transaction check
---> Package redhat-release-virtualization-host.x86_64 0:4.0-3.el7 will be updated
---> Package redhat-release-virtualization-host.x86_64 0:4.0-4.el7 will be an update
---> Package redhat-release-virtualization-host-content.x86_64 0:4.0-3.el7 will be updated
---> Package redhat-release-virtualization-host-content.x86_64 0:4.0-4.el7 will be an update
---> Package redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-20160906.el7 will be updated
---> Package redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-20160907.el7 will be an update
--> Finished Dependency Resolution
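
The version ordering that makes this work can be sanity-checked directly (a sketch; rpmdev-vercmp is from rpmdevtools and is not part of the fix):

# With the build date embedded in the release, rpm's NVR comparison
# orders the placeholders correctly; this should report that
# 4.0-20160906.el7 is older than 4.0-20160907.el7:
rpmdev-vercmp 4.0-20160906.el7 4.0-20160907.el7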

Comment 16 Fabian Deutsch 2016-09-22 08:13:05 UTC
The proposed solution works, but has a negative impact on the build process.

This bug got moved out to find a more suitable solution.

Comment 17 Fabian Deutsch 2016-10-20 12:42:29 UTC
A new design idea: give imgbased a hint about which rpm to inject into the new image's rpmdb using --justdb.
In the osupdater part we can then detect in the update flow that a hint was given, and look at the filesystem and/or rpmdb of the previous image to find the file (i.e. first look at the rpmdb to find the rpm name, then look at the filesystem to find the file).
In osupdater we already have access to the previous LV, so this should be easy.

Once we have the file on the previous LV, it should be easy to rpm -i --justdb it on the new image.
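
A minimal sketch of that flow in shell (the mount points and yum cache path are illustrative; the real logic would live in imgbased's osupdater):

# 1. Confirm the image-update package is in the previous image's rpmdb:
rpm --root /mnt/previous -q redhat-virtualization-host-image-update
# 2. Locate the corresponding .rpm file on the previous filesystem,
#    e.g. in the yum cache:
RPMFILE=$(find /mnt/previous/var/cache/yum \
    -name 'redhat-virtualization-host-image-update-*.rpm' | head -n1)
# 3. Register it in the new image's rpmdb without touching its files:
rpm -i --justdb --root /mnt/new "$RPMFILE"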

Comment 18 Ryan Barry 2016-10-20 22:49:42 UTC
(In reply to Fabian Deutsch from comment #17)
> A new design idea: Give a hint to imgbased which rpm to inject into the new
> image rpmdb using justdb.
> In the osupdater part we can then detect in the update flow, that a hint was
> given, and can look at the filesystem and/or rpmdb of the previous image, to
> find the file. (I.e. first look at rpmdb to find rpmname, then look at
> filesystem to find the file).
> In osupdater we already have access to the previous LV, this should make it
> easy.
> 
> Once we have the file on the previous LV, it should be easy to rpm -i
> --justdb it on the new image.

This is difficult, because RPM is not recursive. We'd need to have a service which ran after the RPM transaction finished (such as on first boot) in order to do this.
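
For illustration, such a first-boot unit might look roughly like this (the unit name, script path, and stamp file are made up):

cat > /etc/systemd/system/imgbased-persist-rpm.service <<'EOF'
[Unit]
Description=Register the image-update rpm in the rpmdb on first boot
# Run only once, on the first boot of the new image:
ConditionPathExists=!/var/lib/imgbased/persist-rpm.done
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/libexec/imgbased-persist-rpm
ExecStartPost=/usr/bin/touch /var/lib/imgbased/persist-rpm.done

[Install]
WantedBy=multi-user.target
EOF
systemctl enable imgbased-persist-rpm.service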

Also, if the RPM was removed from the yum cache (or from local storage), this would fail.

I'm not sure about this solution. I'll do some thinking.

Comment 19 Ryan Barry 2016-10-21 00:21:12 UTC
I checked, and we *do* have rpmbuild available.

Since RPM is not recursive (it's not possible to "rpm -i --justdb" from a %post script, I don't think -- you definitely can't "rpm -i" without --justdb), the best solution may be to construct a very trivial RPM specfile on boot if the running version is not in rpmdb, then install that...
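
Roughly something like this, as a sketch only (the name and version would be derived from the running image; nothing here is the actual implementation):

# Run only if the running image's NVR is missing from the rpmdb.
VER=4.0 REL=20160803.3.el7_2
cat > /tmp/image-update.spec <<EOF
Name:      redhat-virtualization-host-image-update
Version:   $VER
Release:   $REL
Summary:   Records the installed image in the rpmdb
License:   GPLv2
BuildArch: noarch
%description
Dummy package so the running image version is visible to yum/otopi.
%files
EOF
rpmbuild -bb --define "_rpmdir /tmp" /tmp/image-update.spec
rpm -i --justdb /tmp/noarch/redhat-virtualization-host-image-update-$VER-$REL.noarch.rpm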

Thoughts?

Comment 20 Douglas Schilling Landgraf 2016-10-24 14:41:48 UTC
*** Bug 1359050 has been marked as a duplicate of this bug. ***

Comment 22 Sandro Bonazzola 2016-12-12 10:46:48 UTC
I see a referenced patch still not merged on master, shouldn't this be on POST?

Comment 26 Huijuan Zhao 2017-02-28 09:12:30 UTC
Encountered Bug 1427088, so changing the status to ASSIGNED.

Test version:
RHVH:
From redhat-virtualization-host-4.1-20170202.0
To   redhat-virtualization-host-4.1-20170222.0

RHVM:
Red Hat Virtualization Manager Version: 4.1.1.2-0.1.el7

Test steps:
1. Install redhat-virtualization-host-4.1-20170202.0
2. Log in to RHVH and set up local repos
3. Add RHVH to RHVM 4.1
4. Log in to the RHVM UI, go to the "Hosts" page, and click "Check for Upgrade". Upgrade is available; click "Upgrade"
5. Reboot RHVH and watch the boot entries

Test results:
1. After step 5, the boot entry for the new build is missing, so the new build cannot be accessed (Bug 1427088)

Comment 27 Sandro Bonazzola 2017-02-28 09:41:40 UTC
Bug 1427088 has been fixed and is in MODIFIED. Moving this one to MODIFIED as well, pending a new build.

Comment 29 Huijuan Zhao 2017-03-14 06:48:14 UTC
Test version:
RHVH:
From redhat-virtualization-host-4.1-20170202.0
To   redhat-virtualization-host-4.1-20170308.1

RHVM:
Red Hat Virtualization Manager Version: 4.1.1.4-0.1.el7

Test steps:
1. Install redhat-virtualization-host-4.1-20170202.0
2. Log in to RHVH and set up local repos
3. Add RHVH to RHVM 4.1
4. Log in to the RHVM UI, go to the "Hosts" page, and click "Check for Upgrade". Upgrade is available; click "Upgrade"
5. Reboot RHVH and log in to the new build redhat-virtualization-host-4.1-20170308.1
6. Log in to the RHVM UI, go to the "Hosts" page, and after the host is up, click "Check for Upgrade"
7. On the host side, run
   # yum update
   # yum clean all
   # yum update

Test results:
1. After step 6, no upgrade is available.
2. After step 7, no upgrade is available; yum reports "No packages marked for update".

The same results were seen when upgrading from redhat-virtualization-host-4.0-20161116.1 to redhat-virtualization-host-4.1-20170308.1.

So this bug is fixed in redhat-virtualization-host-4.1-20170308.1; changing the status to VERIFIED.

Comment 30 errata-xmlrpc 2017-04-20 18:58:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1114

