Bug 1354492 - Engine-setup fails to upgrade when it fails to stop dwh
Summary: Engine-setup fails to upgrade when it fails to stop dwh
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: ovirt-engine-dwh
Classification: oVirt
Component: Setup
Version: 4.0.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.0.2
Target Release: ---
Assignee: Yedidyah Bar David
QA Contact: Lukas Svaty
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-11 12:28 UTC by Nikolai Sednev
Modified: 2017-05-11 11:13 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-24 08:56:34 UTC
oVirt Team: Integration
ylavi: ovirt-4.0.z?
rule-engine: planning_ack?
rule-engine: devel_ack?
lsvaty: testing_ack+


Attachments (Terms of Use)
sosreport from engine (deleted)
2016-07-11 12:34 UTC, Nikolai Sednev
no flags Details
sosreport from host (deleted)
2016-07-11 12:36 UTC, Nikolai Sednev
no flags Details
upgrade print-screen (deleted)
2016-07-24 05:17 UTC, Nikolai Sednev
no flags Details

Description Nikolai Sednev 2016-07-11 12:28:57 UTC
Description of problem:
# engine-setup 
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-wsp.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
          Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20160711080853-9yblut.log
          Version: otopi-1.5.0 (otopi-1.5.0-3.el7ev)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization
         
          Welcome to the RHEV 4.0 setup/upgrade.
          Please read the RHEV 4.0 install guide
          https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/4.0/html/Installation_Guide/index.html.
          Please refer to the RHEV Upgrade Helper application
          https://access.redhat.com/labs/rhevupgradehelper/ which will guide you in the upgrading process.
          Would you like to proceed? (Yes, No) [Yes]: 
         
          --== PRODUCT OPTIONS ==--
         
         
          --== PACKAGES ==--
         
[ INFO  ] Checking for product updates...
          Setup has found updates for some packages:
          PACKAGE: [updated] ovirt-engine-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-backend-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-backend-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-dashboard-1.0.0-20160615git43298a4.el7ev.x86_64
          PACKAGE: [update] ovirt-engine-dashboard-1.0.0-20160623git48c5592.el7ev.x86_64
          PACKAGE: [updated] ovirt-engine-dbscripts-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-dbscripts-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-restapi-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-restapi-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-tools-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-tools-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-tools-backup-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-tools-backup-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-userportal-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-userportal-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] ovirt-engine-webadmin-portal-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] ovirt-engine-webadmin-portal-4.0.2-0.2.rc1.el7ev.noarch
          PACKAGE: [updated] rhevm-4.0.0.6-0.1.el7ev.noarch
          PACKAGE: [update] rhevm-4.0.2-0.2.rc1.el7ev.noarch
          do you wish to update them now? (Yes, No) [Yes]: 
[ INFO  ] Checking for an update for Setup...
          Setup will not be able to rollback new packages in case of a failure, because the following installed packages were not found in enabled repositories:
         
          ovirt-engine-userportal-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-tools-backup-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-webadmin-portal-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-dashboard-1.0.0-20160615git43298a4.el7ev.x86_64
          ovirt-engine-restapi-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-dbscripts-4.0.0.6-0.1.el7ev.noarch
          rhevm-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-backend-4.0.0.6-0.1.el7ev.noarch
          ovirt-engine-tools-4.0.0.6-0.1.el7ev.noarch
          Do you want to abort Setup? (Yes, No) [Yes]: no
         
          --== NETWORK CONFIGURATION ==--
         
         
          --== DATABASE CONFIGURATION ==--
         
          The detected DWH database size is 21 MB.
          Setup can backup the existing database. The time and space required for the database backup depend on its size. This process takes time, and in some cases (for instance, when the size is few GBs) may take several hours to complete.
          If you choose to not back up the database, and Setup later fails for some reason, it will not be able to restore the database and all DWH data will be lost.
          Would you like to backup the existing database before upgrading it? (Yes, No) [Yes]: 
         
          --== OVIRT ENGINE CONFIGURATION ==--
         
         
          --== STORAGE CONFIGURATION ==--
         
         
          --== PKI CONFIGURATION ==--
         
         
          --== APACHE CONFIGURATION ==--
         
         
          --== SYSTEM CONFIGURATION ==--
         
         
          --== MISC CONFIGURATION ==--
         
         
          --== END OF CONFIGURATION ==--
         
[ INFO  ] Stage: Setup validation
          During execution engine service will be stopped (OK, Cancel) [OK]: 
[WARNING] Less than 16384MB of memory is available
[ INFO  ] Cleaning stale zombie tasks and commands
         
          --== CONFIGURATION PREVIEW ==--
         
          Default SAN wipe after delete           : False
          Firewall manager                        : firewalld
          Update Firewall                         : False
          Host FQDN                               : nsednev-he-2.qa.lab.tlv.redhat.com
          Require packages rollback               : False
          Upgrade packages                        : True
          Engine database secured connection      : False
          Engine database host                    : localhost
          Engine database user name               : engine
          Engine database name                    : engine
          Engine database port                    : 5432
          Engine database host name validation    : False
          DWH database secured connection         : False
          DWH database host                       : localhost
          DWH database user name                  : ovirt_engine_history
          DWH database name                       : ovirt_engine_history
          DWH database port                       : 5432
          DWH database host name validation       : False
          Engine installation                     : True
          PKI organization                        : qa.lab.tlv.redhat.com
          DWH installation                        : True
          Backup DWH database                     : True
          Engine Host FQDN                        : nsednev-he-2.qa.lab.tlv.redhat.com
          Configure VMConsole Proxy               : True
          Configure WebSocket Proxy               : True
         
          Please confirm installation settings (OK, Cancel) [OK]: 
[ INFO  ] Cleaning async tasks and compensations
[ INFO  ] Unlocking existing entities
[ INFO  ] Checking the Engine database consistency
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping ovirt-fence-kdump-listener service
[ INFO  ] Stopping dwh service
[ INFO  ] Stopping websocket-proxy service
[ ERROR ] dwhd is currently running. Its hostname is nsednev-he-2.qa.lab.tlv.redhat.com. Please stop it before running Setup.
[ ERROR ] Failed to execute stage 'Transaction setup': dwhd is currently running
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20160711080853-9yblut.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20160711080935-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed


Version-Release number of selected component (if applicable):
Engine:
rhev-release-4.0.1-1-001.noarch
rhevm-spice-client-x86-msi-4.0-2.el7ev.noarch
rhevm-spice-client-x64-msi-4.0-2.el7ev.noarch
rhevm-setup-plugins-4.0.0.1-1.el7ev.noarch
rhevm-guest-agent-common-1.0.12-2.el7ev.noarch
rhevm-dependencies-4.0.0-1.el7ev.noarch
rhevm-branding-rhev-4.0.0-2.el7ev.noarch
rhevm-doc-4.0.0-2.el7ev.noarch
rhev-guest-tools-iso-4.0-2.el7ev.noarch
rhevm-4.0.0.6-0.1.el7ev.noarch
Linux version 3.10.0-327.18.2.el7.x86_64 (mockbuild@x86-020.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Fri Apr 8 05:09:53 EDT 2016
Linux 3.10.0-327.18.2.el7.x86_64 #1 SMP Fri Apr 8 05:09:53 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.3 Beta (Maipo)

Host:
ovirt-vmconsole-host-1.0.3-1.el7ev.noarch
ovirt-hosted-engine-ha-2.0.0-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.5.x86_64
ovirt-host-deploy-1.5.0-1.el7ev.noarch
ovirt-hosted-engine-setup-2.0.0.2-1.el7ev.noarch
ovirt-setup-lib-1.0.2-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.18.x86_64
mom-0.5.5-1.el7ev.noarch
ovirt-vmconsole-1.0.3-1.el7ev.noarch
ovirt-imageio-common-0.3.0-0.el7ev.noarch
vdsm-4.18.5.1-1.el7ev.x86_64
rhevm-appliance-20160623.0-1.el7ev.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el7ev.noarch
rhev-release-4.0.1-1-001.noarch
sanlock-3.2.4-2.el7_2.x86_64
ovirt-imageio-daemon-0.3.0-0.el7ev.noarch
Linux version 3.10.0-327.28.2.el7.x86_64 (mockbuild@x86-017.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Mon Jun 27 14:48:28 EDT 2016
Linux 3.10.0-327.28.2.el7.x86_64 #1 SMP Mon Jun 27 14:48:28 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.2 (Maipo)


How reproducible:
100%

Steps to Reproduce:
1. Deploy hosted-engine on one host over NFS, using rhevm-appliance-20160623.0-1.
2. Once deployed, add the required latest repositories to the engine and put the host into global maintenance.
3. Yum-update the engine and run engine-setup to upgrade the components on the engine (see the command sketch after these steps).
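For reference, the commands behind steps 2-3 boil down to the following (a sketch; the exact repository configuration is environment-specific and omitted here):

# yum update ovirt-engine
# yum update -y
# engine-setup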

Actual results:
update fails

Expected results:
update should pass

Additional info:
Sosreports from the engine and the host are attached.

Comment 1 Nikolai Sednev 2016-07-11 12:30:16 UTC
It's not a hosted-engine-specific issue; it will happen on a regular engine too.

Comment 2 Nikolai Sednev 2016-07-11 12:31:33 UTC
It looks like the same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1167801.

Comment 3 Nikolai Sednev 2016-07-11 12:34:52 UTC
Created attachment 1178389 [details]
sosreport from engine

Comment 4 Nikolai Sednev 2016-07-11 12:36:31 UTC
Created attachment 1178390 [details]
sosreport from host

Comment 5 Oved Ourfali 2016-07-12 05:01:48 UTC
Setup should be assigned to integration.

Comment 6 Yedidyah Bar David 2016-07-13 12:37:28 UTC
From attached engine sos report:

var/log/messages has:

Jul 11 08:06:25 nsednev-he-2 systemd: Stopping PostgreSQL database server...
Jul 11 08:06:26 nsednev-he-2 systemd: Starting PostgreSQL database server...
Jul 11 08:06:27 nsednev-he-2 systemd: Started PostgreSQL database server.

ovirt-engine-dwhd.log has:

2016-07-11 07:41:01|ETL Service Started
...
2016-07-11 08:07:00|qodVZ4|609d6U|RJNOqY|OVIRT_ENGINE_DWH|OsEnumUpdate|Default|6|Java Exception|tJDBCInput_4|org.postgresql.util.PSQLException:FATAL: terminating connection due to administrator command|1
...
2016-07-11 08:07:00|RJNOqY|609d6U|QYhGcV|OVIRT_ENGINE_DWH|SampleRunJobs|Default|6|Java Exception|tRunJob_4|java.lang.RuntimeException:Child job running failed|1
...
2016-07-11 08:07:00|QYhGcV|609d6U|ARts2Y|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|6|Java Exception|tRunJob_1|java.lang.RuntimeException:Child job running failed|1
...
2016-07-11 08:08:00|JIktCA|609d6U|PypLmI|OVIRT_ENGINE_DWH|ConfigurationSync|Default|6|Java Exception|tJDBCOutput_9|org.postgresql.util.PSQLException:FATAL: terminating connection due to administrator command|1
...
2016-07-11 08:08:00|PypLmI|609d6U|vqYgAs|OVIRT_ENGINE_DWH|SampleRunJobs|Default|6|Java Exception|tRunJob_1|java.lang.RuntimeException:Child job running failed|1
...
2016-07-11 08:08:05|vqYgAs|609d6U|ARts2Y|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|6|Java Exception|tRunJob_1|java.lang.RuntimeException:Child job running failed|1
...
2016-07-11 08:09:34|609d6U|609d6U|609d6U|OVIRT_ENGINE_DWH|HistoryETL|Default|6|Java Exception|tJDBCRollback_3|org.postgresql.util.PSQLException:FATAL: terminating connection due to administrator command|1

The last line is from when engine-setup stopped dwhd, 3 minutes after postgresql was restarted. For some reason dwhd did not manage to reconnect, which it should have done. See also bug 1286441 and bug 1297682.
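A quick way to confirm whether dwhd actually survived such a postgresql restart is to check the service and its journal directly (a sketch, assuming the standard ovirt-engine-dwhd service name):

# systemctl status ovirt-engine-dwhd
# journalctl -u ovirt-engine-dwhd --since "2016-07-11 08:00"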

Shirly - please have a look. Thanks.

Comment 7 Shirly Radco 2016-07-14 10:54:05 UTC
What is the value of "DwhCurrentlyRunning" in the dwh_history_timekeeping table?

Comment 8 Yedidyah Bar David 2016-07-14 11:45:58 UTC
(In reply to Shirly Radco from comment #7)
> What is the value of "DwhCurrentlyRunning" in dwh_history_timekeeping table ?

You can see that in the setup log, right before failing:

2016-07-11 08:09:35 DEBUG otopi.ovirt_engine_setup.engine_common.database database.execute:172 Database: 'None', Statement: '
            select * from GetDwhHistoryTimekeepingByVarName(
                %(name)s
            )
        ', args: {'name': 'DwhCurrentlyRunning'}
2016-07-11 08:09:35 DEBUG otopi.ovirt_engine_setup.engine_common.database database.execute:177 Creating own connection
2016-07-11 08:09:35 DEBUG otopi.ovirt_engine_setup.engine_common.database database.execute:222 Result: [{'var_value': '1', 'var_datetime': None, 'var_name': 'DwhCurrentlyRunning'}]
2016-07-11 08:09:35 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine_dwh.core.single_etl single_etl._transactionBegin:136 dwhd is currently running.

So it's '1'.
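For reference, the same check can be run manually, using the stored procedure the setup code calls (a sketch, assuming the table lives in the engine database and local psql access as the postgres user):

# su - postgres -c "psql engine"
engine=# select * from GetDwhHistoryTimekeepingByVarName('DwhCurrentlyRunning');

A var_value of '1' is what makes Setup abort with "dwhd is currently running".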

Comment 9 Shirly Radco 2016-07-21 08:53:59 UTC
When this happened, did you try to stop dwh manually? Was it still up after setup failed?

Comment 10 Nikolai Sednev 2016-07-21 09:10:56 UTC
(In reply to Shirly Radco from comment #9)
> When this happened did you try to stop dwh manually? Was it still up after
> setup failed?

This was the normal operation sequence; I did not try anything manually, neither stopping the process nor checking whether it was still running afterwards.

Comment 11 Shirly Radco 2016-07-21 11:30:46 UTC
I need to know whether dwh was up or down after the setup failed.
If it was down, there might be a bug in the update of the engine db, as mentioned in the referenced bugs in comment #6.
If it was still running, there was a problem stopping the service and it should be stopped manually.

Can this be tested?
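If the service does turn out to be still up when Setup fails, the manual workaround should simply be (a sketch):

# systemctl stop ovirt-engine-dwhd
# engine-setup

Setup is expected to start the services it stopped once it completes successfully.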

Comment 12 Nikolai Sednev 2016-07-21 12:55:38 UTC
(In reply to Shirly Radco from comment #11)
> I need to know if dwh was up or down after the setup faild.
> If it was down, it means that there might a bug in the update of the engine
> db, as mentioned in the referenced bugs in comment #6.
> If it was still running it means that there was a problem in stopping the
> service and it should be stopped manually.
> 
> Can this be tested?

I tried to run engine-setup several times with the same result, so the service was probably still running. I can't provide any more details right now, as I no longer have the setup. Please try reproducing in your environment if possible.

Comment 13 Nikolai Sednev 2016-07-24 05:17:00 UTC
I tried to reproduce this again on a cleanly deployed 4.0.1 environment that was then upgraded to 4.0.2. The environment consists of two RHEV-H 4.0 hosts with hosted-engine running on top of them; the reproduction failed.

DWH was running before the upgrade and was still running after the upgrade finished. See the attached upgrade print-screen for more details.

Worked for me on these components:
Engine:
ovirt-engine-websocket-proxy-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-extensions-api-impl-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-userportal-4.0.2-0.1.rc.el7ev.noarch
ovirt-iso-uploader-4.0.0-1.el7ev.noarch
ovirt-engine-dbscripts-4.0.2-0.1.rc.el7ev.noarch
ovirt-vmconsole-proxy-1.0.4-1.el7ev.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-webadmin-portal-debuginfo-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-tools-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-webadmin-portal-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-cli-3.6.7.0-1.el7ev.noarch
ovirt-vmconsole-1.0.4-1.el7ev.noarch
ovirt-setup-lib-1.0.2-1.el7ev.noarch
ovirt-engine-setup-base-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-setup-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-tools-backup-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-dashboard-1.0.1-0.el7ev.x86_64
ovirt-engine-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el7ev.noarch
ovirt-log-collector-4.0.0-1.el7ev.noarch
ovirt-engine-lib-4.0.2-0.1.rc.el7ev.noarch
ovirt-host-deploy-java-1.5.1-1.el7ev.noarch
ovirt-engine-dwh-setup-4.0.1-1.el7ev.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-userportal-debuginfo-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-backend-4.0.2-0.1.rc.el7ev.noarch
ovirt-engine-dwh-4.0.1-1.el7ev.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.2-0.1.rc.el7ev.noarch
python-ovirt-engine-sdk4-4.0.0-0.5.a5.el7ev.x86_64
ovirt-engine-restapi-4.0.2-0.1.rc.el7ev.noarch
ovirt-image-uploader-4.0.0-1.el7ev.noarch
ovirt-host-deploy-1.5.1-1.el7ev.noarch
ovirt-engine-extension-aaa-jdbc-1.1.0-1.el7ev.noarch
rhev-guest-tools-iso-4.0-4.el7ev.noarch
rhevm-4.0.2-0.1.rc.el7ev.noarch
rhev-release-4.0.2-1-001.noarch
rhevm-doc-4.0.0-3.el7ev.noarch
rhevm-spice-client-x86-msi-4.0-2.el7ev.noarch
rhevm-branding-rhev-4.0.0-3.el7ev.noarch
rhevm-spice-client-x64-msi-4.0-2.el7ev.noarch
rhevm-guest-agent-common-1.0.12-2.el7ev.noarch
rhevm-dependencies-4.0.0-1.el7ev.noarch
rhevm-setup-plugins-4.0.0.1-1.el7ev.noarch
rhev-release-4.0.1-2-001.noarch
Linux version 3.10.0-327.22.2.el7.x86_64 (mockbuild@x86-030.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu Jun 9 10:09:10 EDT 2016
Linux 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 9 10:09:10 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.2 (Maipo)

Hosts:
sanlock-3.2.4-2.el7_2.x86_64
ovirt-hosted-engine-ha-2.0.1-1.el7ev.noarch
ovirt-imageio-daemon-0.3.0-0.el7ev.noarch
ovirt-host-deploy-1.5.1-1.el7ev.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.16.x86_64
mom-0.5.5-1.el7ev.noarch
ovirt-setup-lib-1.0.2-1.el7ev.noarch
ovirt-vmconsole-host-1.0.4-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.5.x86_64
vdsm-4.18.6-1.el7ev.x86_64
ovirt-hosted-engine-setup-2.0.1-1.el7ev.noarch
ovirt-imageio-common-0.3.0-0.el7ev.noarch
ovirt-vmconsole-1.0.4-1.el7ev.noarch
Linux version 3.10.0-327.22.2.el7.x86_64 (mockbuild@x86-030.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu Jun 9 10:09:10 EDT 2016
Linux 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 9 10:09:10 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 7.2

Comment 14 Nikolai Sednev 2016-07-24 05:17:39 UTC
Created attachment 1183302 [details]
upgrade print-screen

Comment 15 Shirly Radco 2016-07-24 06:07:52 UTC
So can we close this?

Comment 16 Nikolai Sednev 2016-07-24 06:42:56 UTC
(In reply to Shirly Radco from comment #15)
> So can we close this?

I think we may, as long as there is no objection from your side. If logs and sosreports don't reveal any additional information which could be of use, then I'm OK with closing this bug as works for me.

Comment 17 Yedidyah Bar David 2016-07-24 09:03:43 UTC
(In reply to Nikolai Sednev from comment #16)
> (In reply to Shirly Radco from comment #15)
> > So can we close this?
> 
> I think we may, as long as there is no objection from your side. If logs and
> sosreports don't reveal any additional information which could be of use,
> then I'm OK with closing this bug as works for me.

Did you restart postgresql prior to the upgrade? AFAIU this was the only thing breaking us; I might be wrong, of course. You then implicitly restarted it by running 'yum update', which happened to update postgresql too and thus restart it.
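One way to verify this after the fact (a sketch; paths are the RHEL 7 defaults) is to check whether yum touched postgresql during the update and when the service was last restarted:

# grep -i postgresql /var/log/yum.log
# systemctl status postgresql

The "Active: ... since" timestamp from systemctl should line up with the restart seen in var/log/messages in comment 6.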

Comment 18 Nikolai Sednev 2016-08-01 11:28:31 UTC
(In reply to Yedidyah Bar David from comment #17)
> (In reply to Nikolai Sednev from comment #16)
> > (In reply to Shirly Radco from comment #15)
> > > So can we close this?
> > 
> > I think we may, as long as there is no objection from your side. If logs and
> > sosreports don't reveal any additional information which could be of use,
> > then I'm OK with closing this bug as works for me.
> 
> Did you restart postgresql prior to upgrade? AFAIU this was the only thing
> breaking us, might be wrong of course. Then, you implicitly restated it by
> running 'yum update', which happened to update it too, thus restart it.

I did not restart postgresql; in fact, I did not do anything except:
1) yum update ovirt-engine
2) yum update -y
3) engine-setup

Comment 19 Yedidyah Bar David 2016-08-01 11:46:52 UTC
(In reply to Nikolai Sednev from comment #18)
> (In reply to Yedidyah Bar David from comment #17)
> > (In reply to Nikolai Sednev from comment #16)
> > > (In reply to Shirly Radco from comment #15)
> > > > So can we close this?
> > > 
> > > I think we may, as long as there is no objection from your side. If logs and
> > > sosreports don't reveal any additional information which could be of use,
> > > then I'm OK with closing this bug as works for me.
> > 
> > Did you restart postgresql prior to upgrade? AFAIU this was the only thing
> > breaking us, might be wrong of course. Then, you implicitly restated it by
> > running 'yum update', which happened to update it too, thus restart it.
> 
> I did not restarted the postgresql, in fact I did not anything except for:
> 1)yum update ovirt-engine
> 2)yum update -y

This one upgraded postgresql and thus restarted it; check your attached engine sosreport.

> 3)engine-setup

Comment 20 Yedidyah Bar David 2016-08-01 11:47:51 UTC
(In reply to Yedidyah Bar David from comment #19)
> (In reply to Nikolai Sednev from comment #18)
> > (In reply to Yedidyah Bar David from comment #17)
> > > (In reply to Nikolai Sednev from comment #16)
> > > > (In reply to Shirly Radco from comment #15)
> > > > > So can we close this?
> > > > 
> > > > I think we may, as long as there is no objection from your side. If logs and
> > > > sosreports don't reveal any additional information which could be of use,
> > > > then I'm OK with closing this bug as works for me.
> > > 
> > > Did you restart postgresql prior to upgrade? AFAIU this was the only thing
> > > breaking us, might be wrong of course. Then, you implicitly restated it by
> > > running 'yum update', which happened to update it too, thus restart it.
> > 
> > I did not restarted the postgresql, in fact I did not anything except for:
> > 1)yum update ovirt-engine
> > 2)yum update -y
> 
> This one upgraded and restarted postgresql, check your attached engine
> sosreport.

That said, I ran some tests of my own and could not reproduce it either. It was either some specific problem with postgresql, or high load, or something like that.

Comment 21 Nikolai Sednev 2016-08-01 13:03:16 UTC
(In reply to Yedidyah Bar David from comment #20)
> (In reply to Yedidyah Bar David from comment #19)
> > (In reply to Nikolai Sednev from comment #18)
> > > (In reply to Yedidyah Bar David from comment #17)
> > > > (In reply to Nikolai Sednev from comment #16)
> > > > > (In reply to Shirly Radco from comment #15)
> > > > > > So can we close this?
> > > > > 
> > > > > I think we may, as long as there is no objection from your side. If logs and
> > > > > sosreports don't reveal any additional information which could be of use,
> > > > > then I'm OK with closing this bug as works for me.
> > > > 
> > > > Did you restart postgresql prior to upgrade? AFAIU this was the only thing
> > > > breaking us, might be wrong of course. Then, you implicitly restated it by
> > > > running 'yum update', which happened to update it too, thus restart it.
> > > 
> > > I did not restarted the postgresql, in fact I did not anything except for:
> > > 1)yum update ovirt-engine
> > > 2)yum update -y
> > 
> > This one upgraded and restarted postgresql, check your attached engine
> > sosreport.
> 
> That said, I did some tests of my own and could not reproduce either. Either
> it was some specific problem with postgresql or some high load or something
> like that.

I agree. I have upgraded around 5 times since then and have not hit the same issue.

