Bug 235385 - Remove extra entries in rhnServerDmi and in rhnRam
Summary: Remove extra entries in rhnServerDmi and in rhnRam
Alias: None
Product: Red Hat Network
Classification: Red Hat
Component: RHN/Web Site
Version: rhn500
Hardware: All
OS: Linux
Target Milestone: ---
Assignee: John Sanda
QA Contact: Red Hat Satellite QA List
Duplicates: 579491 (view as bug list)
Depends On:
Reported: 2007-04-05 14:56 UTC by John Sanda
Modified: 2010-07-21 20:01 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2007-12-14 19:22:02 UTC
Target Upstream Version:

Attachments

Description John Sanda 2007-04-05 14:56:06 UTC
Description of problem:
This bug is really an extension of bug 235108. With that bug we found that there
was an incorrect association mapping between the Server and CPU classes. Further
investigation revealed similar problems with the association mappings between
Server and Ram and between Server and Dmi. The issue manifests itself in the HQL
query Server.findByIdAndOrgId, which is defined in Server_legacyUser.hbm.xml.
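For illustration, the failure mode can be sketched in plain Java without Hibernate: a one-row-per-server association amounts to loading rows into a map keyed by server id, and a duplicate key must raise an error, which Hibernate surfaces as "More than one row with the given identifier was found". The class and method names below are hypothetical, not RHN code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for a one-to-one association load: at most one rhnRam row is
// expected per server, so collapsing rows into a map keyed by server id
// must fail when a second row with the same id appears.
public class UniqueRowCheck {
    /** rows are hypothetical (serverId, ramKb) pairs. */
    public static Map<Long, Long> loadRamByServer(List<long[]> rows) {
        Map<Long, Long> ramByServer = new HashMap<>();
        for (long[] row : rows) {
            long serverId = row[0], ramKb = row[1];
            if (ramByServer.containsKey(serverId)) {
                // This is the condition Hibernate reports for Ram/Dmi rows.
                throw new IllegalStateException(
                    "More than one rhnRam row for server " + serverId);
            }
            ramByServer.put(serverId, ramKb);
        }
        return ramByServer;
    }
}
```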

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Find a server with multiple entries in the table rhnRam.
1.a. Find a server with multiple entries in the table rhnServerDmi.
2. Run the HQL query Server.findByIdAndOrgId.
Actual results:
See Comment 2 in bug 235108. It provides a stack trace.

Expected results:
No exceptions should be thrown.

Additional info:

Comment 1 John Sanda 2007-04-09 15:22:58 UTC
There are a very small number of servers that have multiple rows in rhnRam
and/or rhnServerDmi. It has been concluded that servers should not have multiple
rows in either of these tables. For those servers that do have multiple rows, it
should be treated as an error. Two things probably need to be done. 1) Remove
bad data from the database. 2) Make our Hibernate error handling more robust.
Currently, we almost always propagate a HibernateException up the call stack and
the user winds up seeing an ISE. We need to recover from errors when possible
and report/log more intelligible error messages when and where possible. Both of
these items are probably beyond the scope of 501.
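The "more robust error handling" item above could look roughly like the following sketch: wrap the lookup, log a readable message, and let the caller decide how to recover instead of propagating the exception up to an ISE page. This is a hypothetical illustration, not RHN code; it uses a plain RuntimeException as a stand-in for HibernateException so it stands alone.

```java
import java.util.Optional;
import java.util.function.Supplier;
import java.util.logging.Logger;

// Hypothetical wrapper: run a query, and on a persistence-layer failure
// log an intelligible message and return empty rather than letting the
// exception propagate to the user as an Internal Server Error.
public class SafeLookup {
    private static final Logger LOG =
        Logger.getLogger(SafeLookup.class.getName());

    public static <T> Optional<T> tryLookup(String what, Supplier<T> query) {
        try {
            return Optional.ofNullable(query.get());
        } catch (RuntimeException e) {
            LOG.warning("Lookup failed for " + what + ": " + e.getMessage());
            return Optional.empty();
        }
    }
}
```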

Comment 2 Bret McMillan 2007-04-12 15:43:28 UTC
Has this hit webdev yet?

Comment 3 John Sanda 2007-04-12 15:59:38 UTC
The hibernate mappings for rhnServerDmi and rhnRam are in fact correct. We have
concluded that the few servers that have multiple rows in these tables have
erroneous data. These extra rows need to be purged from the database.

Bret, in our last conversation about this, you mentioned that we could do this
at any time. Can we move this off of 501h-must, since it is not a high priority?

Comment 5 Bret McMillan 2007-05-11 15:46:17 UTC
John, we'll align this to rhn502; once the script is done and you're happy with
it, we can fast-track it through the various env's.

Comment 6 Grant Gainey 2007-06-12 13:55:27 UTC
There are currently 15 servers with >1 DMI entry, 88 with >1 CPU, and 90 with >1
RAM entry.  It looks like the extras are duplicates of each other.  A script
that deleted all but one of the dups, however we did that, would likely be
sufficient.

Currently, any/all of the systems affected by this will throw an ISE if you
attempt to visit their SDC pages. 

As an example - 
Caused by: com.redhat.rhn.common.hibernate.HibernateRuntimeException: Executing
query Server.findByIdandOrgId with params {orgId=779013, sid=1000122758} failed
Caused by: org.hibernate.HibernateException: More than one row with the given
identifier was found: 1000122758, for class: com.redhat.rhn.domain.server.Ram

So - right now, we need to fix up the data, regardless of whatever other work we
do, it looks like.
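Since the extras look like exact duplicates, the cleanup rule could be as simple as: keep the first row seen per server and delete the rest. A sketch of that selection logic, with an assumed (serverId, rowId) shape for the rows:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical dedup plan for rhnRam / rhnServerDmi: given every
// (serverId, rowId) pair, keep the first row per server and collect the
// ids of the remaining rows for deletion.
public class DedupPlan {
    /** Returns the row ids to delete: every row after the first per server. */
    public static List<Long> rowsToDelete(List<long[]> serverRowPairs) {
        Map<Long, Long> keep = new HashMap<>();
        List<Long> toDelete = new ArrayList<>();
        for (long[] pair : serverRowPairs) {
            long serverId = pair[0], rowId = pair[1];
            if (keep.containsKey(serverId)) {
                toDelete.add(rowId);       // duplicate: schedule for removal
            } else {
                keep.put(serverId, rowId); // first row for this server wins
            }
        }
        return toDelete;
    }
}
```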

Comment 7 Bret McMillan 2007-06-19 00:33:44 UTC
Moving these to rhn505-triage until we can re-examine for rhn503+.

Comment 8 Grant Gainey 2007-12-14 19:22:02 UTC
I'm using bug 237898 as the master BZ for tracking this problem - closing this
one as a dup.

*** This bug has been marked as a duplicate of 237898 ***

Comment 9 Stephen Herr 2010-07-21 20:01:41 UTC
*** Bug 579491 has been marked as a duplicate of this bug. ***
