Bug 1064176 - qpidd binds only to ipv6 after reboot
Summary: qpidd binds only to ipv6 after reboot
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: qpid-cpp
Version: 20
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ted Ross
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-12 07:53 UTC by Boris Derzhavets
Modified: 2015-06-29 15:09 UTC
CC: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-29 15:09:19 UTC


Attachments (Terms of Use)
/var/log/nova/scheduler.log (deleted)
2014-02-12 07:53 UTC, Boris Derzhavets


Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1098659 None None None Never

Internal Links: 1098659

Description Boris Derzhavets 2014-02-12 07:53:15 UTC
Created attachment 862134 [details]
/var/log/nova/scheduler.log

Description of problem:

On two "two real node" Neutron GRE + OVS clusters (on each of the two systems built), set up per:

http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt

http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt

The first of them had worked fine since 01/23/2014.

Now, when I attempt to run something like the following (which had worked for 2 weeks):

[root@dfw02 (keystone_admin)]$ nova boot --flavor 2  --user-data=./myfile.txt --block_device_mapping vda=4cb4c501-c7b1-4c42-ba26-0141fcde038b:::0 VF20

or just attempt to upload an image via glance, I get the following in /var/log/nova/scheduler.log
(attached):

2014-02-11 13:29:37.718 1161 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: connection aborted. Sleeping 1 seconds
2014-02-11 13:32:12.643 1161 WARNING nova.scheduler.driver [req-227e92ad-38f4-44a3-b986-8754c017e9b9 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: 934b661f-3d18-4a3e-9fdc-ca3458be61cb] Setting instance to ERROR state.
2014-02-11 13:38:58.537 1161 WARNING nova.scheduler.driver [req-7a081f55-9e8d-41a1-9ce2-0a16c46831d6 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: bc63d821-f203-4344-8a90-f9c9760068ee] Setting instance to ERROR state.
2014-02-11 04:58:13.619 1161 WARNING nova.scheduler.driver [req-914688db-33e5-4c8c-8824-99caab91e7ba 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: 34443dab-f1f9-4a00-bfb6-636b85bbbef4] Setting instance to ERROR state.
2014-02-11 05:32:45.815 1161 WARNING nova.scheduler.driver [req-1cbbd878-6358-4de5-a805-8017f15a4024 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: bd409f62-e7ce-4fb9-92a9-1a558c5fe396] Setting instance to ERROR state.
2014-02-11 14:35:39.835 1161 WARNING nova.scheduler.driver [req-ca8dc850-1385-4128-8f93-614a7c01641e 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: 90f7c2cb-8044-4cc1-8f9f-a7e85ce35a51] Setting instance to ERROR state.
2014-02-11 14:37:41.206 1161 WARNING nova.scheduler.driver [req-d67abe4d-3b15-4aa3-a9d7-13f302c7d039 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: 90ac3c0d-643c-479f-a1bc-6a126efc9a25] Setting instance to ERROR state.

All suspended Ubuntu 13.10 and Fedora 20 instances can be resumed, but I am unable to create any new instance (even CirrOS); it immediately goes to ERROR state.

Both "two real-node Neutron GRE + OVS" clusters behave in the same way.

Version-Release number of selected component (if applicable):

openstack-nova-network-2013.2.1-4.fc20.noarch
openstack-nova-2013.2.1-4.fc20.noarch
openstack-nova-scheduler-2013.2.1-4.fc20.noarch
openstack-nova-console-2013.2.1-4.fc20.noarch
openstack-nova-common-2013.2.1-4.fc20.noarch
openstack-nova-cells-2013.2.1-4.fc20.noarch
python-novaclient-2.15.0-1.fc20.noarch
openstack-nova-novncproxy-2013.2.1-4.fc20.noarch
openstack-nova-conductor-2013.2.1-4.fc20.noarch
openstack-nova-objectstore-2013.2.1-4.fc20.noarch
openstack-nova-api-2013.2.1-4.fc20.noarch
python-nova-2013.2.1-4.fc20.noarch
openstack-nova-cert-2013.2.1-4.fc20.noarch
openstack-nova-compute-2013.2.1-4.fc20.noarch


How reproducible:

Set up a two-node real cluster per the instructions mentioned above and try to run something like:

  $ nova keypair-add oskey1 > oskey1.priv
  $ chmod 600 oskey1.priv

  $ glance image-list
  +--------------------------------------+--------+-------------+------------------+---------+--------+
  | ID                                   | Name   | Disk Format | Container Format | Size    | Status |
  +--------------------------------------+--------+-------------+------------------+---------+--------+
  | fa7a83d1-3ddb-4c0e-9c07-839b6b00f8ca | cirros | qcow2       | bare             | 9761280 | active |
  +--------------------------------------+--------+-------------+------------------+---------+--------+

  $ nova boot --flavor 2 --key_name oskey1 --image \
    fa7a83d1-3ddb-4c0e-9c07-839b6b00f8ca cirr-guest1

Actual results:

Instance gets status ERROR & power state NOSTATE

Expected results:

Instance gets status ACTIVE & power state RUNNING

Additional info:

The first of the mentioned clusters was working just fine for 18 days; I have created
more than 10 F20 and Ubuntu instances based on cinder volumes on GlusterFS.
The second one was built 2 days ago and worked OK for only several hours after creation. I didn't run any "yum update" in that period. After the crash on the second cluster I brought up the first cluster, and it was already broken in the same way.

Comment 1 Boris Derzhavets 2014-02-12 08:31:42 UTC
I am adding the most recent /var/log/nova/scheduler.log obtained just now
after 

$ nova boot --flavor 2  --key-name oskey3  --image dc992799-7831-4933-b6ee-7b81868f808b CirrOS33

+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS31                             |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000032                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | b139f3ae-1eb9-418e-8d0c-a762ad56ff66 |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-12T08:21:04Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | oskey3                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | CirrOS33                             |
| adminPass                            | SqtZP32AC5Ny                         |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
| created                              | 2014-02-12T08:21:04Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+

/var/log/nova/scheduler.log

2014-02-12 10:49:21.819 1174 WARNING nova.openstack.common.db.sqlalchemy.session [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] SQL connection failed. infinite attempts left.
2014-02-12 10:49:34.792 1174 WARNING nova.openstack.common.db.sqlalchemy.session [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] SQL connection failed. infinite attempts left.
2014-02-12 10:49:44.802 1174 WARNING nova.openstack.common.db.sqlalchemy.session [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] SQL connection failed. infinite attempts left.
2014-02-12 10:49:54.984 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 1 seconds
2014-02-12 10:49:55.988 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 2 seconds
2014-02-12 10:49:57.990 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 4 seconds
2014-02-12 10:50:01.995 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 8 seconds
2014-02-12 10:50:09.997 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 16 seconds
2014-02-12 10:50:26.013 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 32 seconds
2014-02-12 10:50:58.014 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
2014-02-12 10:51:58.034 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
2014-02-12 10:52:58.092 1174 ERROR nova.openstack.common.rpc.impl_qpid [req-f6a8d494-83ba-4e2e-af03-1c5dfd3e0dc2 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
2014-02-12 12:10:01.166 1174 WARNING nova.scheduler.driver [req-a3845362-e13e-4bc1-8297-8d9d4d2ca1cc 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: 8d2edd9a-a51f-472f-bc04-6a4ac66eaf0d] Setting instance to ERROR state.
2014-02-12 12:21:04.669 1174 WARNING nova.scheduler.driver [req-6b5cbc63-318f-4213-9e6b-eccbd7beafcd 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: b139f3ae-1eb9-418e-8d0c-a762ad56ff66] Setting instance to ERROR state.
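The reconnect delays in the log above (1, 2, 4, 8, 16, 32, then 60 seconds repeated) follow a capped exponential backoff. A minimal shell sketch of that pattern, for illustration only; the `backoff_delays` helper is hypothetical and mirrors the behaviour seen in the log, not the actual impl_qpid code:

```shell
#!/bin/sh
# Print the first $1 reconnect delays: the delay doubles on every
# failed attempt and is capped at 60 seconds, matching the log above.
backoff_delays() {
    attempts=$1
    delay=1
    i=0
    while [ "$i" -lt "$attempts" ]; do
        echo "$delay"
        delay=$((delay * 2))
        [ "$delay" -gt 60 ] && delay=60
        i=$((i + 1))
    done
}

backoff_delays 8   # prints 1 2 4 8 16 32 60 60, one per line
```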

Comment 2 Boris Derzhavets 2014-02-12 08:41:55 UTC
openstack-status on Controller node :-

== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+----------------------------------+---------+---------+-------+
|                id                |   name  | enabled | email |
+----------------------------------+---------+---------+-------+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+----------------------------------+---------+---------+-------+
== Glance images ==
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image                 | qcow2       | bare             | 344457216   | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31                        | qcow2       | bare             | 13147648    | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64                | qcow2       | bare             | 237371392   | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image                 | qcow2       | bare             | 214106112   | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10             | qcow2       | bare             | 244514816   | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image             | qcow2       | bare             | 246022144   | active |
| 07071d00-fb85-4b32-a9b4-d515088700d0 | Windows Server 2012 R2 Std Eval | vhd         | bare             | 17182752768 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
== Nova managed services ==
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-02-12T08:38:28.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-02-12T08:38:28.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-02-12T08:38:28.000000 | None            |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+-------+------+
| ID                                   | Label | Cidr |
+--------------------------------------+-------+------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+--------------------------------------+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+------------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name             | Status    | Task State | Power State | Networks                    |
+--------------------------------------+------------------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5        | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| b139f3ae-1eb9-418e-8d0c-a762ad56ff66 | CirrOS33         | ERROR     | None       | NOSTATE     |                             |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312        | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 95a36074-5145-4959-b3b3-2651f2ac1a9c | UbuntuSalamander | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.104 |
| 6e3e0d20-0af9-4c63-9060-ffd43ee54cef | VF20RS           | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.107 |
| 276c7f4b-53ab-480d-a439-e81f77ad3763 | VF20WRT          | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 55f6e0bc-281e-480d-b88f-193207ea4d4a | VF20XWL          | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.108 |
+--------------------------------------+------------------+-----------+------------+-------------+-----------------------------+

Comment 3 Lars Kellogg-Stedman 2014-02-12 14:56:52 UTC
Boris,

I see errors there for both SQL database connections and connections to the AMQP server. Are both services running?  Can you connect to them manually?

- For MySQL, use the "mysql" command line tool and the credentials from /etc/nova/nova.conf.
- For AMQP, just try "telnet server 5672".

If the AMQP connection fails, please verify (a) that qpidd is running and (b) that it is listening on the right ports (by running "netstat -tln | grep 5672").
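The AMQP reachability check suggested above can also be scripted without telnet. A minimal sketch, assuming a bash-capable shell; `can_connect` is a hypothetical helper built on bash's /dev/tcp redirection, not part of any OpenStack tooling:

```shell
#!/bin/bash
# Hypothetical helper: exit 0 only if a TCP connection to host $1,
# port $2 can be opened within 5 seconds (uses bash's /dev/tcp).
can_connect() {
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: probe the AMQP broker port on the controller. The hostname
# is the one from this report; substitute your own.
if can_connect dfw02.localdomain 5672; then
    echo "AMQP port reachable"
else
    echo "AMQP port NOT reachable"
fi
```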

Comment 4 Boris Derzhavets 2014-02-12 20:31:52 UTC
Lars,

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 5672
tcp        0      0 0.0.0.0:5672            0.0.0.0:*               LISTEN      3883/qpidd          
tcp6       0      0 :::5672                 :::*                    LISTEN      3883/qpidd 
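The netstat output above shows qpidd listening on both 0.0.0.0:5672 (IPv4) and :::5672 (IPv6); per the bug title, after a reboot only the IPv6 listener may come up. A small sketch that classifies a port's binding from `netstat -lnt` output on stdin; `binding_for_port` is a hypothetical helper, for illustration only:

```shell
#!/bin/sh
# Hypothetical helper: read `netstat -lnt` output on stdin and report
# whether the given port is bound on IPv4 (tcp), IPv6 (tcp6), or both.
binding_for_port() {
    port=$1
    v4=no
    v6=no
    while read -r proto _ _ local _; do
        case "$proto:$local" in
            tcp:*:"$port")  v4=yes ;;   # e.g. 0.0.0.0:5672
            tcp6:*:"$port") v6=yes ;;   # e.g. :::5672
        esac
    done
    echo "ipv4=$v4 ipv6=$v6"
}

# Usage: netstat -lnt | binding_for_port 5672
# A healthy broker should report ipv4=yes ipv6=yes.
```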


[root@dfw02 ~(keystone_admin)]$ telnet dfw02.localdomain 5672
Trying 192.168.1.127...
Connected to dfw02.localdomain.
Escape character is '^]'.

[root@dfw02 ~(keystone_admin)]$ systemctl status mariadb -l
mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled)
   Active: active (running) since Thu 2014-02-13 00:21:07 MSK; 8min ago
  Process: 1244 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 1110 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 1243 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─1243 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─1673 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock

Feb 13 00:20:33 dfw02.localdomain systemd[1]: Starting MariaDB database server...
Feb 13 00:20:42 dfw02.localdomain mysqld_safe[1243]: 140213 00:20:42 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Feb 13 00:20:43 dfw02.localdomain mysqld_safe[1243]: 140213 00:20:43 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Feb 13 00:21:07 dfw02.localdomain systemd[1]: Started MariaDB database server.

And more :-

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep python
tcp        0      0 0.0.0.0:35357           0.0.0.0:*               LISTEN      1125/python         
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      1109/python         
tcp        0      0 0.0.0.0:8773            0.0.0.0:*               LISTEN      4172/python         
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      4172/python         
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      1138/python         
tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      2561/python         
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1125/python         
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      1119/python         
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4172/python

Comment 5 Boris Derzhavets 2014-02-12 20:37:36 UTC
Right after these checks, one more attempt:

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2  --key-name oskey3  --image dc992799-7831-4933-b6ee-7b81868f808b CirrOS34
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS31                             |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000034                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 2061e983-e891-487a-82d1-e92f5f1fd26d |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-12T20:33:23Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | oskey3                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | CirrOS34                             |
| adminPass                            | dzWF5K85rnE8                         |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
| created                              | 2014-02-12T20:33:23Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+------------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name             | Status    | Task State | Power State | Networks                    |
+--------------------------------------+------------------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5        | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 2061e983-e891-487a-82d1-e92f5f1fd26d | CirrOS34         | ERROR     | None       | NOSTATE     |                             |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312        | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 95a36074-5145-4959-b3b3-2651f2ac1a9c | UbuntuSalamander | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.104 |
| 6e3e0d20-0af9-4c63-9060-ffd43ee54cef | VF20RS           | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.107 |
| 276c7f4b-53ab-480d-a439-e81f77ad3763 | VF20WRT          | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 55f6e0bc-281e-480d-b88f-193207ea4d4a | VF20XWL          | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.108 |
+--------------------------------------+------------------+-----------+------------+-------------+-----------------------------+

scheduler.log

291b064 None None] SQL connection failed. infinite attempts left.
2014-02-13 00:21:16.413 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 1 seconds
2014-02-13 00:21:17.419 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 2 seconds
2014-02-13 00:21:19.422 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 4 seconds
2014-02-13 00:21:23.428 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 8 seconds
2014-02-13 00:21:31.434 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 16 seconds
2014-02-13 00:21:47.447 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 32 seconds
2014-02-13 00:22:19.449 1122 ERROR nova.openstack.common.rpc.impl_qpid [req-8f68f63e-66d2-4296-9dc1-e0c23291b064 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
2014-02-13 00:33:23.685 1122 WARNING nova.scheduler.driver [req-cd4649e4-0494-44c1-8361-42381b47f6ab 970ed56ef7bc41d59c54f5ed8a1690dc d0a0acfdb62b4cc8a2bfa8d6a08bb62f] [instance: 2061e983-e891-487a-82d1-e92f5f1fd26d] Setting instance to ERROR state.

Comment 6 Boris Derzhavets 2014-02-13 07:03:10 UTC
After creating a completely new "Two Node Neutron GRE + OVS" cluster, I was able to create the following instances via different glance images with no errors:

+--------------------------------------+----------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name           | Status    | Task State | Power State | Networks                    |
+--------------------------------------+----------------+-----------+------------+-------------+-----------------------------+
| 562b3512-861b-4788-805c-37c4caf0aca8 | Cirros310      | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| c6dc2f89-4a0f-4357-a7f0-68b9e61e064a | UbuntuSalander | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| ee48dd92-312a-46f4-8b76-b4a45df29f2c | UbuntuSaucy    | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.106 |
| 0ec82385-422e-448c-8b1f-618b3b7dd4eb | VF20RSX        | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 2ab8e2e7-77ab-41af-9818-6678ae92598b | VF20WXL        | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+----------------+-----------+------------+-------------+-----------------------------+

All following attempts (based on new glance images) failed with ERROR & NOSTATE,
with the same message in /var/log/nova/scheduler.log:

2014-02-13 10:40:17.311 1155 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: connection aborted. Sleeping 1 seconds
2014-02-13 10:46:45.766 1155 WARNING nova.scheduler.driver [req-648a45a6-abf0-496a-9185-ff77d19449e2 1571d51849714420af3040da24a99019 57e9f499f0114db88ba22306604e064e] [instance: 7cce0986-bef9-4c58-8e9f-d5825b8d0c1c] Setting instance to ERROR state. 

All other VMs were suspended; memory on the compute node was freed up.

On the compute node, the same report:

openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: active (running) since Thu 2014-02-13 10:30:03 MSK; 30min ago
 Main PID: 1576 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─1576 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log

Feb 13 10:38:23 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:38:23.434 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:38:23 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:38:23.434 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:38:31 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:38:31.435 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:39:23 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:39:23.447 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:39:23 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:39:23.448 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:39:23 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:39:23.448 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:39:31 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:39:31.444 1576 ERROR nova.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
Feb 13 10:40:31 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:40:31.484 1576 WARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 305.353957 sec
Feb 13 10:41:08 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:41:08.614 1576 WARNING nova.compute.manager [-] Found 5 in the database and 0 on the hypervisor.
Feb 13 10:51:09 dfw02.localdomain nova-compute[1576]: 2014-02-13 10:51:09.215 1576 WARNING nova.compute.manager [-] Found 5 in the database and 0 on the hypervisor.

Comment 7 Boris Derzhavets 2014-02-13 10:25:45 UTC
This update might be critical

Fedora 20 Update: qpid-cpp-0.24-9.fc20

    To: package-announce@xxxxxxxxxxxxxxxxxxxxxxx
    Subject: Fedora 20 Update: qpid-cpp-0.24-9.fc20
    From: updates@xxxxxxxxxxxxxxxxx
    Date: Sat, 01 Feb 2014 04:00:01 +0000
    Delivered-to: package-announce@xxxxxxxxxxxxxxxxxxxxxxx

--------------------------------------------------------------------------------
Fedora Update Notification
FEDORA-2014-1366
2014-01-23 09:48:40
--------------------------------------------------------------------------------

Name        : qpid-cpp
Product     : Fedora 20
Version     : 0.24
Release     : 9.fc20
URL         : http://qpid.apache.org
Summary     : Libraries for Qpid C++ client applications
Description :

Run-time libraries for AMQP client applications developed using Qpid
C++. Clients exchange messages with an AMQP message broker using
the AMQP protocol.


************************************************************************
* Wed Jan 22 2014 Darryl L. Pierce <dpierce@xxxxxxxxxx> - 0.24-9
- Set qpidd service to start after the network service.
************************************************************************
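For reference, the ordering change described in this changelog entry corresponds to systemd unit dependencies along these lines. This is a hypothetical drop-in sketch, not the packaged unit file; on systems where an interface such as an OVS bridge comes up late, `network-online.target` is the stronger guarantee than `network.target`:

```ini
# Hypothetical drop-in: /etc/systemd/system/qpidd.service.d/ordering.conf
[Unit]
# Do not start the broker until the network is actually configured,
# not merely until network scripts have been launched.
After=network-online.target
Wants=network-online.target
```

After creating such a drop-in, `systemctl daemon-reload` is needed before the next restart.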

Comment 8 Lars Kellogg-Stedman 2014-02-13 16:51:04 UTC
That update may help; I ran into a problem with similar symptoms that was caused by this (runtime race between qpidd startup and networking).  But the result in that case was that qpidd was not listening on the IPv4 port (which was not the case in comment #4).

Let me know if that update resolves the problem for you.
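The distinction Lars draws (broker up but not listening on the IPv4 port) is exactly what IPv4 clients experience as ECONNREFUSED. A minimal Python sketch of why a v6-only wildcard listener refuses IPv4 clients; the `bind_probe` helper is hypothetical illustration, not qpidd code:

```python
import socket

def bind_probe(v6only):
    """Bind a wildcard IPv6 listener on an ephemeral port and report
    whether an IPv4 client can reach it (dual-stack vs v6-only)."""
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # IPV6_V6ONLY=0 makes '::' also accept IPv4-mapped connections;
    # IPV6_V6ONLY=1 restricts the socket to IPv6 peers only.
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, v6only)
    srv.bind(("::", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(1.0)
    try:
        cli.connect(("127.0.0.1", port))  # IPv4 client, like nova/neutron
        return True
    except OSError:
        return False                      # ECONNREFUSED on a v6-only bind
    finally:
        cli.close()
        srv.close()
```

Checking `ss -tlnp` (or `netstat -tlnp`) for whether 5672 appears under `*:5672` / `0.0.0.0:5672` rather than only `[::]:5672` distinguishes the two cases on a live broker.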

Comment 9 Boris Derzhavets 2014-02-13 17:13:44 UTC
On the contrary: I mean that this update might be the cause of the problem.

[root@dfw02 ~(keystone_admin)]$ rpm -qa | grep qpid
python-qpid-0.24-1.fc20.noarch
python-qpid-common-0.24-1.fc20.noarch
qpid-cpp-server-0.24-9.fc20.x86_64
qpid-proton-c-0.6-1.fc20.x86_64
qpid-cpp-client-0.24-9.fc20.x86_64

I work around it in /etc/rc.d/rc.local:

#!/bin/sh
ifdown br-ex ;
ifup br-ex ;
service network restart ;

As you noticed, the OVS bridge on F20 doesn't come up properly.

Please see https://ask.openstack.org/en/question/10363/neutron-fails-to-connect-to-amqp-server-on-fedora-20/ , where you commented on the same thing. It has happened on the controller since 01/23/14, but the controller was still able to create instances.
I have now made three unsuccessful attempts to reproduce Kashyap's setup from scratch on real F20 boxes; the most recent was yesterday (comment 6). Moreover, I can no longer resume suspended instances on the third "Two Node Neutron GRE + OVS" cluster.

"yum downgrade openstack-nova-* python-nova" doesn't help either, and I cannot downgrade qpidd.

Comment 10 Boris Derzhavets 2014-02-14 05:32:10 UTC
Neutron server behaviour right after the controller starts (this has happened since the "Two Node Neutron GRE+OVS" setup as of 01/23/14) :-

[root@dfw02 nova(keystone_admin)]$ service neutron-server status -l
Redirecting to /bin/systemctl status  -l neutron-server.service
neutron-server.service - OpenStack Quantum Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled)
   Active: active (running) since Fri 2014-02-14 09:19:45 MSK; 1min 52s ago
 Main PID: 5161 (neutron-server)
   CGroup: /system.slice/neutron-server.service
           └─5161 /usr/bin/python /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --log-file /var/log/neutron/server.log

Feb 14 09:19:45 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:45.438 5161 INFO neutron.manager [-] Loading Plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
Feb 14 09:19:45 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:45.550 5161 INFO neutron.plugins.openvswitch.ovs_neutron_plugin [-] Network VLAN ranges: {}
Feb 14 09:19:45 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:45.621 5161 INFO neutron.plugins.openvswitch.ovs_neutron_plugin [-] Tunnel ID ranges: [(1, 1000)]
Feb 14 09:19:45 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:45.683 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 1 seconds
Feb 14 09:19:46 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:46.684 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 2 seconds
Feb 14 09:19:48 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:48.687 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 4 seconds
Feb 14 09:19:52 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:19:52.691 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 8 seconds
Feb 14 09:20:00 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:20:00.696 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 16 seconds
Feb 14 09:20:16 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:20:16.706 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 32 seconds
Feb 14 09:20:48 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:20:48.707 5161 ERROR neutron.openstack.common.rpc.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds
[root@dfw02 nova(keystone_admin)]$ service openstack-nova-api restart
Redirecting to /bin/systemctl restart  openstack-nova-api.service
[root@dfw02 nova(keystone_admin)]$ openstack-status

Running

# service qpidd restart

multiple times (3-6 runs) eventually gives :-

[root@dfw02 nova(keystone_admin)]$ service neutron-server status -l
Redirecting to /bin/systemctl status  -l neutron-server.service
neutron-server.service - OpenStack Quantum Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled)
   Active: active (running) since Fri 2014-02-14 09:19:45 MSK; 2min 28s ago
 Main PID: 5161 (neutron-server)
   CGroup: /system.slice/neutron-server.service
           └─5161 /usr/bin/python /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --log-file /var/log/neutron/server.log

Feb 14 09:21:48 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:21:48.859 5161 INFO keystoneclient.middleware.auth_token [-] Starting keystone auth_token middleware
Feb 14 09:21:48 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:21:48.860 5161 INFO keystoneclient.middleware.auth_token [-] Using /var/lib/neutron/keystone-signing as cache directory for signing certificate
Feb 14 09:21:48 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:21:48.887 5161 INFO neutron.service [-] Neutron service started, listening on 0.0.0.0:9696
Feb 14 09:21:56 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:21:56.633 5161 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.1.127
Feb 14 09:21:56 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:21:56.777 5161 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.1.127
Feb 14 09:21:58 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:21:58.341 5161 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.1.127
Feb 14 09:22:06 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:22:06.108 5161 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.1.127
Feb 14 09:22:06 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:22:06.354 5161 INFO neutron.openstack.common.rpc.impl_qpid [-] Connected to AMQP server on 192.168.1.127:5672
Feb 14 09:22:07 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:22:07.501 5161 INFO neutron.openstack.common.rpc.impl_qpid [-] Connected to AMQP server on 192.168.1.127:5672
Feb 14 09:22:07 dfw02.localdomain neutron-server[5161]: 2014-02-14 09:22:07.594 5161 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.1.127

and the openstack-status report returns to normal.
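The neutron-server log above shows impl_qpid retrying with sleeps of 1, 2, 4, 8, 16, 32 and then 60 seconds: an exponential backoff capped at one minute. A minimal sketch of that schedule; the `qpid_backoff` name is hypothetical, not the actual oslo/impl_qpid API:

```python
def qpid_backoff(attempts, start=1, cap=60):
    """Yield reconnect sleep intervals: doubling from `start`,
    clamped at `cap` seconds, as seen in the log above."""
    delay = start
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(qpid_backoff(7)))  # [1, 2, 4, 8, 16, 32, 60]
```

The cap explains why a broker that only becomes reachable minutes after boot is eventually found without service restarts, at the cost of up to a 60-second delay per attempt.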

Comment 11 Lukas Bezdicka 2014-05-26 15:32:55 UTC
Please read from bottom to top.



The restarted service comes back to life:

May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug Bind key [6a7dd169-ca3b-45cb-86ec-1924e5536aea:0.0] to queue 6a7dd169-ca3b-45cb-86ec-1924e5536aea:0.0 (origin=)
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.queueDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.queueDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug Configured queue 6a7dd169-ca3b-45cb-86ec-1924e5536aea:0.0 with qpid.trace.id='' and qpid.trace.exclude='' i.e. 0 elements
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug Configured queue 6a7dd169-ca3b-45cb-86ec-1924e5536aea:0.0 with no-local=0
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:queue:6a7dd169-ca3b-45cb-86ec-1924e5536aea:0.0
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: receiver marked completed: 2 incomplete: { } unknown-completed: { [0,2] }
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Model] debug Create exchange. name:q-agent-notifier-port-update_fanout user:anonymous rhost:192.168.122.193:5672-192.168.122.193:50524 type:fanout alternateExchange: durable:F
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:q-agent-notifier-port-update_fanout
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: receiver marked completed: 1 incomplete: { } unknown-completed: { [0,1] }
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: receiver marked completed: 0 incomplete: { } unknown-completed: { [0,0] }
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: receiver command-point set to: (0+0)
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: ready to send, activating output.
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Protocol] debug Attached channel 0 to anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: attached on broker.
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:session:6a7dd169-ca3b-45cb-86ec-1924e5536aea:0
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug SessionState::SessionState anonymous.6a7dd169-ca3b-45cb-86ec-1924e5536aea:0: 0x7f24b80459b0
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Model] debug Create connection. user:anonymous rhost:192.168.122.193:5672-192.168.122.193:50524
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.clientConnect
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.clientConnect
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:connection:192.168.122.193:5672-192.168.122.193:50524
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Broker] debug LinkRegistry::notifyConnection(); key=192.168.122.193:5672-192.168.122.193:50524
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Security] debug SASL: No Authentication Performed
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [System] debug RECV [192.168.122.193:5672-192.168.122.193:50524]: INIT(0-10)

Notice that this time it got through:

May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Network] debug Listened to: 5672
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Network] debug Listened to: 5672
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:qmf.default.direct
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:qmf.default.topic
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:qpid.management
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.match
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.fanout
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.topic
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.direct
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:vhost:org.apache.qpid.broker:broker:amqp-broker,/
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:broker:amqp-broker
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object (V1) added: org.apache.qpid.broker:system:b821f757-0cd3-47d0-a423-239f57aa87a9
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:replicationPanic
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:queueThresholdExceeded
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:unsubscribe
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:subscribe
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:unbind
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:bind
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:exchangeDelete
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:exchangeDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:queueDelete
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:queueDeclare
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:brokerLinkDown
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:brokerLinkUp
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:clientDisconnect
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:clientConnectFail
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:clientConnect
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:managementsetupstate
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:session
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:bridge
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:link
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:connection
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:subscription
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:binding
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:exchange
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:queue
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:vhost
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:agent
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:broker
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:memory
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added class org.apache.qpid.broker:system
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug SEND PackageInd package=org.apache.qpid.broker to=schema.package
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent added package org.apache.qpid.broker
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug ManagementAgent boot sequence: 13
May 26 17:23:03 rhel7 qpidd[4101]: 2014-05-26 17:23:03 [Management] debug Management object added: amqp-broker

So yes: the clients can't connect, hence the restart:

May 26 17:23:02 rhel7 cinder-volume[1394]: 2014-05-26 17:23:02.317 2026 ERROR oslo.messaging._drivers.impl_qpid [req-ede17230-daba-49b9-aa29-cc59ae4b1e63 - - - - -] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 5 seconds
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SENT AgentHeartbeat name=apache.org:qpidd:ef7918ab-0244-4ac4-b36f-0221ede6b189
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND HeartbeatInd to=console.heartbeat.1.0
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND Multicast ContentInd to=agent.ind.data.org_apache_qpid_broker.exchange.apache_org.qpidd.ef7918ab-0244-4ac4-b36f-0221ede6b189 props=0 stats=2 len=1118
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND V1 Multicast ContentInd to=console.obj.1.0.org.apache.qpid.broker.exchange props=0 stats=2 len=336
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND Multicast ContentInd to=agent.ind.data.org_apache_qpid_broker.broker.apache_org.qpidd.ef7918ab-0244-4ac4-b36f-0221ede6b189 props=0 stats=1 len=1084
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND V1 Multicast ContentInd to=console.obj.1.0.org.apache.qpid.broker.broker props=0 stats=1 len=366
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND Multicast ContentInd to=agent.ind.data.org_apache_qpid_broker.memory.apache_org.qpidd.ef7918ab-0244-4ac4-b36f-0221ede6b189 props=1 stats=0 len=465
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug SEND V1 Multicast ContentInd to=console.obj.1.0.org.apache.qpid.broker.memory props=1 stats=0 len=163
May 26 17:23:01 rhel7 qpidd[1388]: 2014-05-26 17:23:01 [Management] debug Management agent periodic processing: management snapshot: 1 packages, 12 objects (0 deleted), 0 new objects  (0 deleted), 0 pending deletes
May 26 17:23:01 rhel7 cinder-backup[1393]: 2014-05-26 17:23:01.645 1393 ERROR oslo.messaging._drivers.impl_qpid [-] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 5 seconds

The broker seems to be dead after trying to listen:

May 26 17:22:11 rhel7 qpidd[1388]: 2014-05-26 17:22:11 [Management] debug SEND Multicast ContentInd to=agent.ind.data.org_apache_qpid_broker.broker.apache_org.qpidd.ef7918ab-0244-4ac4-b36f-0221ede6b189 props=1 stats=1 len=1358
May 26 17:22:11 rhel7 qpidd[1388]: 2014-05-26 17:22:11 [Management] debug SEND V1 Multicast ContentInd to=console.obj.1.0.org.apache.qpid.broker.broker props=1 stats=1 len=532
May 26 17:22:11 rhel7 qpidd[1388]: 2014-05-26 17:22:11 [Management] debug SEND Multicast ContentInd to=agent.ind.data.org_apache_qpid_broker.memory.apache_org.qpidd.ef7918ab-0244-4ac4-b36f-0221ede6b189 props=1 stats=0 len=465
May 26 17:22:11 rhel7 qpidd[1388]: 2014-05-26 17:22:11 [Management] debug SEND V1 Multicast ContentInd to=console.obj.1.0.org.apache.qpid.broker.memory props=1 stats=0 len=163
May 26 17:22:11 rhel7 qpidd[1388]: 2014-05-26 17:22:11 [Management] debug Management agent periodic processing: management snapshot: 1 packages, 0 objects (0 deleted), 12 new objects  (0 deleted), 0 pending deletes
May 26 17:22:10 rhel7 cinder-scheduler[1395]: 2014-05-26 17:22:10.906 1395 ERROR oslo.messaging._drivers.impl_qpid [req-0c2b52bf-ed57-4721-ba6a-2ce89877a187 - - - - -] Unable to connect to AMQP server: [Errno 101] ENETUNREACH. Sleeping 1 seconds
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Network] debug Listened to: 5672
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:qmf.default.direct
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:qmf.default.topic
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:qpid.management
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.match
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.fanout
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.topic
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:amq.direct
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v2) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND raiseEvent (v1) class=org.apache.qpid.broker.exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:exchange:
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:vhost:org.apache.qpid.broker:broker:amqp-broker,/
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:broker:amqp-broker
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object (V1) added: org.apache.qpid.broker:system:b821f757-0cd3-47d0-a423-239f57aa87a9
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:replicationPanic
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:queueThresholdExceeded
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:unsubscribe
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:subscribe
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:unbind
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:bind
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:exchangeDelete
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:exchangeDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:queueDelete
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:queueDeclare
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:brokerLinkDown
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:brokerLinkUp
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:clientDisconnect
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:clientConnectFail
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:clientConnect
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:managementsetupstate
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:session
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:bridge
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:link
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:connection
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:subscription
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:binding
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:exchange
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:queue
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:vhost
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:agent
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:broker
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:memory
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added class org.apache.qpid.broker:system
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug SEND PackageInd package=org.apache.qpid.broker to=schema.package
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent added package org.apache.qpid.broker
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug ManagementAgent boot sequence: 12
May 26 17:22:01 rhel7 qpidd[1388]: 2014-05-26 17:22:01 [Management] debug Management object added: amqp-broker

The machine had just been started from a reboot.

Comment 12 Alan Pevec 2014-05-27 11:29:33 UTC
similar to libvirt bug 1098659

Comment 13 Lukas Bezdicka 2014-05-27 15:23:06 UTC
This is a bug in glibc where:

    memset(&hints, 0, sizeof(struct addrinfo));
    hints.ai_family = AF_UNSPEC;    /* Allow IPv4 or IPv6 */
    hints.ai_socktype = SOCK_DGRAM; /* Datagram socket */
    hints.ai_flags = AI_PASSIVE;    /* For wildcard IP address */
    hints.ai_protocol = 0;          /* Any protocol */
    hints.ai_canonname = NULL;
    hints.ai_addr = NULL;
    hints.ai_next = NULL;

    s = getaddrinfo(NULL, argv[1], &hints, &result);

returns two wildcard addresses (0.0.0.0 and ::) on RHEL 6, but only :: on RHEL 7 or Fedora 20 when eth0 (probably just the default-route interface) has not yet acquired an IP address.
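For context, the snippet above can be wrapped into a minimal self-contained reproducer. This is only a sketch, not the original test program: the numeric service string "5672" (qpidd's conventional AMQP port) and the count_family() helper are added here purely for illustration.

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Count how many results getaddrinfo() returns for a passive
 * (listening) lookup, optionally restricted to one address family.
 * Returns -1 if the lookup itself fails. */
static int count_family(int flags, int family)
{
    struct addrinfo hints, *result, *rp;
    int n = 0;

    memset(&hints, 0, sizeof(struct addrinfo));
    hints.ai_family = AF_UNSPEC;    /* Allow IPv4 or IPv6 */
    hints.ai_socktype = SOCK_DGRAM; /* Datagram socket */
    hints.ai_flags = flags;         /* e.g. AI_PASSIVE for wildcards */

    if (getaddrinfo(NULL, "5672", &hints, &result) != 0)
        return -1;
    for (rp = result; rp != NULL; rp = rp->ai_next)
        if (family == AF_UNSPEC || rp->ai_family == family)
            ++n;
    freeaddrinfo(result);
    return n;
}

int main(void)
{
    /* With plain AI_PASSIVE (no AI_ADDRCONFIG) both wildcards,
     * 0.0.0.0 and ::, are normally returned, matching the RHEL 6
     * behaviour described above. */
    printf("IPv4 wildcards: %d\n", count_family(AI_PASSIVE, AF_INET));
    printf("IPv6 wildcards: %d\n", count_family(AI_PASSIVE, AF_INET6));
    return 0;
}
```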

Comment 14 Lukas Bezdicka 2014-05-27 15:51:30 UTC
The reproducer above is wrong; I also have AI_ADDRCONFIG set, which is probably the root cause.

Comment 15 Lukas Bezdicka 2014-05-27 16:10:07 UTC
This fixes the issue:

diff --git a/cpp/src/qpid/sys/posix/SocketAddress.cpp b/cpp/src/qpid/sys/posix/SocketAddress.cpp
index b88b3a2..6cdfb70 100644
--- a/cpp/src/qpid/sys/posix/SocketAddress.cpp
+++ b/cpp/src/qpid/sys/posix/SocketAddress.cpp
@@ -120,7 +120,7 @@ const ::addrinfo& getAddrInfo(const SocketAddress& sa)
     if (!sa.addrInfo) {
         ::addrinfo hints;
         ::memset(&hints, 0, sizeof(hints));
-        hints.ai_flags = AI_ADDRCONFIG; // Only use protocols that we have configured interfaces for
+//        hints.ai_flags = AI_ADDRCONFIG; // Only use protocols that we have configured interfaces for
         hints.ai_family = AF_UNSPEC; // Allow both IPv4 and IPv6
         hints.ai_socktype = SOCK_STREAM;

Comment 16 Fedora End Of Life 2015-05-29 10:53:33 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 20 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 18 Fedora End Of Life 2015-06-29 15:09:19 UTC
Fedora 20 changed to end-of-life (EOL) status on 2015-06-23. Fedora 20 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

