Bug 1511538 - Inconsistency between source and destination hypervisors when high-loaded (RAM) instance is migrated
Summary: Inconsistency between source and destination hypervisors when high-loaded (RAM) instance is migrated
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 8.0 (Liberty)
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: nova-maint
QA Contact: nova-maint
URL:
Whiteboard:
Depends On:
Blocks: 1381612
 
Reported: 2017-11-09 14:23 UTC by Sergii Mykhailushko
Modified: 2018-05-29 09:12 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-05-29 09:12:43 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1496788 None CLOSED Live migration fails when a instance has high load average (RAM) 2018-08-28 02:20:31 UTC

Description Sergii Mykhailushko 2017-11-09 14:23:09 UTC
This request is derived from https://bugzilla.redhat.com/show_bug.cgi?id=1496788 -- Live migration fails when a instance has high load average (RAM)

As suggested there, the customer reproduced the issue with an updated kernel version (3.10.0-693.1.1).

This time the live migration worked as expected, but there were still some inconsistencies between the source and destination hosts.

Here is the test background and the results:

Every instance was created with 8 vCPUs and 16 GB of RAM, and the following tests were run against them with the 'stress' tool (each "point of load" below corresponds to one stress worker, i.e. the sum of the --cpu, --io and --vm counts):

1. Generated 10 points of load but without memory usage:     

~~~
stress --cpu 5 --io 5 --timeout 600s	
~~~

The migration from node3 to node2 completed successfully.
 
~~~ 
$ nova show 61125f23-ab51-49ba-9e05-0284e9a90e8d
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | lab1                                                     |
| OS-EXT-SRV-ATTR:host                 | host2.example.com                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | host2.example.com                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000cff                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2017-10-09T13:48:32.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2017-10-09T13:48:22Z                                     |
| flavor                               | m1.xlarge (5)                                            |
| hostId                               | 0ba52324bdfd5233f3cd43c26ae3b3eba49cb18df261f6a3c59bbe4a |
| id                                   | 61125f23-ab51-49ba-9e05-0284e9a90e8d                     |
| image                                | RHEL7.3 (2d6d0405-d17e-441c-b5cd-1de3e64c5cd5)           |
| internal network                     | 10.0.0.1, 20.0.0.1                                       |
| key_name                             | key1                                                     |
| metadata                             | {}                                                       |
| name                                 | test-migration-1                                         |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 99                                                       |
| security_groups                      | ICMP, SSH and HTTP                                       |
| status                               | ACTIVE                                                   |
| tenant_id                            | d796c8573b3f44b1b607c5a16e4a590d                         |
| updated                              | 2017-10-09T14:11:48Z                                     |
| user_id                              | a79f72aff03b47c4ab62942b306dd536                         |
+--------------------------------------+----------------------------------------------------------+
~~~

2. Generated 10 points of load, this time with memory usage:

~~~
stress --cpu 2 --io 2 --vm 6 --timeout 600s	
~~~

The migration from node2 to node3 failed.

The affected instance was the same as the one used in the previous test (61125f23-ab51-49ba-9e05-0284e9a90e8d), and the error messages were the same as in the next test.


3. Generated 8 points of load, again with memory usage, using the following command:

~~~
stress --cpu 2 --io 2 --vm 4 --timeout 600s
~~~

The migration from node2 to node3 failed with the following error messages: 

~~~
2017-10-09 16:31:23.409 6728 ERROR nova.virt.libvirt.driver [req-c2ca97ee-dbd4-466f-8000-44c1db6bf6c6 1bb8b87bb73c4c2b9f4d7237c146ad38 621f1bb5601c4de0a121093b6f2896bd - - -] [instance: 1a50a786-c627-43b0-bbc9-afd9d9fcff1e] Live Migration failure: operation aborted: migration job: canceled by client
2017-10-09 16:31:23.410 6728 DEBUG nova.virt.libvirt.driver [req-c2ca97ee-dbd4-466f-8000-44c1db6bf6c6 1bb8b87bb73c4c2b9f4d7237c146ad38 621f1bb5601c4de0a121093b6f2896bd - - -] [instance: 1a50a786-c627-43b0-bbc9-afd9d9fcff1e] Migration operation thread notification thread_finished /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6275
2017-10-09 16:31:23.664 6728 DEBUG nova.virt.libvirt.driver [req-c2ca97ee-dbd4-466f-8000-44c1db6bf6c6 1bb8b87bb73c4c2b9f4d7237c146ad38 621f1bb5601c4de0a121093b6f2896bd - - -] [instance: 1a50a786-c627-43b0-bbc9-afd9d9fcff1e] VM running on src, migration failed _live_migration_monitor /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6081
2017-10-09 16:31:23.664 6728 DEBUG nova.virt.libvirt.driver [req-c2ca97ee-dbd4-466f-8000-44c1db6bf6c6 1bb8b87bb73c4c2b9f4d7237c146ad38 621f1bb5601c4de0a121093b6f2896bd - - -] [instance: 1a50a786-c627-43b0-bbc9-afd9d9fcff1e] Fixed incorrect job type to be 4 _live_migration_monitor /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6101
2017-10-09 16:31:23.664 6728 ERROR nova.virt.libvirt.driver [req-c2ca97ee-dbd4-466f-8000-44c1db6bf6c6 1bb8b87bb73c4c2b9f4d7237c146ad38 621f1bb5601c4de0a121093b6f2896bd - - -] [instance: 1a50a786-c627-43b0-bbc9-afd9d9fcff1e] Migration operation has aborted
2017-10-09 16:31:23.826 6728 DEBUG nova.virt.libvirt.driver [req-c2ca97ee-dbd4-466f-8000-44c1db6bf6c6 1bb8b87bb73c4c2b9f4d7237c146ad38 621f1bb5601c4de0a121093b6f2896bd - - -] [instance: 1a50a786-c627-43b0-bbc9-afd9d9fcff1e] Live migration monitoring is all done _live_migration /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6295 
~~~

~~~ 
$ nova show 1a50a786-c627-43b0-bbc9-afd9d9fcff1e
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | lab1                                                     |
| OS-EXT-SRV-ATTR:host                 | host2.example.com                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | host2.example.com                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000d02                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2017-10-09T13:48:31.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2017-10-09T13:48:23Z                                     |
| flavor                               | m1.xlarge (5)                                            |
| hostId                               | 0ba52324bdfd5233f3cd43c26ae3b3eba49cb18df261f6a3c59bbe4a |
| id                                   | 1a50a786-c627-43b0-bbc9-afd9d9fcff1e                     |
| image                                | RHEL7.3 (2d6d0405-d17e-441c-b5cd-1de3e64c5cd5)           |
| internal network                     | 10.0.0.146, 20.48.224.75                                 |
| key_name                             | key1                                                     |
| metadata                             | {}                                                       |
| name                                 | test-migration-2                                         |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 97                                                       |
| security_groups                      | ICMP, SSH and HTTP                                       |
| status                               | ACTIVE                                                   |
| tenant_id                            | d796c8573b3f44b1b607c5a16e4a590d                         |
| updated                              | 2017-10-09T14:31:23Z                                     |
| user_id                              | a79f72aff03b47c4ab62942b306dd536                         |
+--------------------------------------+----------------------------------------------------------+
~~~							
									
It is likely caused by the instance writing to its memory faster than the migration can copy the data over the network; for this reason, the memory copy never converges and the migration never finishes properly.
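
One way to confirm that theory from the source hypervisor, not part of the original report, is to sample the libvirt migration job statistics while the migration is running; if "Memory remaining" stops shrinking (or grows) between samples, the guest is dirtying pages faster than they can be transferred. A minimal sketch, assuming the instance and hosts from the outputs above and that the deployed libvirt exposes these statistics:

~~~
# Run on the source hypervisor (node2) while the live migration is in progress
# (hypothetical check, not taken from the original report).
[root@node2 ~]# virsh domjobinfo instance-00000d02   # note "Memory remaining"
[root@node2 ~]# sleep 10
[root@node2 ~]# virsh domjobinfo instance-00000d02   # compare with the first sample
~~~

If the statistics confirm non-convergence, the usual mitigations on the libvirt/QEMU side are auto-convergence (which progressively throttles the guest vCPUs so the dirty rate drops) or a larger allowed downtime; whether this Nova release can request those is a separate question.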

In any case, the instance is not properly migrated, but this time it is only defined on the source host and not on the destination host.

source:
~~~
[root@node2 ~]# date
Tue Oct 10 09:44:57 CEST 2017
[root@node2 ~]# virsh list --all | grep instance-00000d02
 40    instance-00000d02              running
~~~

destination:
~~~
[root@node3 ~]# date
Tue Oct 10 09:45:26 CEST 2017
[root@node3 ~]# virsh list --all | grep instance-00000d02
[root@node3 ~]#
~~~

This behaviour seems correct, but there are still some inconsistencies:

- The OVS and Linux bridge ports are still present on both the source and the destination hypervisor:

source:
~~~
[root@node2 ~]# virsh domiflist instance-00000d02
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap7c4099af-ae bridge     qbr7c4099af-ae virtio      fa:16:3e:be:46:32

[root@node2 ~]# ovs-vsctl show | grep 7c4099af-ae
        Port "qvo7c4099af-ae"
            Interface "qvo7c4099af-ae"
[root@node2 ~]# brctl show | grep 7c4099af-ae
qbr7c4099af-ae          8000.e67c04f79f9a       no              qvb7c4099af-ae
                                                        tap7c4099af-ae
~~~

destination:
~~~
[root@node3 ~]# ovs-vsctl show | grep 7c4099af-ae
        Port "qvo7c4099af-ae"
            Interface "qvo7c4099af-ae"
[root@node3 ~]# brctl show | grep 7c4099af-ae
qbr7c4099af-ae          8000.f26981d97085       no              qvb7c4099af-ae
~~~


- As the port is bound on both hosts, the endpoint file and OpenFlow rules are still present on both hypervisors (a possible manual cleanup is sketched after the flow dumps below):

source:
~~~
[root@node2 ~]# grep -i 7c4099af-ae /var/lib/opflex-agent-ovs/endpoints/
/var/lib/opflex-agent-ovs/endpoints/7c4099af-ae98-4277-ade2-f6b8bbedb99c_fa:16:3e:be:46:32.ep:    "interface-name": "qvo7c4099af-ae",
/var/lib/opflex-agent-ovs/endpoints/7c4099af-ae98-4277-ade2-f6b8bbedb99c_fa:16:3e:be:46:32.ep:    "uuid": "7c4099af-ae98-4277-ade2-f6b8bbedb99c|fa-16-3e-be-46-32",

[root@node2 ~]# ovs-ofctl -OOpenFlow13 dump-flows br-int | grep fa:16:3e:be:46:32
 cookie=0x0, duration=65241.663s, table=0, n_packets=2155, n_bytes=246184, priority=30,ip,in_port=48,dl_src=fa:16:3e:be:46:32,nw_src=10.0.0.146 actions=goto_table:1
 cookie=0x0, duration=65241.663s, table=0, n_packets=9902, n_bytes=415884, priority=40,arp,in_port=48,dl_src=fa:16:3e:be:46:32,arp_spa=10.0.0.146 actions=goto_table:1
 cookie=0x0, duration=65241.663s, table=0, n_packets=0, n_bytes=0, priority=40,icmp6,in_port=48,dl_src=fa:16:3e:be:46:32,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:febe:4632 actions=goto_table:1
 cookie=0x0, duration=65241.663s, table=0, n_packets=5, n_bytes=366, priority=30,ipv6,in_port=48,dl_src=fa:16:3e:be:46:32,ipv6_src=fe80::f816:3eff:febe:4632 actions=goto_table:1
 cookie=0x0, duration=65241.663s, table=0, n_packets=0, n_bytes=0, priority=20,in_port=48,dl_src=fa:16:3e:be:46:32 actions=goto_table:1
 cookie=0x8000000000000004, duration=65241.663s, table=1, n_packets=2, n_bytes=684, priority=150,udp,in_port=48,dl_src=fa:16:3e:be:46:32,tp_src=68,tp_dst=67 actions=load:0x3f1->NXM_NX_REG0[],CONTROLLER:65535
 cookie=0x0, duration=65241.663s, table=1, n_packets=12062, n_bytes=662434, priority=140,in_port=48,dl_src=fa:16:3e:be:46:32 actions=load:0x3f1->NXM_NX_REG0[],load:0x6->NXM_NX_REG4[],load:0x4->NXM_NX_REG5[],load:0x4->NXM_NX_REG6[],goto_table:2
 cookie=0x0, duration=65241.663s, table=2, n_packets=1, n_bytes=64, priority=10,reg4=0x6,dl_dst=fa:16:3e:be:46:32 actions=load:0x3f1->NXM_NX_REG2[],load:0x30->NXM_NX_REG7[],goto_table:7
 cookie=0x0, duration=65241.663s, table=3, n_packets=0, n_bytes=0, priority=500,ip,reg6=0x4,dl_dst=00:22:bd:f8:19:ff,nw_dst=10.0.0.146 actions=load:0x3f1->NXM_NX_REG2[],load:0x30->NXM_NX_REG7[],set_field:00:22:bd:f8:19:ff->eth_src,set_field:fa:16:3e:be:46:32->eth_dst,dec_ttl,write_metadata:0x400/0x400,goto_table:7
 cookie=0x0, duration=65186.132s, table=3, n_packets=2033, n_bytes=208124, priority=452,ip,reg0=0x539,reg6=0x1,nw_dst=20.48.224.75 actions=set_field:00:22:bd:f8:19:ff->eth_src,set_field:fa:16:3e:be:46:32->eth_dst,set_field:10.0.0.146->ip_dst,dec_ttl,load:0x3f1->NXM_NX_REG2[],load:0x6->NXM_NX_REG4[],load:0x4->NXM_NX_REG5[],load:0x4->NXM_NX_REG6[],load:0x30->NXM_NX_REG7[],write_metadata:0x400/0x400,goto_table:4
 cookie=0x0, duration=65241.664s, table=6, n_packets=113, n_bytes=14381, priority=50,ip,reg6=0x4,nw_dst=10.0.0.146 actions=set_field:00:22:bd:f8:19:ff->eth_src,set_field:fa:16:3e:be:46:32->eth_dst,dec_ttl,output:48
 cookie=0x0, duration=65241.664s, table=8, n_packets=1987, n_bytes=230230, priority=10,ip,reg6=0x4,reg7=0x539,metadata=0x2/0xff,nw_src=10.0.0.146 actions=set_field:fa:16:3e:be:46:32->eth_src,set_field:00:22:bd:f8:19:ff->eth_dst,set_field:20.48.224.75->ip_src,dec_ttl,load:0x539->NXM_NX_REG0[],load:0x17->NXM_NX_REG4[],load:0x1->NXM_NX_REG5[],load:0x1->NXM_NX_REG6[],load:0->NXM_NX_REG7[],load:0x400->OXM_OF_METADATA[],resubmit(,2)
~~~

destination:
~~~
[root@node3 ~]# grep -i 7c4099af-ae /var/lib/opflex-agent-ovs/endpoints/
/var/lib/opflex-agent-ovs/endpoints/7c4099af-ae98-4277-ade2-f6b8bbedb99c_fa:16:3e:be:46:32.ep:    "interface-name": "qvo7c4099af-ae",
/var/lib/opflex-agent-ovs/endpoints/7c4099af-ae98-4277-ade2-f6b8bbedb99c_fa:16:3e:be:46:32.ep:    "uuid": "7c4099af-ae98-4277-ade2-f6b8bbedb99c|fa-16-3e-be-46-32",

[root@node3 ~]#  ovs-ofctl -OOpenFlow13 dump-flows br-int | grep fa:16:3e:be:46:32
 cookie=0x0, duration=62852.579s, table=0, n_packets=0, n_bytes=0, priority=40,arp,in_port=13,dl_src=fa:16:3e:be:46:32,arp_spa=10.0.0.146 actions=goto_table:1
 cookie=0x0, duration=62852.579s, table=0, n_packets=0, n_bytes=0, priority=30,ip,in_port=13,dl_src=fa:16:3e:be:46:32,nw_src=10.0.0.146 actions=goto_table:1
 cookie=0x0, duration=62852.579s, table=0, n_packets=0, n_bytes=0, priority=40,icmp6,in_port=13,dl_src=fa:16:3e:be:46:32,icmp_type=136,icmp_code=0,nd_target=fe80::f816:3eff:febe:4632 actions=goto_table:1
 cookie=0x0, duration=62852.579s, table=0, n_packets=0, n_bytes=0, priority=30,ipv6,in_port=13,dl_src=fa:16:3e:be:46:32,ipv6_src=fe80::f816:3eff:febe:4632 actions=goto_table:1
 cookie=0x0, duration=62852.579s, table=0, n_packets=0, n_bytes=0, priority=20,in_port=13,dl_src=fa:16:3e:be:46:32 actions=goto_table:1
 cookie=0x8000000000000004, duration=62852.578s, table=1, n_packets=0, n_bytes=0, priority=150,udp,in_port=13,dl_src=fa:16:3e:be:46:32,tp_src=68,tp_dst=67 actions=load:0x3f1->NXM_NX_REG0[],CONTROLLER:65535
 cookie=0x0, duration=62852.579s, table=1, n_packets=0, n_bytes=0, priority=140,in_port=13,dl_src=fa:16:3e:be:46:32 actions=load:0x3f1->NXM_NX_REG0[],load:0x4->NXM_NX_REG4[],load:0x4->NXM_NX_REG5[],load:0x4->NXM_NX_REG6[],goto_table:2
 cookie=0x0, duration=62852.579s, table=2, n_packets=0, n_bytes=0, priority=10,reg4=0x4,dl_dst=fa:16:3e:be:46:32 actions=load:0x3f1->NXM_NX_REG2[],load:0xd->NXM_NX_REG7[],goto_table:7
 cookie=0x0, duration=62852.579s, table=3, n_packets=0, n_bytes=0, priority=500,ip,reg6=0x4,dl_dst=00:22:bd:f8:19:ff,nw_dst=10.0.0.146 actions=load:0x3f1->NXM_NX_REG2[],load:0xd->NXM_NX_REG7[],set_field:00:22:bd:f8:19:ff->eth_src,set_field:fa:16:3e:be:46:32->eth_dst,dec_ttl,write_metadata:0x400/0x400,goto_table:7
 cookie=0x0, duration=62852.579s, table=3, n_packets=8, n_bytes=676, priority=452,ip,reg0=0x539,reg6=0x1,nw_dst=20.48.224.75 actions=set_field:00:22:bd:f8:19:ff->eth_src,set_field:fa:16:3e:be:46:32->eth_dst,set_field:10.0.0.146->ip_dst,dec_ttl,load:0x3f1->NXM_NX_REG2[],load:0x4->NXM_NX_REG4[],load:0x4->NXM_NX_REG5[],load:0x4->NXM_NX_REG6[],load:0xd->NXM_NX_REG7[],write_metadata:0x400/0x400,goto_table:4
 cookie=0x0, duration=62852.579s, table=6, n_packets=0, n_bytes=0, priority=50,ip,reg6=0x4,nw_dst=10.0.0.146 actions=set_field:00:22:bd:f8:19:ff->eth_src,set_field:fa:16:3e:be:46:32->eth_dst,dec_ttl,output:13
 cookie=0x0, duration=62852.579s, table=8, n_packets=0, n_bytes=0, priority=10,ip,reg6=0x4,reg7=0x539,metadata=0x2/0xff,nw_src=10.0.0.146 actions=set_field:fa:16:3e:be:46:32->eth_src,set_field:00:22:bd:f8:19:ff->eth_dst,set_field:20.48.224.75->ip_src,dec_ttl,load:0x539->NXM_NX_REG0[],load:0x1->NXM_NX_REG4[],load:0x1->NXM_NX_REG5[],load:0x1->NXM_NX_REG6[],load:0->NXM_NX_REG7[],load:0x400->OXM_OF_METADATA[],resubmit(,2)
~~~
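
For completeness, a manual cleanup of the stale plumbing on the destination could look like the sketch below. This is only an illustration under assumptions, not an official procedure: the port, bridge and endpoint-file names are taken from the outputs above, and it should only be run after confirming (virsh list --all) that the instance lives on node2 only.

~~~
# Hypothetical manual cleanup on the destination (node3).
[root@node3 ~]# ovs-vsctl del-port br-int qvo7c4099af-ae    # drop the stale OVS port
[root@node3 ~]# ip link set qbr7c4099af-ae down             # bridge must be down before deletion
[root@node3 ~]# brctl delif qbr7c4099af-ae qvb7c4099af-ae   # detach the veth from the Linux bridge
[root@node3 ~]# brctl delbr qbr7c4099af-ae                  # remove the leftover bridge
[root@node3 ~]# ip link delete qvb7c4099af-ae               # deletes both ends of the qvb/qvo veth pair
[root@node3 ~]# rm /var/lib/opflex-agent-ovs/endpoints/7c4099af-ae98-4277-ade2-f6b8bbedb99c_fa:16:3e:be:46:32.ep
~~~

Removing the endpoint file should make the opflex agent withdraw the corresponding OpenFlow rules, but that is an assumption worth verifying with another dump-flows; in any case this cleanup should be done by Nova/Neutron rather than an operator, which is the point of this bug.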

An additional test was performed:

4. Generated 6 points of load by using the following command:

~~~
stress --cpu 2 --io 2 --vm 2 --timeout 600s	
~~~

The migration from node3 to node2 also failed, with the same error messages as in the previous test (3).
												 
~~~
$ nova show ddb85d6c-252c-4b43-93e1-0dfb66068f04
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | node3.example.com                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node3.example.com                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000d05                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2017-10-09T13:48:32.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2017-10-09T13:48:23Z                                     |
| flavor                               | m1.xlarge (5)                                            |
| hostId                               | 1b9cc1d8b779a0b21c9d79a99990a28b5889bbeb24e21f35e69f5fa2 |
| id                                   | ddb85d6c-252c-4b43-93e1-0dfb66068f04                     |
| image                                | RHEL7.3 (2d6d0405-d17e-441c-b5cd-1de3e64c5cd5)           |
| internal network                     | 10.0.0.147, 20.48.224.76                                 |
| key_name                             | key1                                                     |
| metadata                             | {}                                                       |
| name                                 | test-migration-3                                         |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 99                                                       |
| security_groups                      | ICMP, SSH and HTTP                                       |
| status                               | ACTIVE                                                   |
| tenant_id                            | d796c8573b3f44b1b607c5a16e4a590d                         |
| updated                              | 2017-10-09T14:45:55Z                                     |
| user_id                              | a79f72aff03b47c4ab62942b306dd536                         |
+--------------------------------------+----------------------------------------------------------+
~~~

Comment 1 Sahid Ferdjaoui 2017-11-09 16:45:05 UTC
This inconsistency has already been observed in bug 1401173. If libvirt is returning VIR_DOMAIN_JOB_NONE, Nova makes a best-effort attempt to work out whether or not the migration succeeded.

A patch in libvirt seems to have resolved the confusion:

  https://bugzilla.redhat.com/show_bug.cgi?id=1401173#c34
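
As a hand-run illustration of that best-effort check (these virsh invocations are not from the original report; instance-00000d02 is reused from the outputs above): if the domain is still running on the source and the recorded job statistics show the migration job was cancelled or absent, the migration has to be treated as failed even though libvirt reported no active job.

~~~
# Hypothetical manual version of the "best effort" decision described above.
[root@node2 ~]# virsh domstate instance-00000d02                 # still "running" on the source?
[root@node2 ~]# virsh domjobinfo instance-00000d02 --completed   # what did the last migration job report?
[root@node3 ~]# virsh list --all | grep instance-00000d02        # defined on the destination at all?
~~~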

Comment 2 Kashyap Chamarthy 2017-11-17 15:22:26 UTC
Sergii, can you please try with the libvirt version Sahid linked to in comment #1:

    libvirt-3.9.0-1.el7

And report the details back to us on whether it resolves the issue or not.
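
A minimal sketch of how the hypervisors could be checked and updated before re-testing (the package names are the standard RHEL ones; which repository provides libvirt-3.9.0-1.el7 is an assumption):

~~~
# Hypothetical steps, to be run on each hypervisor before repeating the tests.
[root@node2 ~]# rpm -q libvirt libvirt-daemon-kvm      # current build
[root@node2 ~]# yum update libvirt libvirt-daemon-kvm  # pull in >= 3.9.0-1.el7 if available
[root@node2 ~]# systemctl restart libvirtd             # restart so the new build is actually used
~~~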

