Bug 1511793 - kube-proxy is unable to distribute traffic to different pods with sessionAffinity:client-ip after rolling update
Summary: kube-proxy is unable to distribute traffic to different pods with sessionAffinity:client-ip after rolling update
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.3.1
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.7.z
Assignee: jtanenba
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-10 07:47 UTC by 408514942
Modified: 2018-03-02 17:39 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-02 17:39:55 UTC



Description 408514942 2017-11-10 07:47:21 UTC
Description of problem:


From Customer:

We deployed an application with a service + 1 rc + 2 pods on OpenShift 3.3. The application port is exposed as a NodePort, and sessionAffinity is set to ClientIP in the service definition. We use an F5 device as an external load balancer in front of the two pods, that is: user client --> F5 --> nodeip:nodeport.
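
For reference, a minimal service definition along these lines might look as follows (a sketch only; the name, selector and port values are illustrative, not taken from the customer environment):

# oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: pweb                 # illustrative service name
spec:
  type: NodePort             # expose the port on every node
  sessionAffinity: ClientIP  # pin each client IP to one endpoint
  selector:
    app: pweb                # must match the pod labels
  ports:
  - name: app-port
    port: 9080               # cluster-IP port
    targetPort: 9080         # container port
    nodePort: 31002          # node port used by the F5 pool
EOF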

After a rolling update, kube-proxy is unable to distribute traffic to the different pods when sessionAffinity: ClientIP is set; all traffic is sent to one pod and the other pods receive no traffic.


How reproducible:

Steps to Reproduce:
1. Configure the service as described in "From Customer" above
2. Perform a rolling update of the application

Actual results:

All traffic is distributed to a single pod.


Expected results:

Traffic should be distributed across all pods, even with sessionAffinity: ClientIP set.

Additional info:

None.

Comment 1 408514942 2017-11-10 08:02:16 UTC
The two pods are deployed on two different nodes.

Comment 2 Ben Bennett 2017-11-10 14:18:23 UTC
Unfortunately, I think this is behaving as designed.  The F5 is making the IP address that OpenShift sees be always the same.  And by asking for ClientIP session affinity, if there is enough traffic to the port that it never hits the timeout, you will effectively always be sending all traffic to the same pod.
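
(One quick way to check which source address the cluster actually sees is to inspect the connection-tracking table on a node that receives the F5 traffic; a sketch, assuming the conntrack tool is installed and using an illustrative node port:

# conntrack -L -p tcp 2>/dev/null | grep 31002

If every entry shows the same src= address, for example the F5 self IP, then ClientIP affinity will necessarily pin all traffic to a single pod.)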

What is the behavior you are looking for?  And do you need session affinity?

Comment 3 408514942 2017-11-13 09:35:21 UTC
(In reply to Ben Bennett from comment #2)
> Unfortunately, I think this is behaving as designed.  The F5 is making the
> IP address that OpenShift sees be always the same.  And by asking for
> ClientIP session affinity, if there is enough traffic to the port that it
> never hits the timeout, you will effectively always be sending all traffic
> to the same pod.
> 
> What is the behavior you are looking for?  And do you need session affinity?

Hi Bennett,

Thanks for your quick reply.

It is expected that traffic is distributed across the different pods, that is, some clients reach the application service provided by pod1 while other clients are directed to pod2. Some imbalance is acceptable, but that is not the same as all traffic going to a single pod.

The application provides banking services to users, so session affinity is important to them. Do you have a suggestion for this?

In addition, I confirmed the F5 configuration with the customer; it does not present the same source IP address for every client.

New information on this issue:

Traffic is distributed as expected after re-creating the application service, but the issue still occurs after a rolling update.


Thanks

- Loren

Comment 5 408514942 2017-11-15 09:43:21 UTC
Hi Bennett, do you have any ideas or a better solution for this issue?

Comment 6 Ben Bennett 2017-12-07 13:30:58 UTC
Ok... new questions...

1) When they do the rolling upgrade, how many pods did they start with?
2) How many are they ending up with?
3) and what is the maxSurge parameter of the deployment config set to?
4) How many clients are there using the service?
5) Do the clients use it frequently?
6) Do the clients change?
7) If new clients use the service, are they always going to the same pod, or are they distributed?

Thanks.

Also, if you can grab the output from iptables-save from one of the nodes where a _client_ pod is running, and tell me the service name that is misbehaving, that would be great.
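
For reference, something like the following should capture the requested data on one of the nodes (service and namespace names are placeholders):

# oc get pods -o wide -n <namespace>         # which pods back the service, and on which nodes
# oc get endpoints <service> -n <namespace>  # the endpoints kube-proxy should program
# iptables-save > /tmp/iptables-save.txt     # the NAT rules actually programmed on the node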

Comment 7 408514942 2017-12-14 12:01:12 UTC
Hi Ben

Related information as below:

1) When they do the rolling upgrade, how many pods did they start with?
4 pods, managed by a replication controller.
2) How many are they ending up with?
4 pods
3) and what is the maxSurge parameter of the deployment config set to?
We use a replication controller (not a deployment config), and perform the rolling update with: oc login -u system:admin; kubectl rolling-update [rcName] --image=[image] --timeout=[time] --image-pull-policy=IfNotPresent -n [namespace]
4) How many clients are there using the service?
>=200
5) Do the clients use it frequently?
yes
6) Do the clients change?
yes. 
7) If new clients use the service, are they always going to the same pod, or are they distributed?
After the rolling update, new clients are distributed across the first three pods; no traffic is distributed to the fourth pod.

The four pods are on node05~node08. The app name is pweb.
pweb-w52gw (10.1.8.3) is on node05; it is the newest pod created by the rolling update, and no traffic is distributed to it.
node05.example.com pweb-w52gw
node06.example.com pweb-wu1i5
node07.example.com pweb-x6pad
node08.example.com pweb-y397w
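
A quick way to confirm that the new pod (pweb-w52gw, 10.1.8.3) is registered behind the service, and that affinity is still configured, is something along these lines (a sketch, using the wangjin/pweb names that appear in the iptables output below):

# oc get endpoints pweb -n wangjin                    # 10.1.8.3:9080 should be listed
# oc describe svc pweb -n wangjin | grep -i affinity  # should show ClientIP

If 10.1.8.3 is listed as an endpoint but still receives no traffic, the problem is on the kube-proxy/iptables side rather than in endpoint registration.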


After the issue occurred, the iptables state was recorded as below (output of iptables-save):
=======================
# Generated by iptables-save v1.4.21 on Fri Nov 24 21:22:56 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [380:79997]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-NODEPORT-NON-LOCAL - [0:0]
:KUBE-SERVICES - [0:0]
:OS_FIREWALL_ALLOW - [0:0]
-A INPUT -m comment --comment "Ensure that non-local NodePort traffic can flow" -j KUBE-NODEPORT-NON-LOCAL
-A INPUT -i tun0 -m comment --comment "traffic from docker for internet" -j ACCEPT
-A INPUT -p udp -m multiport --dports 4789 -m comment --comment "001 vxlan incoming" -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j OS_FIREWALL_ALLOW
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o lbr0 -j DOCKER
-A FORWARD -o lbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lbr0 ! -o lbr0 -j ACCEPT
-A FORWARD -i lbr0 -o lbr0 -j ACCEPT
-A FORWARD -s 10.1.0.0/16 -j ACCEPT
-A FORWARD -d 10.1.0.0/16 -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-ISOLATION -j RETURN
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 10255 -j ACCEPT
-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 10255 -j ACCEPT
-A OS_FIREWALL_ALLOW -p udp -m state --state NEW -m udp --dport 4789 -j ACCEPT
COMMIT
# Completed on Fri Nov 24 21:22:56 2017
# Generated by iptables-save v1.4.21 on Fri Nov 24 21:22:56 2017
*nat
:PREROUTING ACCEPT [39:7378]
:INPUT ACCEPT [17:1766]
:OUTPUT ACCEPT [41:3420]
:POSTROUTING ACCEPT [28:2062]
:DOCKER - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORT-CONTAINER - [0:0]
:KUBE-NODEPORT-HOST - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PORTALS-CONTAINER - [0:0]
:KUBE-PORTALS-HOST - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-24PXNCHVVYR3RW4L - [0:0]
:KUBE-SEP-2JJFXYAO7OHLCOHX - [0:0]
:KUBE-SEP-2OKB2SHDNUVPSEPN - [0:0]
:KUBE-SEP-34Z6A2AEIG2KRPBU - [0:0]
:KUBE-SEP-47MVCVBBA5QZPJCX - [0:0]
:KUBE-SEP-4MX6PUWAYF2JLNO5 - [0:0]
:KUBE-SEP-4QYISJ6GJUWLAVOP - [0:0]
:KUBE-SEP-4RLYPFTFLSP5KBC4 - [0:0]
:KUBE-SEP-7IZC2J2HHR6BGIPX - [0:0]
:KUBE-SEP-7TENJY42TANOWJMO - [0:0]
:KUBE-SEP-7UGZ6EMRFYZBS7AQ - [0:0]
:KUBE-SEP-ACOJXTO6QXCR6Z64 - [0:0]
:KUBE-SEP-AMQ7KKKAYXOJN6BO - [0:0]
:KUBE-SEP-AXB2LBRBJ5T2B264 - [0:0]
:KUBE-SEP-B5TSJ5TG3JPXHBOL - [0:0]
:KUBE-SEP-BV2UZ3EDHOLDJ67H - [0:0]
:KUBE-SEP-C6G5M7ZOALEAR37L - [0:0]
:KUBE-SEP-CRO5FJLSEBB7LKDB - [0:0]
:KUBE-SEP-DIFZW3PRCT2PFAT7 - [0:0]
:KUBE-SEP-FVCSY6TJRVUCPYBR - [0:0]
:KUBE-SEP-GCHHDP75JZJTAQCQ - [0:0]
:KUBE-SEP-HAJY2C3Q3BTDX5RN - [0:0]
:KUBE-SEP-HNVU5PZLKRV2IFQP - [0:0]
:KUBE-SEP-IT22AJR5GTPMB2QC - [0:0]
:KUBE-SEP-J7X6ZCMBY4YZXDCM - [0:0]
:KUBE-SEP-JRAQOTQWIMRTLLFH - [0:0]
:KUBE-SEP-K3ETARAN7CFYD6AR - [0:0]
:KUBE-SEP-K4KDAIAF7JEREORI - [0:0]
:KUBE-SEP-KE7VJNO7LZYMOTYS - [0:0]
:KUBE-SEP-LTR3VN5IT44IQRQM - [0:0]
:KUBE-SEP-NRM272KHZ7777WWH - [0:0]
:KUBE-SEP-NT7VPRGGN4QZRTYN - [0:0]
:KUBE-SEP-NU22ISCZ3A2BUVR6 - [0:0]
:KUBE-SEP-PGWQ7L4UHHA2SXBS - [0:0]
:KUBE-SEP-PI55QNJF7U46JXEJ - [0:0]
:KUBE-SEP-PWZYAMDYDH2GHGK7 - [0:0]
:KUBE-SEP-QNOF2TNOVM4DJIJI - [0:0]
:KUBE-SEP-SC4LDGFKGN7E66T2 - [0:0]
:KUBE-SEP-TQK5Q5XFPWYDCXTE - [0:0]
:KUBE-SEP-TSP3S74LX75UZKTJ - [0:0]
:KUBE-SEP-VPLLSRIHCGSRJRS2 - [0:0]
:KUBE-SEP-WI2B7F7SSNKGAERU - [0:0]
:KUBE-SEP-WUPHBQG3ZYEDWM5H - [0:0]
:KUBE-SEP-XUKBPDPABJQQGZBE - [0:0]
:KUBE-SEP-XWUK2I4DAYYXNCUX - [0:0]
:KUBE-SEP-YMRG5OY6BZ7MTFO6 - [0:0]
:KUBE-SEP-YX7GGMGP26HH2TEL - [0:0]
:KUBE-SEP-ZMPNYSAJ4FWK52SX - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-3PQUIH6UWJRGHBBQ - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SVC-4WA56JY5E3JQEHIU - [0:0]
:KUBE-SVC-53M334O5C6AFPXED - [0:0]
:KUBE-SVC-5WKXUCCBPW4WXMKW - [0:0]
:KUBE-SVC-76GJ7A5QDKD24MJX - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SVC-BSIFX2PQIL42UOMO - [0:0]
:KUBE-SVC-CUWWUHHNOYUE7XCB - [0:0]
:KUBE-SVC-ENE2TRPV7JRSNXUV - [0:0]
:KUBE-SVC-FQYOV4TWDBYNHCIJ - [0:0]
:KUBE-SVC-HIJ7RANMQK6OC25P - [0:0]
:KUBE-SVC-IFVMONO6R7UKLXIJ - [0:0]
:KUBE-SVC-LUDRXBJ3H7JIOQPF - [0:0]
:KUBE-SVC-LXGWHLGFLZ6UGNWA - [0:0]
:KUBE-SVC-MSVZI6DZZNOM75U6 - [0:0]
:KUBE-SVC-NCZWMMAQVMU3LX5N - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NVIGWBZEJ5AXOHVJ - [0:0]
:KUBE-SVC-OTJAQHWXJWPSSDZB - [0:0]
:KUBE-SVC-UUZUOIGAZB2PXILA - [0:0]
:KUBE-SVC-VLAUYSBDYZJVWKPP - [0:0]
:KUBE-SVC-W5UXTCKUFQOMEGIM - [0:0]
:KUBE-SVC-WRABDGLKCV3QVL74 - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-CONTAINER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-CONTAINER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "handle ClusterIPs; NOTE: this must be before the NodePort rules" -j KUBE-PORTALS-HOST
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m addrtype --dst-type LOCAL -m comment --comment "handle service NodePorts; NOTE: this must be the last rule in the chain" -j KUBE-NODEPORT-HOST
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.1.8.0/24 ! -o lbr0 -j MASQUERADE
-A POSTROUTING -s 10.1.0.0/16 ! -d 10.1.0.0/16 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i lbr0 -j RETURN
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ebs:was-port" -m tcp --dport 31010 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ebs:was-port" -m tcp --dport 31010 -j KUBE-SVC-HIJ7RANMQK6OC25P
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ecs:soap-port" -m tcp --dport 31006 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ecs:soap-port" -m tcp --dport 31006 -j KUBE-SVC-NVIGWBZEJ5AXOHVJ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ecs:was-port" -m tcp --dport 31004 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ecs:was-port" -m tcp --dport 31004 -j KUBE-SVC-UUZUOIGAZB2PXILA
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ebs:soap-port" -m tcp --dport 31012 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ebs:soap-port" -m tcp --dport 31012 -j KUBE-SVC-WRABDGLKCV3QVL74
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/eis:was-port" -m tcp --dport 31007 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/eis:was-port" -m tcp --dport 31007 -j KUBE-SVC-4WA56JY5E3JQEHIU
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/eis:app-port" -m tcp --dport 31008 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/eis:app-port" -m tcp --dport 31008 -j KUBE-SVC-NCZWMMAQVMU3LX5N
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/eis:soap-port" -m tcp --dport 31009 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/eis:soap-port" -m tcp --dport 31009 -j KUBE-SVC-W5UXTCKUFQOMEGIM
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ebs:app-port" -m tcp --dport 31011 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ebs:app-port" -m tcp --dport 31011 -j KUBE-SVC-3PQUIH6UWJRGHBBQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/pweb:app-port" -m tcp --dport 31002 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/pweb:app-port" -m tcp --dport 31002 -j KUBE-SVC-OTJAQHWXJWPSSDZB
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/pweb:soap-port" -m tcp --dport 31003 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/pweb:soap-port" -m tcp --dport 31003 -j KUBE-SVC-VLAUYSBDYZJVWKPP
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ecs:app-port" -m tcp --dport 31005 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/ecs:app-port" -m tcp --dport 31005 -j KUBE-SVC-LUDRXBJ3H7JIOQPF
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/pweb:was-port" -m tcp --dport 31001 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "wangjin/pweb:was-port" -m tcp --dport 31001 -j KUBE-SVC-FQYOV4TWDBYNHCIJ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-24PXNCHVVYR3RW4L -s 10.1.10.2/32 -m comment --comment "wangjin/ecs:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-24PXNCHVVYR3RW4L -p tcp -m comment --comment "wangjin/ecs:was-port" -m tcp -j DNAT --to-destination 10.1.10.2:9060
-A KUBE-SEP-2JJFXYAO7OHLCOHX -s 10.1.9.2/32 -m comment --comment "wangjin/pweb:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-2JJFXYAO7OHLCOHX -p tcp -m comment --comment "wangjin/pweb:soap-port" -m recent --set --name KUBE-SEP-2JJFXYAO7OHLCOHX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.9.2:8880
-A KUBE-SEP-2OKB2SHDNUVPSEPN -s 10.1.8.3/32 -m comment --comment "wangjin/pweb:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-2OKB2SHDNUVPSEPN -p tcp -m comment --comment "wangjin/pweb:soap-port" -m recent --set --name KUBE-SEP-2OKB2SHDNUVPSEPN --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.8.3:8880
-A KUBE-SEP-34Z6A2AEIG2KRPBU -s 10.1.9.2/32 -m comment --comment "wangjin/pweb:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-34Z6A2AEIG2KRPBU -p tcp -m comment --comment "wangjin/pweb:app-port" -m recent --set --name KUBE-SEP-34Z6A2AEIG2KRPBU --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.9.2:9080
-A KUBE-SEP-47MVCVBBA5QZPJCX -s 10.1.10.3/32 -m comment --comment "wangjin/pweb:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-47MVCVBBA5QZPJCX -p tcp -m comment --comment "wangjin/pweb:app-port" -m recent --set --name KUBE-SEP-47MVCVBBA5QZPJCX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.10.3:9080
-A KUBE-SEP-4MX6PUWAYF2JLNO5 -s 10.1.19.2/32 -m comment --comment "wangjin/ebs:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-4MX6PUWAYF2JLNO5 -p tcp -m comment --comment "wangjin/ebs:app-port" -m tcp -j DNAT --to-destination 10.1.19.2:9080
-A KUBE-SEP-4QYISJ6GJUWLAVOP -s 88.1.48.217/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4QYISJ6GJUWLAVOP -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4QYISJ6GJUWLAVOP --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 88.1.48.217:8443
-A KUBE-SEP-4RLYPFTFLSP5KBC4 -s 88.1.48.218/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-4RLYPFTFLSP5KBC4 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-4RLYPFTFLSP5KBC4 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 88.1.48.218:8443
-A KUBE-SEP-7IZC2J2HHR6BGIPX -s 10.1.19.2/32 -m comment --comment "wangjin/ebs:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-7IZC2J2HHR6BGIPX -p tcp -m comment --comment "wangjin/ebs:was-port" -m tcp -j DNAT --to-destination 10.1.19.2:9060
-A KUBE-SEP-7TENJY42TANOWJMO -s 88.1.48.216/32 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-7TENJY42TANOWJMO -p tcp -m comment --comment "default/kubernetes:dns-tcp" -m recent --set --name KUBE-SEP-7TENJY42TANOWJMO --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 88.1.48.216:8053
-A KUBE-SEP-7UGZ6EMRFYZBS7AQ -s 10.1.7.3/32 -m comment --comment "wangjin/pweb:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-7UGZ6EMRFYZBS7AQ -p tcp -m comment --comment "wangjin/pweb:soap-port" -m recent --set --name KUBE-SEP-7UGZ6EMRFYZBS7AQ --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.7.3:8880
-A KUBE-SEP-ACOJXTO6QXCR6Z64 -s 10.1.7.2/32 -m comment --comment "wangjin/ecs:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-ACOJXTO6QXCR6Z64 -p tcp -m comment --comment "wangjin/ecs:app-port" -m tcp -j DNAT --to-destination 10.1.7.2:9080
-A KUBE-SEP-AMQ7KKKAYXOJN6BO -s 10.1.8.2/32 -m comment --comment "wangjin/ecs:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-AMQ7KKKAYXOJN6BO -p tcp -m comment --comment "wangjin/ecs:was-port" -m tcp -j DNAT --to-destination 10.1.8.2:9060
-A KUBE-SEP-AXB2LBRBJ5T2B264 -s 88.1.48.216/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-AXB2LBRBJ5T2B264 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-AXB2LBRBJ5T2B264 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 88.1.48.216:8443
-A KUBE-SEP-B5TSJ5TG3JPXHBOL -s 88.1.48.216/32 -m comment --comment "default/kubernetes:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-B5TSJ5TG3JPXHBOL -p udp -m comment --comment "default/kubernetes:dns" -m recent --set --name KUBE-SEP-B5TSJ5TG3JPXHBOL --mask 255.255.255.255 --rsource -m udp -j DNAT --to-destination 88.1.48.216:8053
-A KUBE-SEP-BV2UZ3EDHOLDJ67H -s 10.1.9.2/32 -m comment --comment "wangjin/pweb:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-BV2UZ3EDHOLDJ67H -p tcp -m comment --comment "wangjin/pweb:was-port" -m recent --set --name KUBE-SEP-BV2UZ3EDHOLDJ67H --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.9.2:9060
-A KUBE-SEP-C6G5M7ZOALEAR37L -s 88.1.48.217/32 -m comment --comment "default/kubernetes:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-C6G5M7ZOALEAR37L -p udp -m comment --comment "default/kubernetes:dns" -m recent --set --name KUBE-SEP-C6G5M7ZOALEAR37L --mask 255.255.255.255 --rsource -m udp -j DNAT --to-destination 88.1.48.217:8053
-A KUBE-SEP-CRO5FJLSEBB7LKDB -s 10.1.3.4/32 -m comment --comment "openshift-infra/heapster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-CRO5FJLSEBB7LKDB -p tcp -m comment --comment "openshift-infra/heapster:" -m tcp -j DNAT --to-destination 10.1.3.4:8082
-A KUBE-SEP-DIFZW3PRCT2PFAT7 -s 10.1.7.2/32 -m comment --comment "wangjin/ecs:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-DIFZW3PRCT2PFAT7 -p tcp -m comment --comment "wangjin/ecs:soap-port" -m tcp -j DNAT --to-destination 10.1.7.2:8880
-A KUBE-SEP-FVCSY6TJRVUCPYBR -s 10.1.13.2/32 -m comment --comment "wangjin/eis:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-FVCSY6TJRVUCPYBR -p tcp -m comment --comment "wangjin/eis:was-port" -m tcp -j DNAT --to-destination 10.1.13.2:9060
-A KUBE-SEP-GCHHDP75JZJTAQCQ -s 10.1.10.2/32 -m comment --comment "wangjin/ecs:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-GCHHDP75JZJTAQCQ -p tcp -m comment --comment "wangjin/ecs:app-port" -m tcp -j DNAT --to-destination 10.1.10.2:9080
-A KUBE-SEP-HAJY2C3Q3BTDX5RN -s 88.1.48.223/32 -m comment --comment "default/ose-router:80-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-HAJY2C3Q3BTDX5RN -p tcp -m comment --comment "default/ose-router:80-tcp" -m tcp -j DNAT --to-destination 88.1.48.223:80
-A KUBE-SEP-HNVU5PZLKRV2IFQP -s 88.1.48.218/32 -m comment --comment "default/kubernetes:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-HNVU5PZLKRV2IFQP -p udp -m comment --comment "default/kubernetes:dns" -m recent --set --name KUBE-SEP-HNVU5PZLKRV2IFQP --mask 255.255.255.255 --rsource -m udp -j DNAT --to-destination 88.1.48.218:8053
-A KUBE-SEP-IT22AJR5GTPMB2QC -s 10.1.13.2/32 -m comment --comment "wangjin/eis:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT22AJR5GTPMB2QC -p tcp -m comment --comment "wangjin/eis:app-port" -m tcp -j DNAT --to-destination 10.1.13.2:9080
-A KUBE-SEP-J7X6ZCMBY4YZXDCM -s 10.1.9.3/32 -m comment --comment "wangjin/ecs:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-J7X6ZCMBY4YZXDCM -p tcp -m comment --comment "wangjin/ecs:was-port" -m tcp -j DNAT --to-destination 10.1.9.3:9060
-A KUBE-SEP-JRAQOTQWIMRTLLFH -s 10.1.8.2/32 -m comment --comment "wangjin/ecs:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-JRAQOTQWIMRTLLFH -p tcp -m comment --comment "wangjin/ecs:soap-port" -m tcp -j DNAT --to-destination 10.1.8.2:8880
-A KUBE-SEP-K3ETARAN7CFYD6AR -s 10.1.8.3/32 -m comment --comment "wangjin/pweb:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-K3ETARAN7CFYD6AR -p tcp -m comment --comment "wangjin/pweb:was-port" -m recent --set --name KUBE-SEP-K3ETARAN7CFYD6AR --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.8.3:9060
-A KUBE-SEP-K4KDAIAF7JEREORI -s 88.1.48.223/32 -m comment --comment "default/ose-router:443-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-K4KDAIAF7JEREORI -p tcp -m comment --comment "default/ose-router:443-tcp" -m tcp -j DNAT --to-destination 88.1.48.223:443
-A KUBE-SEP-KE7VJNO7LZYMOTYS -s 10.1.9.3/32 -m comment --comment "wangjin/ecs:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-KE7VJNO7LZYMOTYS -p tcp -m comment --comment "wangjin/ecs:app-port" -m tcp -j DNAT --to-destination 10.1.9.3:9080
-A KUBE-SEP-LTR3VN5IT44IQRQM -s 10.1.10.3/32 -m comment --comment "wangjin/pweb:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-LTR3VN5IT44IQRQM -p tcp -m comment --comment "wangjin/pweb:soap-port" -m recent --set --name KUBE-SEP-LTR3VN5IT44IQRQM --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.10.3:8880
-A KUBE-SEP-NRM272KHZ7777WWH -s 10.1.3.5/32 -m comment --comment "openshift-infra/hawkular-metrics:https-endpoint" -j KUBE-MARK-MASQ
-A KUBE-SEP-NRM272KHZ7777WWH -p tcp -m comment --comment "openshift-infra/hawkular-metrics:https-endpoint" -m tcp -j DNAT --to-destination 10.1.3.5:8443
-A KUBE-SEP-NT7VPRGGN4QZRTYN -s 10.1.3.3/32 -m comment --comment "openshift-infra/hawkular-cassandra:ssl-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-NT7VPRGGN4QZRTYN -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:ssl-port" -m tcp -j DNAT --to-destination 10.1.3.3:7001
-A KUBE-SEP-NU22ISCZ3A2BUVR6 -s 10.1.3.3/32 -m comment --comment "openshift-infra/hawkular-cassandra:thift-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-NU22ISCZ3A2BUVR6 -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:thift-port" -m tcp -j DNAT --to-destination 10.1.3.3:9160
-A KUBE-SEP-PGWQ7L4UHHA2SXBS -s 10.1.3.3/32 -m comment --comment "openshift-infra/hawkular-cassandra:cql-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-PGWQ7L4UHHA2SXBS -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:cql-port" -m tcp -j DNAT --to-destination 10.1.3.3:9042
-A KUBE-SEP-PI55QNJF7U46JXEJ -s 10.1.19.2/32 -m comment --comment "wangjin/ebs:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-PI55QNJF7U46JXEJ -p tcp -m comment --comment "wangjin/ebs:soap-port" -m tcp -j DNAT --to-destination 10.1.19.2:8880
-A KUBE-SEP-PWZYAMDYDH2GHGK7 -s 10.1.7.3/32 -m comment --comment "wangjin/pweb:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-PWZYAMDYDH2GHGK7 -p tcp -m comment --comment "wangjin/pweb:was-port" -m recent --set --name KUBE-SEP-PWZYAMDYDH2GHGK7 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.7.3:9060
-A KUBE-SEP-QNOF2TNOVM4DJIJI -s 88.1.48.218/32 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-QNOF2TNOVM4DJIJI -p tcp -m comment --comment "default/kubernetes:dns-tcp" -m recent --set --name KUBE-SEP-QNOF2TNOVM4DJIJI --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 88.1.48.218:8053
-A KUBE-SEP-SC4LDGFKGN7E66T2 -s 10.1.10.2/32 -m comment --comment "wangjin/ecs:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-SC4LDGFKGN7E66T2 -p tcp -m comment --comment "wangjin/ecs:soap-port" -m tcp -j DNAT --to-destination 10.1.10.2:8880
-A KUBE-SEP-TQK5Q5XFPWYDCXTE -s 10.1.13.2/32 -m comment --comment "wangjin/eis:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-TQK5Q5XFPWYDCXTE -p tcp -m comment --comment "wangjin/eis:soap-port" -m tcp -j DNAT --to-destination 10.1.13.2:8880
-A KUBE-SEP-TSP3S74LX75UZKTJ -s 10.1.9.3/32 -m comment --comment "wangjin/ecs:soap-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-TSP3S74LX75UZKTJ -p tcp -m comment --comment "wangjin/ecs:soap-port" -m tcp -j DNAT --to-destination 10.1.9.3:8880
-A KUBE-SEP-VPLLSRIHCGSRJRS2 -s 10.1.3.3/32 -m comment --comment "openshift-infra/hawkular-cassandra:tcp-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-VPLLSRIHCGSRJRS2 -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:tcp-port" -m tcp -j DNAT --to-destination 10.1.3.3:7000
-A KUBE-SEP-WI2B7F7SSNKGAERU -s 10.1.7.3/32 -m comment --comment "wangjin/pweb:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-WI2B7F7SSNKGAERU -p tcp -m comment --comment "wangjin/pweb:app-port" -m recent --set --name KUBE-SEP-WI2B7F7SSNKGAERU --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.7.3:9080
-A KUBE-SEP-WUPHBQG3ZYEDWM5H -s 10.1.8.3/32 -m comment --comment "wangjin/pweb:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-WUPHBQG3ZYEDWM5H -p tcp -m comment --comment "wangjin/pweb:app-port" -m recent --set --name KUBE-SEP-WUPHBQG3ZYEDWM5H --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.8.3:9080
-A KUBE-SEP-XUKBPDPABJQQGZBE -s 10.1.8.2/32 -m comment --comment "wangjin/ecs:app-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-XUKBPDPABJQQGZBE -p tcp -m comment --comment "wangjin/ecs:app-port" -m tcp -j DNAT --to-destination 10.1.8.2:9080
-A KUBE-SEP-XWUK2I4DAYYXNCUX -s 10.1.10.3/32 -m comment --comment "wangjin/pweb:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-XWUK2I4DAYYXNCUX -p tcp -m comment --comment "wangjin/pweb:was-port" -m recent --set --name KUBE-SEP-XWUK2I4DAYYXNCUX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.1.10.3:9060
-A KUBE-SEP-YMRG5OY6BZ7MTFO6 -s 10.1.7.2/32 -m comment --comment "wangjin/ecs:was-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-YMRG5OY6BZ7MTFO6 -p tcp -m comment --comment "wangjin/ecs:was-port" -m tcp -j DNAT --to-destination 10.1.7.2:9060
-A KUBE-SEP-YX7GGMGP26HH2TEL -s 88.1.48.223/32 -m comment --comment "default/ose-router:1936-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-YX7GGMGP26HH2TEL -p tcp -m comment --comment "default/ose-router:1936-tcp" -m tcp -j DNAT --to-destination 88.1.48.223:1936
-A KUBE-SEP-ZMPNYSAJ4FWK52SX -s 88.1.48.217/32 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZMPNYSAJ4FWK52SX -p tcp -m comment --comment "default/kubernetes:dns-tcp" -m recent --set --name KUBE-SEP-ZMPNYSAJ4FWK52SX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 88.1.48.217:8053
-A KUBE-SERVICES -d 172.30.203.150/32 -p tcp -m comment --comment "openshift-infra/hawkular-metrics:https-endpoint cluster IP" -m tcp --dport 443 -j KUBE-SVC-CUWWUHHNOYUE7XCB
-A KUBE-SERVICES -d 172.30.20.172/32 -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:tcp-port cluster IP" -m tcp --dport 7000 -j KUBE-SVC-5WKXUCCBPW4WXMKW
-A KUBE-SERVICES -d 172.30.164.42/32 -p tcp -m comment --comment "openshift-infra/heapster: cluster IP" -m tcp --dport 80 -j KUBE-SVC-LXGWHLGFLZ6UGNWA
-A KUBE-SERVICES -d 172.30.152.246/32 -p tcp -m comment --comment "wangjin/ebs:was-port cluster IP" -m tcp --dport 9060 -j KUBE-SVC-HIJ7RANMQK6OC25P
-A KUBE-SERVICES -d 172.30.51.132/32 -p tcp -m comment --comment "default/ose-router:443-tcp cluster IP" -m tcp --dport 443 -j KUBE-SVC-ENE2TRPV7JRSNXUV
-A KUBE-SERVICES -d 172.30.2.118/32 -p tcp -m comment --comment "wangjin/ecs:soap-port cluster IP" -m tcp --dport 8880 -j KUBE-SVC-NVIGWBZEJ5AXOHVJ
-A KUBE-SERVICES -d 172.30.51.132/32 -p tcp -m comment --comment "default/ose-router:80-tcp cluster IP" -m tcp --dport 80 -j KUBE-SVC-53M334O5C6AFPXED
-A KUBE-SERVICES -d 172.30.2.118/32 -p tcp -m comment --comment "wangjin/ecs:was-port cluster IP" -m tcp --dport 9060 -j KUBE-SVC-UUZUOIGAZB2PXILA
-A KUBE-SERVICES -d 172.30.20.172/32 -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:thift-port cluster IP" -m tcp --dport 9160 -j KUBE-SVC-76GJ7A5QDKD24MJX
-A KUBE-SERVICES -d 172.30.152.246/32 -p tcp -m comment --comment "wangjin/ebs:soap-port cluster IP" -m tcp --dport 8880 -j KUBE-SVC-WRABDGLKCV3QVL74
-A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 172.30.0.1/32 -p udp -m comment --comment "default/kubernetes:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SERVICES -d 172.30.20.172/32 -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:ssl-port cluster IP" -m tcp --dport 7001 -j KUBE-SVC-MSVZI6DZZNOM75U6
-A KUBE-SERVICES -d 172.30.212.106/32 -p tcp -m comment --comment "wangjin/eis:was-port cluster IP" -m tcp --dport 9060 -j KUBE-SVC-4WA56JY5E3JQEHIU
-A KUBE-SERVICES -d 172.30.212.106/32 -p tcp -m comment --comment "wangjin/eis:app-port cluster IP" -m tcp --dport 9080 -j KUBE-SVC-NCZWMMAQVMU3LX5N
-A KUBE-SERVICES -d 172.30.212.106/32 -p tcp -m comment --comment "wangjin/eis:soap-port cluster IP" -m tcp --dport 8880 -j KUBE-SVC-W5UXTCKUFQOMEGIM
-A KUBE-SERVICES -d 172.30.152.246/32 -p tcp -m comment --comment "wangjin/ebs:app-port cluster IP" -m tcp --dport 9080 -j KUBE-SVC-3PQUIH6UWJRGHBBQ
-A KUBE-SERVICES -d 172.30.51.132/32 -p tcp -m comment --comment "default/ose-router:1936-tcp cluster IP" -m tcp --dport 1936 -j KUBE-SVC-BSIFX2PQIL42UOMO
-A KUBE-SERVICES -d 172.30.20.172/32 -p tcp -m comment --comment "openshift-infra/hawkular-cassandra:cql-port cluster IP" -m tcp --dport 9042 -j KUBE-SVC-IFVMONO6R7UKLXIJ
-A KUBE-SERVICES -d 172.30.5.3/32 -p tcp -m comment --comment "wangjin/pweb:app-port cluster IP" -m tcp --dport 9080 -j KUBE-SVC-OTJAQHWXJWPSSDZB
-A KUBE-SERVICES -d 172.30.5.3/32 -p tcp -m comment --comment "wangjin/pweb:soap-port cluster IP" -m tcp --dport 8880 -j KUBE-SVC-VLAUYSBDYZJVWKPP
-A KUBE-SERVICES -d 172.30.2.118/32 -p tcp -m comment --comment "wangjin/ecs:app-port cluster IP" -m tcp --dport 9080 -j KUBE-SVC-LUDRXBJ3H7JIOQPF
-A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SERVICES -d 172.30.5.3/32 -p tcp -m comment --comment "wangjin/pweb:was-port cluster IP" -m tcp --dport 9060 -j KUBE-SVC-FQYOV4TWDBYNHCIJ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-3PQUIH6UWJRGHBBQ -m comment --comment "wangjin/ebs:app-port" -j KUBE-SEP-4MX6PUWAYF2JLNO5
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-B5TSJ5TG3JPXHBOL --mask 255.255.255.255 --rsource -j KUBE-SEP-B5TSJ5TG3JPXHBOL
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-C6G5M7ZOALEAR37L --mask 255.255.255.255 --rsource -j KUBE-SEP-C6G5M7ZOALEAR37L
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-HNVU5PZLKRV2IFQP --mask 255.255.255.255 --rsource -j KUBE-SEP-HNVU5PZLKRV2IFQP
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-B5TSJ5TG3JPXHBOL
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-C6G5M7ZOALEAR37L
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment "default/kubernetes:dns" -j KUBE-SEP-HNVU5PZLKRV2IFQP
-A KUBE-SVC-4WA56JY5E3JQEHIU -m comment --comment "wangjin/eis:was-port" -j KUBE-SEP-FVCSY6TJRVUCPYBR
-A KUBE-SVC-53M334O5C6AFPXED -m comment --comment "default/ose-router:80-tcp" -j KUBE-SEP-HAJY2C3Q3BTDX5RN
-A KUBE-SVC-5WKXUCCBPW4WXMKW -m comment --comment "openshift-infra/hawkular-cassandra:tcp-port" -j KUBE-SEP-VPLLSRIHCGSRJRS2
-A KUBE-SVC-76GJ7A5QDKD24MJX -m comment --comment "openshift-infra/hawkular-cassandra:thift-port" -j KUBE-SEP-NU22ISCZ3A2BUVR6
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-7TENJY42TANOWJMO --mask 255.255.255.255 --rsource -j KUBE-SEP-7TENJY42TANOWJMO
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-ZMPNYSAJ4FWK52SX --mask 255.255.255.255 --rsource -j KUBE-SEP-ZMPNYSAJ4FWK52SX
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-QNOF2TNOVM4DJIJI --mask 255.255.255.255 --rsource -j KUBE-SEP-QNOF2TNOVM4DJIJI
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7TENJY42TANOWJMO
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ZMPNYSAJ4FWK52SX
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment "default/kubernetes:dns-tcp" -j KUBE-SEP-QNOF2TNOVM4DJIJI
-A KUBE-SVC-BSIFX2PQIL42UOMO -m comment --comment "default/ose-router:1936-tcp" -j KUBE-SEP-YX7GGMGP26HH2TEL
-A KUBE-SVC-CUWWUHHNOYUE7XCB -m comment --comment "openshift-infra/hawkular-metrics:https-endpoint" -j KUBE-SEP-NRM272KHZ7777WWH
-A KUBE-SVC-ENE2TRPV7JRSNXUV -m comment --comment "default/ose-router:443-tcp" -j KUBE-SEP-K4KDAIAF7JEREORI
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-XWUK2I4DAYYXNCUX --mask 255.255.255.255 --rsource -j KUBE-SEP-XWUK2I4DAYYXNCUX
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-PWZYAMDYDH2GHGK7 --mask 255.255.255.255 --rsource -j KUBE-SEP-PWZYAMDYDH2GHGK7
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-K3ETARAN7CFYD6AR --mask 255.255.255.255 --rsource -j KUBE-SEP-K3ETARAN7CFYD6AR
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-BV2UZ3EDHOLDJ67H --mask 255.255.255.255 --rsource -j KUBE-SEP-BV2UZ3EDHOLDJ67H
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-XWUK2I4DAYYXNCUX
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-PWZYAMDYDH2GHGK7
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-K3ETARAN7CFYD6AR
-A KUBE-SVC-FQYOV4TWDBYNHCIJ -m comment --comment "wangjin/pweb:was-port" -j KUBE-SEP-BV2UZ3EDHOLDJ67H
-A KUBE-SVC-HIJ7RANMQK6OC25P -m comment --comment "wangjin/ebs:was-port" -j KUBE-SEP-7IZC2J2HHR6BGIPX
-A KUBE-SVC-IFVMONO6R7UKLXIJ -m comment --comment "openshift-infra/hawkular-cassandra:cql-port" -j KUBE-SEP-PGWQ7L4UHHA2SXBS
-A KUBE-SVC-LUDRXBJ3H7JIOQPF -m comment --comment "wangjin/ecs:app-port" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-GCHHDP75JZJTAQCQ
-A KUBE-SVC-LUDRXBJ3H7JIOQPF -m comment --comment "wangjin/ecs:app-port" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-ACOJXTO6QXCR6Z64
-A KUBE-SVC-LUDRXBJ3H7JIOQPF -m comment --comment "wangjin/ecs:app-port" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XUKBPDPABJQQGZBE
-A KUBE-SVC-LUDRXBJ3H7JIOQPF -m comment --comment "wangjin/ecs:app-port" -j KUBE-SEP-KE7VJNO7LZYMOTYS
-A KUBE-SVC-LXGWHLGFLZ6UGNWA -m comment --comment "openshift-infra/heapster:" -j KUBE-SEP-CRO5FJLSEBB7LKDB
-A KUBE-SVC-MSVZI6DZZNOM75U6 -m comment --comment "openshift-infra/hawkular-cassandra:ssl-port" -j KUBE-SEP-NT7VPRGGN4QZRTYN
-A KUBE-SVC-NCZWMMAQVMU3LX5N -m comment --comment "wangjin/eis:app-port" -j KUBE-SEP-IT22AJR5GTPMB2QC
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-AXB2LBRBJ5T2B264 --mask 255.255.255.255 --rsource -j KUBE-SEP-AXB2LBRBJ5T2B264
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-4QYISJ6GJUWLAVOP --mask 255.255.255.255 --rsource -j KUBE-SEP-4QYISJ6GJUWLAVOP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-4RLYPFTFLSP5KBC4 --mask 255.255.255.255 --rsource -j KUBE-SEP-4RLYPFTFLSP5KBC4
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-AXB2LBRBJ5T2B264
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-4QYISJ6GJUWLAVOP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-4RLYPFTFLSP5KBC4
-A KUBE-SVC-NVIGWBZEJ5AXOHVJ -m comment --comment "wangjin/ecs:soap-port" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-SC4LDGFKGN7E66T2
-A KUBE-SVC-NVIGWBZEJ5AXOHVJ -m comment --comment "wangjin/ecs:soap-port" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-DIFZW3PRCT2PFAT7
-A KUBE-SVC-NVIGWBZEJ5AXOHVJ -m comment --comment "wangjin/ecs:soap-port" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-JRAQOTQWIMRTLLFH
-A KUBE-SVC-NVIGWBZEJ5AXOHVJ -m comment --comment "wangjin/ecs:soap-port" -j KUBE-SEP-TSP3S74LX75UZKTJ
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-47MVCVBBA5QZPJCX --mask 255.255.255.255 --rsource -j KUBE-SEP-47MVCVBBA5QZPJCX
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-WI2B7F7SSNKGAERU --mask 255.255.255.255 --rsource -j KUBE-SEP-WI2B7F7SSNKGAERU
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-WUPHBQG3ZYEDWM5H --mask 255.255.255.255 --rsource -j KUBE-SEP-WUPHBQG3ZYEDWM5H
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-34Z6A2AEIG2KRPBU --mask 255.255.255.255 --rsource -j KUBE-SEP-34Z6A2AEIG2KRPBU
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-47MVCVBBA5QZPJCX
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WI2B7F7SSNKGAERU
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-WUPHBQG3ZYEDWM5H
-A KUBE-SVC-OTJAQHWXJWPSSDZB -m comment --comment "wangjin/pweb:app-port" -j KUBE-SEP-34Z6A2AEIG2KRPBU
-A KUBE-SVC-UUZUOIGAZB2PXILA -m comment --comment "wangjin/ecs:was-port" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-24PXNCHVVYR3RW4L
-A KUBE-SVC-UUZUOIGAZB2PXILA -m comment --comment "wangjin/ecs:was-port" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-YMRG5OY6BZ7MTFO6
-A KUBE-SVC-UUZUOIGAZB2PXILA -m comment --comment "wangjin/ecs:was-port" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-AMQ7KKKAYXOJN6BO
-A KUBE-SVC-UUZUOIGAZB2PXILA -m comment --comment "wangjin/ecs:was-port" -j KUBE-SEP-J7X6ZCMBY4YZXDCM
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-LTR3VN5IT44IQRQM --mask 255.255.255.255 --rsource -j KUBE-SEP-LTR3VN5IT44IQRQM
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-7UGZ6EMRFYZBS7AQ --mask 255.255.255.255 --rsource -j KUBE-SEP-7UGZ6EMRFYZBS7AQ
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-2OKB2SHDNUVPSEPN --mask 255.255.255.255 --rsource -j KUBE-SEP-2OKB2SHDNUVPSEPN
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-2JJFXYAO7OHLCOHX --mask 255.255.255.255 --rsource -j KUBE-SEP-2JJFXYAO7OHLCOHX
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LTR3VN5IT44IQRQM
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7UGZ6EMRFYZBS7AQ
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2OKB2SHDNUVPSEPN
-A KUBE-SVC-VLAUYSBDYZJVWKPP -m comment --comment "wangjin/pweb:soap-port" -j KUBE-SEP-2JJFXYAO7OHLCOHX
-A KUBE-SVC-W5UXTCKUFQOMEGIM -m comment --comment "wangjin/eis:soap-port" -j KUBE-SEP-TQK5Q5XFPWYDCXTE
-A KUBE-SVC-WRABDGLKCV3QVL74 -m comment --comment "wangjin/ebs:soap-port" -j KUBE-SEP-PI55QNJF7U46JXEJ
COMMIT


Best Regards
Loren

Comment 8 jtanenba 2018-01-11 20:07:01 UTC
Hi, could you provide all the previous information plus the output from:

sudo iptables -nvL -t nat

sudo conntrack -L


Could you please make sure all the pods have been up long enough to see traffic to the app before collecting the data?
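
In addition, two things that may help narrow this down (a sketch; the chain names below are taken from the iptables-save output in comment 7 and will be different once the pods have been recreated):

# iptables -t nat -nvL KUBE-SVC-OTJAQHWXJWPSSDZB     # per-endpoint packet counters for wangjin/pweb:app-port
# ls /proc/net/xt_recent/                            # one list per KUBE-SEP chain, kept by the 'recent' match
# cat /proc/net/xt_recent/KUBE-SEP-WUPHBQG3ZYEDWM5H  # client IPs currently pinned to pod 10.1.8.3

The counters show whether any packet ever reached the chain of the pod that gets no traffic, and the xt_recent lists show which client addresses the ClientIP affinity has remembered for each endpoint.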

Comment 9 Ben Bennett 2018-01-17 21:01:43 UTC
@Meng Bo: Can you reproduce this behavior?  I have been unable to :-(

Comment 10 Meng Bo 2018-01-18 10:21:58 UTC
I could not reproduce it either.

Steps:
1. Setup multi node env with 1 master 2 nodes
2. Create rc with 2 pods and svc with nodeport and sessionaffinity=ClientIP
3. Access the node_ip:node_port from different client ip
# for i in {1..10} ; do curl $node_ip1:32381 ; curl $node_ip2:32381 ; done
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13
HOSTNAME:test-rc-g52kx IP:10.128.0.13

# for i in {1..10} ; do curl $node_ip1:32381 ; curl $node_ip2:32381 ; done
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13
HOSTNAME:test-rc-ww99v IP:10.129.0.13

4. Delete the pods one by one to simulate the rolling update
5. Access the node_ip:node_port from different client ip again
# for i in {1..10} ; do curl $node_ip1:32381 ;  curl $node_ip2:32381 ; done
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14

# for i in {1..10} ; do curl $node_ip1:32381 ; curl $node_ip2:32381 ; done
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14


One interesting thing: on some clients, the session affinity does not seem to work correctly. I am not sure of the reason, but at least traffic reaches each backend.
$ for i in {1..10} ; do curl $node_ip1:32381 ;  curl $node_ip2:32381 ; done
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14
HOSTNAME:test-rc-mx6gq IP:10.128.0.14
HOSTNAME:test-rc-s4d2k IP:10.129.0.14


@408514942@qq.com 
To track down the problem, could you try to access the node_ip:node_port without going through the F5 load balancer, and see if you get the same result?
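
(For example, from a client host that has several addresses on the node network, something like the following would hit the NodePort with distinct client IPs while bypassing the F5; the addresses and port are illustrative:

# for ip in 192.168.1.101 192.168.1.102 192.168.1.103; do curl --interface $ip http://<node_ip>:<node_port>/ ; done
)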

Comment 11 jtanenba 2018-01-18 15:22:22 UTC
The customer closed their case and we can't reproduce the issue locally.

Comment 12 408514942 2018-02-26 02:51:44 UTC
Hi mengbo,

Could you please try simulating multiple clients (>100) accessing nodeip:port during a rolling update? It is expected that traffic is distributed across the different pods, not just one. Thanks.

Best Regards
Loren

Comment 13 Meng Bo 2018-03-01 07:03:06 UTC
I still cannot reproduce the issue with 120 clients, each with a different client IP, accessing the NodePort service.

To simulate multiple clients, I set up a private network on my cluster.
The master ip is 10.1.1.2
The node1 ip is 10.1.1.3
The node2 ip is 10.1.1.4

1. Create 120 macvlan links and put them in different netns
# for i in {1..120} 
do 
ip netns add ns$i
ip link add macvlan0 link eth1 type macvlan mode bridge
ip link set macvlan0 up
ip link set macvlan0 netns ns$i
let x=$i+100 ; ip netns exec ns$i ip addr add 10.1.1.${x}/24 dev macvlan0
ip netns exec ns$i ip link set macvlan0 up
done

2. Create rc with 2 pods and svc with nodeport and sessionaffinity=ClientIP

3. Access the nodeip:nodeport from each netns, in parallel
# for i in {1..120}
do
ip netns exec ns$i curl 10.1.1.3:30364 &
done

4. Delete the pods as a rolling deploy would do
# oc delete po --all

5. Access the nodeip:nodeport from each netns again after the new pods are up
# for i in {1..120}
do
ip netns exec ns$i curl 10.1.1.3:30364 &
done


Before the rolling deploy:
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-mwvjl IP:10.128.0.5
HOSTNAME:test-rc-kn4xr IP:10.129.0.5
...
...

After the rolling deploy:
HOSTNAME:test-rc-5bsgb IP:10.128.0.6
HOSTNAME:test-rc-5bsgb IP:10.128.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5bsgb IP:10.128.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5qz57 IP:10.129.0.6
HOSTNAME:test-rc-5bsgb IP:10.128.0.6
HOSTNAME:test-rc-5bsgb IP:10.128.0.6
HOSTNAME:test-rc-5bsgb IP:10.128.0.6
...
...
...

Still, traffic is distributed across all the pods.

Comment 14 Ben Bennett 2018-03-02 17:39:55 UTC
Please re-open if you can reproduce the behavior without the F5 involved.

