
Bug 1522700

Summary: [Extras] Update DPDK to 17.11
Product: Red Hat Enterprise Linux 7
Component: dpdk
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Target Release: ---
Keywords: Extras, Rebase
Whiteboard:
Fixed In Version: dpdk-17.11-7.el7
Reporter: Timothy Redaelli <tredaelli>
Assignee: Timothy Redaelli <tredaelli>
QA Contact: Jean-Tsung Hsiao <jhsiao>
Docs Contact:
CC: atragler, ctrautma, fleitner, hewang, kzhang, narendra_k, ovs-team, pezhang, prabhakar_pujeri, tli, tredaelli, yselkowi
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-10 23:59:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 1518884, 1335825
Bug Blocks:

Description Timothy Redaelli 2017-12-06 09:45:13 UTC

Comment 7 Jean-Tsung Hsiao 2017-12-21 18:15:30 UTC
The package has been tested and passed the following tests:

# PvP 64-byte zero-loss testing between testpmd and Xena as the traffic generator --- the vhost-user side is installed with DPDK-17.11-4 (a setup sketch follows after this list).

# P2P 64-byte zero-loss testing between testpmd/host and Trex --- the host is loaded with DPDK-17.11-4.

# Netperf testing between network namespaces on the test driver, using testpmd on the SUT as a loopback.
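
For reference, a minimal sketch of how a testpmd/vhost-user PvP loop of this kind is typically brought up; this is not the exact lab setup, and the PCI address, core list and socket path below are placeholders:

# Detach the physical NIC from its kernel driver and hand it to vfio-pci
# (the PCI address is a placeholder)
driverctl set-override 0000:03:00.0 vfio-pci

# testpmd forwards between the physical port and a vhost-user port (io mode),
# standing in for the virtual switch in the PvP loop
testpmd -l 0,2,4 -n 4 --socket-mem 1024,0 -w 0000:03:00.0 \
    --vdev 'net_vhost0,iface=/tmp/vhostuser0.sock,queues=1' \
    -- -i --nb-cores=2 --forward-mode=io --auto-start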

Comment 8 Marcelo Ricardo Leitner 2018-02-16 15:58:54 UTC
*** Bug 1455140 has been marked as a duplicate of this bug. ***

Comment 9 Marcelo Ricardo Leitner 2018-02-16 16:04:39 UTC
*** Bug 1517210 has been marked as a duplicate of this bug. ***

Comment 10 Flavio Leitner 2018-02-16 16:07:08 UTC
*** Bug 1497384 has been marked as a duplicate of this bug. ***

Comment 11 Christian Trautman 2018-03-01 01:38:47 UTC
Tested http://download-node-02.eng.bos.redhat.com/brewroot/packages/dpdk/17.11/7.el7/x86_64/dpdk-17.11-7.el7.x86_64.rpm

Ran SR-IOV testing and standard PvP testing with OVS-DPDK 2.9 from FDP.

Compared DPDK 16.11-2 against 17.11 and found no performance degradation with Intel X520 cards at 64-byte or 1500-byte packet sizes.
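
For context, a minimal sketch of how an OVS-DPDK 2.9 bridge for this kind of PvP test is usually assembled; the PCI address, memory sizes and names are placeholders, not the exact lab configuration:

# Enable DPDK in OVS and reserve hugepage memory for the userspace datapath
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"

# Userspace (netdev) bridge with one physical DPDK port and one vhost-user port
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser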

Comment 13 Jean-Tsung Hsiao 2018-03-01 12:06:05 UTC
The package has been tested and passed the following tests:

1. PvP 64-byte zero-loss testing between testpmd and Xena over ixgbe using 2Q/4PMD --- 9.86 Mpps.

2. PvP 64-byte zero-loss testing between testpmd and Trex over 40Gb i40e (XL710) using 2Q/4PMD --- 5.90 Mpps.

Related packages:

The host is using OVS 2.9.0-1 (FDP), and the guest is using DPDK 17.11-7.

Both the host and the guest are running kernel-851.
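
A sketch of how the 2Q/4PMD layout above could be expressed on the OVS side; the CPU mask and port name are illustrative only and have to match the host's core topology:

# Four PMD threads, e.g. on cores 2, 4, 6 and 8 (mask 0x154 sets bits 2, 4, 6, 8)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x154

# Two Rx queues on the physical DPDK port; for a vhost-user port the queue
# count is negotiated from the QEMU queues= setting rather than set in OVS
ovs-vsctl set Interface dpdk0 options:n_rxq=2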

Comment 15 Jean-Tsung Hsiao 2018-03-02 03:03:58 UTC
(In reply to Jean-Tsung Hsiao from comment #13)
> The package has been tested and passed the following tests:
> 
> 1. PvP 64-byte zero-loss testing between testpmd and Xena over ixgbe using
> 2Q/4PMD --- 9.86 Mpps.
For 1Q/2PMD the rate is 5.02 Mpps.

> 
> 2. PvP 64-byte zero-loss testing between testpmd and Trex over 40Gb
> i40e (XL710) using 2Q/4PMD --- 5.90 Mpps.
> 

Also ran P2P 64-byte zero-loss testing between testpmd/host and Trex over 40Gb XL710 --- 36.10 Mpps.


> Related packages:
> 
> The host is using OVS 2.9.0-1 (FDP), and the guest is using DPDK 17.11-7.
> 
> Both the host and the guest are running kernel-851.

Comment 16 Pei Zhang 2018-03-02 03:54:00 UTC
Update:

From the Virt QE side, all testing with DPDK has finished with PASS results.

Versions:
kernel-3.10.0-855.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
tuned-2.9.0-1.el7.noarch
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64
microcode-20180108.tgz

Network cards: 10-Gigabit X540-AT2

Intel Meltdown and Spectre fixes were applied to both the host and the guest.
microcode: revision 0x3b, date = 2017-11-17

Values of related options:
# cat /sys/kernel/debug/x86/pti_enabled
1
# cat /sys/kernel/debug/x86/ibpb_enabled
1
# cat /sys/kernel/debug/x86/ibrs_enabled
0
# cat /sys/kernel/debug/x86/retp_enabled
1
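
As an aside, the loaded microcode revision quoted above can usually be cross-checked directly; the exact output format varies between kernel versions:

# Microcode revision as seen by the running kernel (expected to report 0x3b here)
grep -m1 microcode /proc/cpuinfo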

Testing Scenarios:
(1) PVP performance testing -- PASS
(Note: DPDK's testpmd takes the role of Open vSwitch in the host)

The throughput results look good, as expected:
1_Queue/0_Loss/64Byte_packet throughput: 9.49Mpps
1_Queue/0.002%_Loss/64Byte_packet throughput: 17.06Mpps

(2) PVP live migration testing -- PASS

All 10 ping-pong migrations worked well, as expected:
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      119     13254        15    9352010.0
 1       1Mpps      123     13453        15    9546293.0
 2       1Mpps      131     12844        15    7015119.0
 3       1Mpps      119     12575        14    4898332.0
 4       1Mpps      124     13021        15    4759253.0
 5       1Mpps      125     13461        16    8348222.0
 6       1Mpps      122     12638        14    6433116.0
 7       1Mpps      121     12581        14    5951345.0
 8       1Mpps      128     13078        15    5130945.0
 9       1Mpps      119     13181        13    6856561.0


(3) Guest with device assignment -- PASS

The throughput results look good, as expected:
1_Queue/0_Loss/64Byte_packet throughput: 20.64Mpps
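
A rough sketch of the device-assignment path; the PCI address is a placeholder, and the lab setup most likely drives this through libvirt rather than a raw QEMU command line:

# Host side: detach the NIC from its kernel driver so it can be assigned
driverctl set-override 0000:5e:00.0 vfio-pci

# Guest side: pass the whole device through with VFIO (QEMU command-line excerpt);
# testpmd inside the guest then drives the assigned NIC directly
qemu-kvm ... -device vfio-pci,host=5e:00.0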

(4) Guest with ovs+dpdk+vhost-user -- PASS

Other versions:
openvswitch-2.9.0-3.el7fdp.x86_64

The throughput results look good, as expected:
2_Queues/0_Loss/64Byte_packet throughput: 21.30Mpps
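
A sketch of how a guest is typically wired to an OVS dpdkvhostuser socket with two queues; the socket path, object IDs and memory size are placeholders, and a libvirt <interface type='vhostuser'> definition would be the usual equivalent:

# vhost-user requires the guest RAM to be backed by shared hugepages;
# the remaining machine, disk and display options are omitted from this excerpt
qemu-kvm -m 4096 \
  -object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user0 \
  -netdev vhost-user,id=net0,chardev=char0,queues=2 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=6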

Comment 21 errata-xmlrpc 2018-04-10 23:59:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1065