draft-ietf-bmwg-ngfw-performance-02.txt
Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                           C. Rossenhoevel
Expires: May 22, 2020                                           EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                       November 19, 2019

    Benchmarking Methodology for Network Security Device Performance
                  draft-ietf-bmwg-ngfw-performance-02
Abstract
   This document provides benchmarking terminology and methodology for
   next-generation network security devices including next-generation
   firewalls (NGFW), intrusion detection and prevention solutions (IDS/
   IPS) and unified threat management (UTM) implementations.  This
   document aims to substantially improve the applicability,
   reproducibility, and transparency of benchmarks and to align the test
   methodology with today's increasingly complex layer 7 application use
   cases.  The main
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."
   This Internet-Draft will expire on May 22, 2020.
Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
     4.2.  DUT/SUT Configuration . . . . . . . . . . . . . . . . .   5
     4.3.  Test Equipment Configuration  . . . . . . . . . . . . .   9
       4.3.1.  Client Configuration  . . . . . . . . . . . . . . .   9
       4.3.2.  Backend Server Configuration  . . . . . . . . . . .  11
       4.3.3.  Traffic Flow Definition . . . . . . . . . . . . . .  11
       4.3.4.  Traffic Load Profile  . . . . . . . . . . . . . . .  12
   5.  Test Bed Considerations . . . . . . . . . . . . . . . . . .  13
   6.  Reporting . . . . . . . . . . . . . . . . . . . . . . . . .  14
     6.1.  Key Performance Indicators  . . . . . . . . . . . . . .  15
   7.  Benchmarking Tests  . . . . . . . . . . . . . . . . . . . .  16
     7.1.  Throughput Performance With NetSecOPEN Traffic Mix  . .  16
       7.1.1.  Objective . . . . . . . . . . . . . . . . . . . . .  16
       7.1.2.  Test Setup  . . . . . . . . . . . . . . . . . . . .  17
       7.1.3.  Test Parameters . . . . . . . . . . . . . . . . . .  17
       7.1.4.  Test Procedures and Expected Results  . . . . . . .  19
     7.2.  TCP/HTTP Connections Per Second . . . . . . . . . . . .  20
       7.2.1.  Objective . . . . . . . . . . . . . . . . . . . . .  20
       7.2.2.  Test Setup  . . . . . . . . . . . . . . . . . . . .  20
       7.2.3.  Test Parameters . . . . . . . . . . . . . . . . . .  20
       7.2.4.  Test Procedures and Expected Results  . . . . . . .  22
     7.3.  HTTP Throughput . . . . . . . . . . . . . . . . . . . .  23
       7.3.1.  Objective . . . . . . . . . . . . . . . . . . . . .  23
     7.4.  TCP/HTTP Transaction Latency  . . . . . . . . . . . . .  26
       7.4.1.  Objective . . . . . . . . . . . . . . . . . . . . .  26
       7.4.2.  Test Setup  . . . . . . . . . . . . . . . . . . . .  26
       7.4.3.  Test Parameters . . . . . . . . . . . . . . . . . .  26
       7.4.4.  Test Procedures and Expected Results  . . . . . . .  28
     7.5.  Concurrent TCP/HTTP Connection Capacity . . . . . . . .  29
       7.5.1.  Objective . . . . . . . . . . . . . . . . . . . . .  29
       7.5.2.  Test Setup  . . . . . . . . . . . . . . . . . . . .  30
       7.5.3.  Test Parameters . . . . . . . . . . . . . . . . . .  30
       7.5.4.  Test Procedures and Expected Results  . . . . . . .  31
     7.6.  TCP/HTTPS Connections per second  . . . . . . . . . . .  32
       7.6.1.  Objective . . . . . . . . . . . . . . . . . . . . .  32
       7.6.2.  Test Setup  . . . . . . . . . . . . . . . . . . . .  33
       7.6.3.  Test Parameters . . . . . . . . . . . . . . . . . .  33
       7.6.4.  Test Procedures and Expected Results  . . . . . . .  35
     7.7.  HTTPS Throughput  . . . . . . . . . . . . . . . . . . .  36
       7.7.1.  Objective . . . . . . . . . . . . . . . . . . . . .  36
       7.7.2.  Test Setup  . . . . . . . . . . . . . . . . . . . .  36
       7.7.3.  Test Parameters . . . . . . . . . . . . . . . . . .  36
       7.7.4.  Test Procedures and Expected Results  . . . . . . .  39
     7.8.  HTTPS Transaction Latency . . . . . . . . . . . . . . .  40
       7.8.1.  Objective . . . . . . . . . . . . . . . . . . . . .  40
   |Application    |      x      |          |
   |Identification |             |          |
   +---------------+-------------+----------+

          Table 1: DUT/SUT Feature List

   In summary, DUT/SUT SHOULD be configured as follows:

   o  All security inspection enabled
   o  Disposition of all flows of traffic is logged - Logging to an
      external device is permissible
   o  Detection of Common Vulnerabilities and Exposures (CVE) matching
      the following characteristics when searching the National
      Vulnerability Database (NVD)

      *  Common Vulnerability Scoring System (CVSS) Version: 2

      *  CVSS V2 Metrics: AV:N/Au:N/I:C/A:C

      *  AV=Attack Vector, Au=Authentication, I=Integrity and
   The RECOMMENDED throughput values for the following classes are:

   Extra Small (XS) - supported throughput less than 1Gbit/s

   Small (S) - supported throughput less than 5Gbit/s

   Medium (M) - supported throughput greater than 5Gbit/s and less than
   10Gbit/s

   Large (L) - supported throughput greater than 10Gbit/s
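   As an illustration only (the class names and thresholds are taken
   from the list above; the function name is hypothetical, and boundary
   values of exactly 5 or 10 Gbit/s, which the text does not explicitly
   assign, are placed in the lower class here), the classification can
   be sketched as:

   ```python
   def throughput_class(gbps: float) -> str:
       """Map a DUT/SUT's supported throughput (Gbit/s) to the
       RECOMMENDED size class from the list above."""
       if gbps < 1:
           return "XS"   # Extra Small: less than 1 Gbit/s
       if gbps < 5:
           return "S"    # Small: less than 5 Gbit/s
       if gbps <= 10:
           return "M"    # Medium: between 5 and 10 Gbit/s
       return "L"        # Large: greater than 10 Gbit/s
   ```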
   The Access Control Rules (ACL) defined in Table 2 MUST be configured
   from top to bottom in the correct order as shown in the table.
   (Note: There will be differences between how security vendors
   implement ACL decision making.)  The configured ACL MUST NOT block
   the test traffic used for the benchmarking test scenarios.
   +---------------------------------------------------+---------------+
   |                                                   |    DUT/SUT    |
   |                                                   | Classification|
   |                                                   |    #rules     |
   +-----------+-----------+------------------+------------+---+---+---+
      target objective will be defined for each benchmarking test.  The
      duration for the ramp up phase MUST be configured long enough so
      that the test equipment does not overwhelm the DUT/SUT's
      supported performance metrics, namely connections per second,
      concurrent TCP connections, and application transactions per
      second.  The RECOMMENDED time duration for the ramp up phase is
      180-300 seconds.  No measurements are made in this phase.
   3.  In the sustain phase, the test equipment SHOULD continue
       generating traffic at a constant target value for a constant
       number of active client IPs.  The minimum RECOMMENDED time
       duration for the sustain phase is 300 seconds.  This is the
       phase where measurements occur.
   4.  In the ramp down/close phase, no new connections are
       established, and no measurements are made.  The time duration
       for the ramp up and ramp down phases SHOULD be the same.  The
       RECOMMENDED duration of this phase is between 180 and 300
       seconds.
   5.  The last phase is administrative and will occur when the test
       equipment merges and collates the report data.
5.  Test Bed Considerations
      same period.  Goodput result SHALL also be presented in the same
      format as throughput.
   o  URL Response time / Time to Last Byte (TTLB)

      This key performance indicator measures the minimum, average and
      maximum per-URL response time in the sustain period.  The latency
      is measured at the Client and in this case is the time duration
      between sending a GET request from the Client and receipt of the
      complete response from the server.
   o  Time to First Byte (TTFB)

      This key performance indicator measures the minimum, average and
      maximum time to first byte.  TTFB is the elapsed time between
      sending the SYN packet from the client and receiving the first
      byte of application data from the DUT/SUT.  TTFB SHOULD be
      expressed in milliseconds.
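   A minimal sketch of how a test tool might derive the two latency
   KPIs from per-connection event timestamps (the function and
   parameter names are hypothetical; real test equipment reports these
   KPIs directly):

   ```python
   def kpi_latencies_ms(syn_sent, get_sent, first_app_byte, last_byte):
       """Derive the two latency KPIs, in milliseconds, from
       per-connection event timestamps given in seconds.

       TTFB: SYN sent by client -> first byte of application data.
       TTLB: GET sent by client -> complete response received.
       """
       ttfb = (first_app_byte - syn_sent) * 1000.0
       ttlb = (last_byte - get_sent) * 1000.0
       return ttfb, ttlb
   ```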
7.  Benchmarking Tests

7.1.  Throughput Performance With NetSecOPEN Traffic Mix

7.1.1.  Objective

   Using NetSecOPEN traffic mix, determine the maximum sustainable
   throughput performance supported by the DUT/SUT (see Appendix A for
   details about the traffic mix).
   This test scenario is RECOMMENDED to be performed twice; once with
   the SSL inspection feature enabled and once with the SSL inspection
   feature disabled on the DUT/SUT.
7.1.2.  Test Setup

   Test bed setup MUST be configured as defined in Section 4.  Any test
   scenario specific test bed configuration changes MUST be documented.

7.1.3.  Test Parameters
   Traffic profile: Test scenario MUST be run with a single application
   traffic mix profile (see Appendix A for details about the traffic
   mix).  The name of the NetSecOPEN traffic mix MUST be documented.

7.1.3.4.  Test Results Validation Criteria

   The following criteria are defined as test results validation
   criteria.  Test results validation criteria MUST be monitored during
   the whole sustain phase of the traffic load profile.
   a.  Number of failed application transactions (receiving any HTTP
       response code other than 200 OK) MUST be less than 0.001% (1 out
       of 100,000 transactions) of total attempted transactions
   b.  Number of Terminated TCP connections due to unexpected TCP RST
       sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
       connections) of total initiated TCP connections
   c.  Maximum deviation (max. dev) of URL Response Time or TTLB (Time
       To Last Byte) MUST be less than X (The value for "X" will be
       finalized and updated after completion of PoC test)
       The following equation MUST be used to calculate the deviation
       of URL Response Time or TTLB
       max. dev = max((avg_latency - min_latency), (max_latency -
       avg_latency)) / (Initial latency)
       Where, the initial latency is calculated using the following
       equation.  For this calculation, the latency values (min', avg'
       and max') MUST be measured during test procedure step 1 as
       defined in Section 7.1.4.1.
       The variable latency represents URL Response Time or TTLB.
       Initial latency := min((avg' latency - min' latency) | (max'
       latency - avg' latency))
   d.  Maximum value of Time to First Byte (TTFB) MUST be less than X
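   The deviation check in criterion c can be sketched as follows.
   Function and variable names are illustrative; min'/avg'/max' are the
   baseline latencies measured in test procedure step 1:

   ```python
   def initial_latency(min_p, avg_p, max_p):
       """Baseline spread from step 1 measurements (min', avg', max')."""
       return min(avg_p - min_p, max_p - avg_p)

   def max_deviation(min_l, avg_l, max_l, init):
       """max. dev of URL Response Time or TTLB during the sustain
       phase, relative to the initial latency from step 1."""
       return max(avg_l - min_l, max_l - avg_l) / init

   # Example: baseline 10/12/15 ms, sustain phase 10/14/20 ms
   init = initial_latency(10.0, 12.0, 15.0)     # min(2, 3) = 2
   dev = max_deviation(10.0, 14.0, 20.0, init)  # max(4, 6) / 2 = 3.0
   ```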
7.1.3.5.  Measurement

   Following KPI metrics MUST be reported for this test scenario.
   Mandatory KPIs: average Throughput, TTFB (minimum, average and
   maximum), TTLB (minimum, average and maximum) and average
   Application Transactions Per Second

   Note: TTLB MUST be reported along with the min, max and avg object
   size used in the traffic profile.

   Optional KPIs: average TCP Connections Per Second and average TLS
   Handshake Rate
7.1.4.  Test Procedures and Expected Results

   The test procedures are designed to measure the throughput
   performance of the DUT/SUT during the sustain period of the traffic
   load profile.  The test procedure consists of three major steps.

7.1.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all the connected physical interfaces.  All
   c.  During the sustain phase, traffic should be forwarded at a
       constant rate

   d.  Concurrent TCP connections SHOULD be constant during steady
       state.  Any deviation of concurrent TCP connections MUST be less
       than 10%.  This confirms the DUT opens and closes TCP
       connections at almost the same rate
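   Criterion d above can be sketched as a simple check on sampled
   concurrent-connection counts (the sampling approach and names are
   illustrative, not part of the methodology):

   ```python
   def connections_steady(samples, max_dev=0.10):
       """Return True if concurrent TCP connection samples taken during
       steady state deviate from their average by less than max_dev
       (10% per criterion d)."""
       avg = sum(samples) / len(samples)
       return all(abs(s - avg) / avg < max_dev for s in samples)
   ```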
7.2.3.4.  Measurement

   Following KPI metric MUST be reported for each test iteration.

   average TCP Connections Per Second
7.2.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT during the sustain period of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different IP
   types; IPv4 only, IPv6 only and IPv4 and IPv6 mixed traffic
   distribution.
   b.  Traffic should be forwarded constantly.

   c.  Concurrent connections MUST be constant.  The deviation of
       concurrent TCP connections MUST NOT increase more than 10%
7.3.3.4.  Measurement

   The KPI metrics MUST be reported for this test scenario:

   average Throughput and average HTTP Transactions per Second
7.3.4.  Test Procedures and Expected Results

   The test procedure is designed to measure HTTP throughput of the
   DUT/SUT.  The test procedure consists of three major steps.  This
   test procedure MAY be repeated multiple times with different IPv4
   and IPv6 traffic distribution and HTTP response object sizes.

7.3.4.1.  Step 1: Test Initialization and Qualification
   e.  After ramp up the DUT MUST achieve the "Target objective"
       defined in the parameter Section 7.4.3.2 and remain in that
       state for the entire test duration (sustain phase).

7.4.3.4.  Measurement

   Following KPI metrics MUST be reported for each test scenario and
   HTTP response object sizes separately:
   TTFB (minimum, average and maximum) and TTLB (minimum, average and
   maximum)

   All KPIs are measured once the target throughput achieves the steady
   state.
7.4.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the average application
   transaction latencies or TTLB when the DUT is operating close to 50%
   of its maximum achievable throughput or connections per second.
   This test procedure CAN be repeated multiple times with different IP
   types
       of 100,000 transactions) of total attempted transactions

   b.  Number of Terminated TCP connections due to unexpected TCP RST
       sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
       connections) of total initiated TCP connections

   c.  During the sustain phase, traffic should be forwarded constantly
7.5.3.4.  Measurement

   Following KPI metric MUST be reported for this test scenario:

   average Concurrent TCP Connections
7.5.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the concurrent TCP
   connection capacity of the DUT/SUT during the sustain period of the
   traffic load profile.  The test procedure consists of three major
   steps.  This test procedure MAY be repeated multiple times with
   different IPv4 and IPv6 traffic distribution.

7.5.4.1.  Step 1: Test Initialization and Qualification
       constant rate

   d.  Concurrent TCP connections SHOULD be constant during steady
       state.  This confirms that the DUT opens and closes TCP
       connections at the same rate
7.6.3.4.  Measurement

   Following KPI metrics MUST be reported for this test scenario:

   average TCP Connections Per Second, average TLS Handshake Rate (TLS
   Handshake Rate can be measured in the test scenario using 1KB object
   size)
7.6.4.  Test Procedures and Expected Results

   The test procedure is designed to measure the TCP connections per
   second rate of the DUT/SUT during the sustain period of the traffic
   load profile.  The test procedure consists of three major steps.
   This test procedure MAY be repeated multiple times with different
   IPv4 and IPv6 traffic distribution.

7.6.4.1.  Step 1: Test Initialization and Qualification
       of 100,000 transactions) of attempted transactions.

   b.  Traffic should be forwarded constantly.

   c.  The deviation of concurrent TCP connections MUST be less than
       10%
7.7.3.4.  Measurement

   The KPI metrics MUST be reported for this test scenario:

   average Throughput and average HTTPS Transactions Per Second
7.7.4.  Test Procedures and Expected Results

   The test procedure consists of three major steps.  This test
   procedure MAY be repeated multiple times with different IPv4 and
   IPv6 traffic distribution and HTTPS response object sizes.

7.7.4.1.  Step 1: Test Initialization and Qualification

   Verify the link status of all the connected physical interfaces.  All
e.  After ramp up, the DUT MUST achieve the "Target objective"
    defined in the parameter Section 7.8.3.2 and remain in that
    state for the entire test duration (sustain phase).
7.8.3.4.  Measurement
The following KPI metrics MUST be reported for each test scenario
and HTTPS response object size separately:

   TTFB (minimum, average and maximum) and TTLB (minimum, average
   and maximum)
All KPIs are measured once the target connections per second rate
reaches the steady state.
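Aggregating the required TTFB and TTLB statistics from per-transaction samples is straightforward. The sketch below is a hedged illustration; the millisecond values are invented and the function is not defined by this document.

```python
from statistics import mean

def latency_kpis(samples: list[float]) -> dict:
    # Report minimum, average and maximum, as required above.
    return {"min": min(samples), "avg": mean(samples), "max": max(samples)}

# Invented per-transaction latency samples, in milliseconds.
ttfb = [2.1, 2.4, 3.0]     # time to first byte per transaction
ttlb = [10.0, 12.0, 14.0]  # time to last byte per transaction
print(latency_kpis(ttfb))
print(latency_kpis(ttlb))  # -> {'min': 10.0, 'avg': 12.0, 'max': 14.0}
```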
7.8.4.  Test Procedures and Expected Results
The test procedure is designed to measure average TTFB or TTLB when
the DUT is operating close to 50% of its maximum achievable
connections per second.  This test procedure can be repeated multiple
times with different IP types (IPv4 only, IPv6 only and IPv4 and IPv6
mixed traffic distribution), HTTPS response object sizes and single
and multiple transactions per connection scenarios.
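The arithmetic behind the load target above can be sketched as follows; this is illustrative only, and the 1,000,000 CPS figure is a made-up example, not a value from this document.

```python
def latency_test_objective(max_cps: int, fraction: float = 0.5) -> int:
    # The DUT is driven close to 50% of its maximum achievable
    # connections per second for this latency measurement.
    return int(max_cps * fraction)

# Example: a DUT previously measured at 1,000,000 CPS (invented value).
print(latency_test_objective(1_000_000))  # -> 500000
```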
7.8.4.1.  Step 1: Test Initialization and Qualification
Verify the link status of all the connected physical interfaces.
All interfaces are expected to be in "UP" status.
Configure the traffic load profile of the test equipment to
establish the "Initial objective" as defined in the parameters
Section 7.8.3.2.  The traffic load profile can be defined as
described in Section 4.3.4.
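A traffic load profile along the lines of Section 4.3.4 could be modeled as below. This is a minimal sketch under assumptions: the phase names follow the ramp up / sustain / ramp down pattern used in this document, but the durations and load fractions are invented.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    duration_s: int      # length of the phase in seconds (invented)
    target_load: float   # fraction of the test objective to offer

profile = [
    Phase("ramp up", 180, 1.0),    # grow load toward the objective
    Phase("sustain", 600, 1.0),    # hold the objective steady
    Phase("ramp down", 180, 0.0),  # release load
]
print(sum(p.duration_s for p in profile))  # total test duration -> 960
```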
of 100,000 transactions) of total attempted transactions
b.  Number of terminated TCP connections due to unexpected TCP RST
    sent by the DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections

c.  During the sustain phase, traffic SHOULD be forwarded constantly
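The RST threshold in item b above reduces to a simple ratio test. The sketch below is hypothetical; the counter names are stand-ins for whatever the test equipment exposes.

```python
def rst_criterion_met(rst_terminated: int, initiated: int) -> bool:
    # Connections terminated by unexpected RST from the DUT/SUT
    # MUST stay below 0.001% (1 out of 100,000) of initiated ones.
    return initiated > 0 and rst_terminated / initiated < 0.00001

print(rst_criterion_met(rst_terminated=2, initiated=1_000_000))  # within limit
```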
7.9.3.4.  Measurement
The following KPI metric MUST be reported for this test scenario:

   average Concurrent TCP Connections
7.9.4.  Test Procedures and Expected Results
The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT at the sustaining period of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distributions.
7.9.4.1.  Step 1: Test Initialization and Qualification