Internet Engineering Task Force                               L. Avramov
INTERNET-DRAFT, Intended status: Informational                    Google
Expires: December 17, 2017                                        J. Rapp
June 15, 2017                                                      VMware

                  Data Center Benchmarking Terminology
                draft-ietf-bmwg-dcbench-terminology-12

Abstract
The purpose of this informational document is to establish definitions
and describe measurement techniques for data center benchmarking, as
well as to introduce new terminology applicable to data center
performance evaluations. The purpose of this document is not to define
the test methodology, but rather to establish the important concepts
for benchmarking network switches and routers in the data center. The
terminologies are not only data center specific and can be seen as
skipping to change at page 3, line 5
   7.1. Definition . . . . . . . . . . . . . . . . . . . . . . . . 14
   7.2. Discussion . . . . . . . . . . . . . . . . . . . . . . . . 14
   7.3. Measurement Units . . . . . . . . . . . . . . . . . . . .  14
   8. Security Considerations . . . . . . . . . . . . . . . . . .  15
   9. IANA Considerations . . . . . . . . . . . . . . . . . . . .  15
   10. References . . . . . . . . . . . . . . . . . . . . . . . .  15
   10.1. Normative References . . . . . . . . . . . . . . . . . . 15
   10.2. Informative References . . . . . . . . . . . . . . . . . 16
   10.3. Acknowledgments . . . . . . . . . . . . . . . . . . . .  16
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 17
1. Introduction

Traffic patterns in the data center are not uniform and are constantly
changing. They are dictated by the nature and variety of applications
utilized in the data center. Traffic can be largely east-west flows
(server to server inside the data center) in one data center and
north-south (from outside of the data center to a server) in another,
while some may combine both. Traffic patterns can be bursty in nature
and contain many-to-one, many-to-many, or one-to-many flows. Each flow
may also be small and latency sensitive or large and throughput
sensitive while containing a mix of UDP and TCP traffic. One or more
of these may coexist in a single cluster and flow through a single
network device simultaneously. Benchmarking of network devices has
long used [RFC1242], [RFC2432], [RFC2544], [RFC2889] and [RFC3918].
These benchmarks have largely been focused on various latency
attributes and the maximum throughput of the Device Under Test being
benchmarked. These standards are good at measuring theoretical maximum
throughput, forwarding rates and latency under testing conditions, but
they do not represent real traffic patterns that may affect these
networking devices. The data center networking devices covered are
switches and routers.
This document provides a set of definitions, metrics and
terminologies, including congestion scenarios and switch buffer
analysis, and redefines basic definitions in order to represent a wide
mix of traffic conditions. The test methodologies are defined in [1].
skipping to change at page 5, line 41
MUST be measured with the FILO mechanism: FILO will include the
latency of the switch and the latency of the frame as well as the
serialization delay. It is a picture of the 'whole' latency going
through the DUT. For applications that are latency sensitive and can
function with the initial bytes of the frame, FIFO MAY be an
additional type of measurement to supplement FILO.
Not all DUTs are exclusively cut-through or store-and-forward. Data
Center DUTs are frequently store-and-forward for smaller packet sizes
and then adopt a cut-through behavior. The change of behavior happens
at specific larger packet sizes. The value of the packet size for the
behavior to change MAY be configurable depending on the DUT
manufacturer. FILO covers all scenarios: store-and-forward or
cut-through. The threshold of behavior change does not matter for
benchmarking since FILO covers both possible scenarios.
The LIFO mechanism can be used with store-and-forward switches but not
with cut-through switches, as it will provide negative latency values
for larger packet sizes because LIFO removes the serialization delay.
Therefore, this mechanism MUST NOT be used when comparing latencies of
two different DUTs.
2.3 Measurement Units

The measuring methods to use for benchmarking purposes are as follows:

1) FILO MUST be used as a measuring method, as this will include the
latency of the packet; today the application commonly needs to read
the whole packet to process the information and take an action (see
the sketch after this list).

2) FIFO MAY be used for certain applications able to process the data
as the first bits arrive (an FPGA, for example).

3) LIFO MUST NOT be used, because it subtracts the latency of the
packet, unlike all the other methods.
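
To illustrate how the three methods relate, the following minimal
Python sketch (illustrative only, not part of this terminology; the
function and timestamp names are assumptions) derives FILO, FIFO and
LIFO latency from per-frame first-bit and last-bit timestamps, and
shows why LIFO can produce a negative value on a cut-through DUT:

   # Minimal sketch: latency types from per-frame timestamps (seconds).
   def latencies(first_bit_in, last_bit_in, first_bit_out, last_bit_out):
       filo = last_bit_out - first_bit_in   # whole latency, includes serialization
       fifo = first_bit_out - first_bit_in  # initial-bytes view (cut-through)
       lifo = first_bit_out - last_bit_in   # removes serialization; can be negative
       return filo, fifo, lifo

   # Example: 1518-byte frame on 10GbE (serialization ~1.21 us), a cut-through
   # DUT that starts transmitting 1 us after the first bit arrives.
   ser = 1518 * 8 / 10e9
   filo, fifo, lifo = latencies(0.0, ser, 1e-6, 1e-6 + ser)
   print(filo, fifo, lifo)   # lifo = 1e-6 - ser < 0 in this cut-through case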
3 Jitter

3.1 Definition
Jitter in the data center context is synonymous with the common term
Delay Variation. It is derived from multiple measurements of one-way
delay, as described in RFC 3393. The mandatory definition of Delay
Variation is the PDV form from section 4.2 of [RFC5481]. When
considering a stream of packets, the minimum delay over all packets in
the stream is subtracted from the delay of each packet. This
facilitates assessment of the range of delay variation (Max - Min), or
a high percentile of PDV (99th percentile, for robustness against
outliers).
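
As an illustration of the PDV form described above, a minimal sketch
(an assumption for illustration, not a normative procedure) computing
the PDV range and its 99th percentile from one-way delay samples could
look like this:

   # Minimal sketch: PDV range and 99th percentile from one-way delays (seconds).
   import statistics

   def pdv_stats(one_way_delays):
       d_min = min(one_way_delays)
       pdv = [d - d_min for d in one_way_delays]    # per-packet PDV samples
       pdv_range = max(pdv)                         # range of delay variation (Max - Min)
       p99 = statistics.quantiles(pdv, n=100)[98]   # 99th percentile, robust to outliers
       return pdv_range, p99

   # Example with microsecond-scale delays expressed in seconds.
   print(pdv_stats([1.2e-6, 1.5e-6, 1.3e-6, 9.8e-6, 1.4e-6]))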
If First-bit to Last-bit timestamps are used for Delay measurement,
then Delay Variation MUST be measured using packets or frames of the
same size, since the definition of latency includes the serialization
time for each packet. Otherwise, if First-bit to First-bit timestamps
are used, the size restriction does not apply.
3.2 Discussion

In addition to PDV Range and/or a high percentile of PDV, Inter-Packet
Delay Variation (IPDV) as defined in section 4.1 of [RFC5481]
(differences between two consecutive packets) MAY be used for the
purpose of determining how packet spacing has changed during transfer,
for example, to see if the packet stream has become closely spaced or
"bursty". However, the Absolute Value of IPDV SHOULD NOT be used, as
this collapses the "bursty" and "dispersed" sides of the IPDV
distribution together.
3.3 Measurement Units

The measurement of delay variation is expressed in units of seconds.
skipping to change at page 7, line 37
-Type of transceivers on DUT

-Type of cables

-Length of cables

-Software name and version of traffic generator and DUT

-List of enabled features on DUT MAY be provided and is recommended
(especially the control plane protocols such as LLDP, Spanning-Tree,
etc.). A comprehensive configuration file MAY be provided to this
effect.
4.2 Discussion

Physical layer calibration is part of the end-to-end latency, which
should be taken into account while evaluating the DUT. Small
variations of the physical components of the test may impact the
latency being measured; therefore, they MUST be described when
presenting results.
4.3 Measurement Units

It is RECOMMENDED to use cables of the same type and the same length
and, when possible, from the same vendor. It is a MUST to document the
cable specifications in section 4.1 along with the test results. The
test report MUST specify whether the cable latency has been removed
from the test measurements or not. The accuracy of the traffic
generator measurement MUST be provided (this is usually a value in the
20 ns range for current test equipment).
5 Line rate

5.1 Definition

The transmit timing, or maximum transmitted data rate, is controlled
by the "transmit clock" in the DUT. The receive timing (maximum
ingress data rate) is derived from the transmit clock of the connected
interface.
skipping to change at page 9, line 18

accept frames at a rate within +/- 100 PPM to comply with the
standards.
Very few clock circuits are precisely +/- 0.0 PPM because:

1. The Ethernet standards allow a maximum of +/- 100 PPM (parts per
million) variance over time. Therefore, it is normal for the frequency
of the oscillator circuits to experience variation over time and over
a wide temperature range, among other external factors.

2. The crystals, or clock modules, usually have a specific +/- PPM
variance that is significantly better than +/- 100 PPM. Oftentimes
this is +/- 30 PPM or better in order to be considered a
"certification instrument".
When testing an Ethernet switch throughput at "line rate", any
specific switch will have a clock rate variance. If a test set is
running +1 PPM faster than a switch under test, and a sustained line
rate test is performed, a gradual increase in latency and, eventually,
packet drops can be observed as buffers fill and overflow in the
switch. Depending on how much clock variance there is between the
skipping to change at page 9, line 42

be demonstrated by setting the test set link occupancy to slightly
less than 100 percent link occupancy. Typically, 99 percent link
occupancy produces excellent low latency and no packet loss. No
Ethernet switch or router will have a transmit clock rate of exactly
+/- 0.0 PPM. Very few (if any) test sets have a clock rate that is
precisely +/- 0.0 PPM.
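
As a rough illustration of the clock variance effect described above,
the following minimal sketch (the line rate, offset and buffer size
are hypothetical values chosen only for the example) estimates how
long a sustained line-rate test can run before a small clock offset
between the test set and the DUT fills the DUT buffer:

   # Minimal sketch: time until buffer overflow caused by a PPM clock mismatch.
   def seconds_to_overflow(line_rate_bps, ppm_offset, buffer_bytes):
       excess_bps = line_rate_bps * ppm_offset / 1e6   # bits/s arriving faster than drained
       return (buffer_bytes * 8) / excess_bps

   # Example: test set +1 PPM faster than a 10GbE DUT with 2 MB of usable buffer.
   print(seconds_to_overflow(10e9, 1, 2 * 1024 * 1024))   # ~1678 s before drops begin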
Test set equipment manufacturers are well aware of the standards, and
allow a software-controlled +/- 100 PPM "offset" (clock-rate
adjustment) to compensate for normal variations in the clock speed of
DUTs. This offset adjustment allows engineers to determine the
approximate speed at which the connected device is operating, and to
verify that it is within the parameters allowed by the standards.
5.3 Measurement Units

"Line Rate" can be measured in terms of "Frame Rate":

Frame Rate = Transmit-Clock-Frequency / (Frame-Length*8 + Minimum_Gap
+ Preamble + Start-Frame Delimiter)
Minimum_Gap represents the inter-frame gap. This formula "scales up"
or "scales down" to represent 1 GB Ethernet, or 10 GB Ethernet, and so
on.
skipping to change at page 10, line 50
frame buffering memory available on a DUT. This size is expressed in B
(bytes), KB (kilobytes), MB (megabytes) or GB (gigabytes). When the
buffer size is expressed, it SHOULD be defined by a size metric stated
above. When the buffer size is expressed, an indication of the frame
MTU used for that measurement is also necessary, as well as the cos
(class of service) or dscp (differentiated services code point) value
set, as oftentimes the buffers are carved by the quality of service
implementation. Please refer to the buffer efficiency section for
further details.

Example: The Buffer Size of the DUT when sending 1518-byte frames is
18 MB.
Port Buffer Size: The port buffer size is the amount of buffer for a
single ingress port, egress port or combination of ingress and egress
buffering location for a single port. The reason for mentioning the
three locations for the port buffer is that the DUT buffering scheme
can be unknown or untested, and so knowing the buffer location helps
clarify the buffer architecture and consequently the total buffer
size. The Port Buffer Size is an informational value that MAY be
provided by the DUT vendor. It is not a value that is tested by
benchmarking. Benchmarking will be done using the Maximum Port Buffer
skipping to change at page 12, line 47

-The intensity of the microburst MAY be mentioned when a microburst
test is performed

-The cos or dscp value set during the test SHOULD be provided
6.2 Incast

6.2.1 Definition

The term Incast, very commonly utilized in the data center, refers to
the traffic pattern of many-to-one or many-to-many conversations. It
measures the number of ingress and egress ports and the level of
synchronization attributed, as defined in this section. Typically, in
the data center, it would refer to many different ingress server ports
(many) sending traffic to a common uplink (one), or to multiple
uplinks (many). This pattern is generalized for any network as many
incoming ports sending traffic to one or a few uplinks. It can also be
found in many-to-many traffic patterns.
Synchronous arrival time: When two or more frames of respective sizes
L1 and L2 arrive at their respective one or multiple ingress ports,
and there is an overlap of the arrival time for any of the bits on the
Device Under Test (DUT), then the frames L1 and L2 have synchronous
arrival times. This is called incast.
Asynchronous arrival time: Any condition not defined by synchronous
arrival time.

Percentage of synchronization: This defines the level of overlap
[amount of bits] between the frames L1, L2...Ln.
Example: Two 64-byte frames, of length L1 and L2, arrive at ingress
port 1 and port 2 of the DUT. There is an overlap of 6.4 bytes between
the two, where L1 and L2 were present at the same time on their
respective ingress ports. Therefore, the percentage of synchronization
is 10%.
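
A minimal sketch (illustrative only; the timestamp representation is
an assumption) of computing the percentage of synchronization from the
first-bit and last-bit arrival times of two equal-size frames,
reproducing the 6.4-byte / 10% example above:

   # Minimal sketch: percentage of synchronization for two equal-size frames.
   def sync_percentage(arrival_1, arrival_2, frame_bytes):
       # arrival_x = (first_bit_time, last_bit_time) in seconds on each ingress port
       overlap = min(arrival_1[1], arrival_2[1]) - max(arrival_1[0], arrival_2[0])
       overlap = max(overlap, 0.0)               # no overlap -> asynchronous arrival
       frame_time = arrival_1[1] - arrival_1[0]  # serialization time of one frame
       overlap_bytes = frame_bytes * overlap / frame_time
       return 100.0 * overlap_bytes / frame_bytes

   # Two 64-byte frames on 10GbE take ~51.2 ns each; the second frame starts 90%
   # of the way through the first, so 6.4 bytes (10%) overlap on the DUT.
   t = 64 * 8 / 10e9
   print(round(sync_percentage((0.0, t), (0.9 * t, 0.9 * t + t), 64)))   # 10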
Stateful type traffic defines packets exchanged with a stateful
protocol such as TCP.

Stateless type traffic defines packets exchanged with a stateless
protocol such as UDP.
6.2.2 Discussion

In this scenario, buffers are solicited on the DUT. In an ingress
buffering mechanism, the ingress port buffers would be solicited along
with Virtual Output Queues, when available; whereas in an egress
buffer mechanism, the egress buffer of the one outgoing port would be
used.

In either case, regardless of where the buffer memory is located on
skipping to change at page 14, line 16

be specified.

7 Application Throughput: Data Center Goodput

7.1. Definition
In Data Center Networking, a balanced network is a function of maximal
throughput 'and' minimal loss at any given time. This is defined by
the Goodput [4]. Goodput is the application-level throughput. The
definition used is a variant of the definition in [RFC2647].
Goodput is the number of bits per unit of time forwarded to the
correct destination interface of the DUT, minus any bits
retransmitted.
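
A minimal sketch (illustrative only; the counter names and values are
assumptions) of computing goodput from counters observed over a
measurement interval, per the definition above:

   # Minimal sketch: application-level throughput (goodput) over an interval.
   def goodput_bps(bits_forwarded_to_correct_dest, bits_retransmitted, interval_s):
       return (bits_forwarded_to_correct_dest - bits_retransmitted) / interval_s

   # Example: 9.2e9 bits delivered in 1 s, of which 2e8 bits were retransmissions.
   print(goodput_bps(9.2e9, 2e8, 1.0))   # 9.0e9 bits/s of goodput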
7.2. Discussion

In data center benchmarking, the goodput is a value that SHOULD be
measured. It provides a realistic idea of the usage of the available
bandwidth. A goal in data center environments is to maximize the
goodput while minimizing the loss.
7.3. Measurement Units

skipping to change at page 15, line 36
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the constraints
specified in the sections above.

The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test traffic
into a production network, or misroute traffic to the test management
network.
Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT.

Special capabilities SHOULD NOT exist in the DUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT SHOULD be identical in the lab and in production
networks.
9. IANA Considerations

NO IANA Action is requested at this time.
10. References

10.1. Normative References
[1]       Avramov L. and Rapp J., "Data Center Benchmarking
          Methodology", April 2017.

[RFC1242] Bradner, S., "Benchmarking Terminology for Network
          Interconnection Devices", RFC 1242, July 1991,
          <http://www.rfc-editor.org/info/rfc1242>

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 2544, March 1999,
          <http://www.rfc-editor.org/info/rfc2544>

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119,
          March 1997, <http://www.rfc-editor.org/info/rfc2119>

[RFC5481] Morton, A. and B. Claise, "Packet Delay Variation
          Applicability Statement", RFC 5481, March 2009,
          <http://www.rfc-editor.org/info/rfc5481>

10.2. Informative References

[RFC2889] Mandeville R. and Perser J., "Benchmarking Methodology for
          LAN Switching Devices", RFC 2889, August 2000,
          <http://www.rfc-editor.org/info/rfc2889>

[RFC3918] Stopp D. and Hickman B., "Methodology for IP Multicast
          Benchmarking", RFC 3918, October 2004,
          <http://www.rfc-editor.org/info/rfc3918>

[4]       Yanpei Chen, Rean Griffith, Junda Liu, Randy H. Katz, and
          Anthony D. Joseph, "Understanding TCP Incast Throughput
          Collapse in Datacenter Networks",
          <http://yanpeichen.com/professional/usenixLoginIncastReady.pdf>

[RFC2432] Dubray, K., "Terminology for IP Multicast Benchmarking",
          RFC 2432, DOI 10.17487/RFC2432, October 1998,
          <http://www.rfc-editor.org/info/rfc2432>

[RFC2647] Newman, D., "Benchmarking Terminology for Firewall
          Performance", RFC 2647, August 1999,
          <http://www.rfc-editor.org/info/rfc2647>
10.3. Acknowledgments

The authors would like to thank Alfred Morton, Scott Bradner, Ian Cox,
and Tim Stevenson for their reviews and feedback.
Authors' Addresses

Lucien Avramov
Google
1600 Amphitheatre Parkway