Internet Engineering Task Force                               L. Avramov
INTERNET-DRAFT, Intended status: Informational                    Google
Expires: December 23, 2017                                       J. Rapp
June 21, 2017                                                     VMware

                  Data Center Benchmarking Terminology
                 draft-ietf-bmwg-dcbench-terminology-17
Abstract
The purpose of this informational document is to establish definitions
and describe measurement techniques for data center benchmarking, as
well as to introduce new terminology applicable to performance
evaluations of data center network equipment. This document establishes
the important concepts for benchmarking network switches and routers in
the data center and is a prerequisite to the test methodology
publication [1]. Many of these terms and methods may be applicable to
skipping to change at page 2, line 43
6.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . 11
6.1.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . 12
6.1.3 Measurement Units . . . . . . . . . . . . . . . . . . . 12
6.2 Incast . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
6.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . 13
6.2.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . 14
6.2.3 Measurement Units . . . . . . . . . . . . . . . . . . . 14
7 Application Throughput: Data Center Goodput . . . . . . . . . . 14
7.1. Definition . . . . . . . . . . . . . . . . . . . . . . . . 14
7.2. Discussion . . . . . . . . . . . . . . . . . . . . . . . . 14
7.3. Measurement Units . . . . . . . . . . . . . . . . . . . . . 15
8. Security Considerations . . . . . . . . . . . . . . . . . . . 15
9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 16
10. References . . . . . . . . . . . . . . . . . . . . . . . . . 16
10.1. Normative References . . . . . . . . . . . . . . . . . . 16
10.2. Informative References . . . . . . . . . . . . . . . . . 16
10.3. Acknowledgments . . . . . . . . . . . . . . . . . . . . . 17
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 17
1. Introduction
skipping to change at page 3, line 32
throughput, forwarding rates and latency under testing conditions,
but they do not represent real traffic patterns that may affect these
networking devices. The data center networking devices covered are
switches and routers.

Currently, typical data center networking devices are characterized
by:

-High port density (48 ports or more)

-High speed (up to 100 Gb/s currently per port)

-High throughput (line rate on all ports for Layer 2 and/or Layer 3)

-Low latency (in the microsecond or nanosecond range)

-Low amount of buffer (in the MB range per networking device)

-Layer 2 and Layer 3 forwarding capability (Layer 3 not mandatory)
This document defines a set of definitions, metrics and
terminologies, including congestion scenarios and switch buffer
analysis, and redefines basic definitions in order to represent a
wide mix of traffic conditions. The test methodologies are defined
in [1].
1.1. Requirements Language
skipping to change at page 4, line 27
Discussion: A brief discussion about the term, its application and
any restrictions on measurement procedures.

Measurement Units: Methodology for the measurement and units used to
report measurements of this term, if applicable.
2. Latency
2.1. Definition
Latency is the amount of time it takes a frame to transit the
Device Under Test (DUT). Latency is measured in units of time
(seconds, milliseconds, microseconds and so on). The purpose of
measuring latency is to understand the impact of adding a device in
the communication path.
The Latency interval can be assessed between different combinations
of events, regardless of the type of switching device (bit forwarding
aka cut-through, or store-and-forward type of device). [RFC1242]
defined Latency differently for each of these types of devices.
Traditionally the latency measurement definitions are:
FILO (First In Last Out)
skipping to change at page 6, line 47
3) LIFO MUST NOT be used, because it subtracts the latency of the
packet; unlike all the other methods.
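The traditional reference points can be sketched in a few lines of code; the function name, timestamps and values below are illustrative assumptions, not from the draft:

```python
# Sketch of the traditional latency reference points, assuming four
# timestamps are captured as a frame crosses the DUT. All names and
# values here are illustrative only.

def latency(t_first_in, t_last_in, t_first_out, t_last_out):
    """Latency per each traditional measurement definition."""
    return {
        "FILO": t_last_out - t_first_in,   # First In Last Out
        "FIFO": t_first_out - t_first_in,  # First In First Out
        "LILO": t_last_out - t_last_in,    # Last In Last Out
        "LIFO": t_first_out - t_last_in,   # Last In First Out: subtracts the
                                           # frame's serialization interval,
                                           # hence MUST NOT be used (point 3)
    }

# Store-and-forward example, times in microseconds: a 1518B frame at
# 10 Gb/s serializes in roughly 1.2 us.
vals = latency(0.0, 1.2, 2.2, 3.4)
```

In this sketch, LIFO (1.0 us) is smaller than the other three values because the frame's serialization interval is subtracted.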
3 Jitter
3.1 Definition
Jitter in the data center context is synonymous with the common term
Delay Variation. It is derived from multiple measurements of one-way
delay, as described in RFC 3393. The mandatory definition of Delay
Variation is the Packet Delay Variation (PDV) from section 4.2 of
[RFC5481]. When considering a stream of packets, the minimum delay
over all packets in the stream is subtracted from the delay of each
packet. This facilitates assessment of the range of delay variation
(Max - Min), or a high percentile of PDV (the 99th percentile, for
robustness against outliers).
When First-bit to Last-bit timestamps are used for Delay measurement,
then Delay Variation MUST be measured using packets or frames of the
same size, since the definition of latency includes the serialization
time for each packet. Otherwise, if using First-bit to First-bit, the
size restriction does not apply.
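The PDV computation just described can be sketched briefly; the delay samples and function name are illustrative, and the nearest-rank percentile used here is one reasonable choice among several:

```python
# Sketch of the PDV computation above: subtract the minimum one-way
# delay from every delay in the stream, then report the range and a
# high percentile. The sample delays are made up for illustration.

def pdv_stats(delays_us, percentile=99):
    """Return (range, high percentile) of PDV per section 4.2 of RFC 5481."""
    d_min = min(delays_us)
    pdv = [d - d_min for d in delays_us]   # each delay minus the minimum delay
    pdv_sorted = sorted(pdv)
    # nearest-rank percentile as a simple, robust estimator
    idx = max(0, round(percentile / 100 * len(pdv_sorted)) - 1)
    return max(pdv) - min(pdv), pdv_sorted[idx]

delays = [10.2, 10.4, 10.1, 11.0, 10.3]    # one-way delays in microseconds
pdv_range, p99 = pdv_stats(delays)
```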
3.2 Discussion
In addition to PDV Range and/or a high percentile of PDV, Inter-
Packet Delay Variation (IPDV) as defined in section 4.1 of [RFC5481]
(differences between two consecutive packets) MAY be used for the
skipping to change at page 11, line 22
frame buffering memory available on a DUT. This size is expressed in
B (bytes), KB (kilobytes), MB (megabytes) or GB (gigabytes). When the
buffer size is expressed it SHOULD be defined by a size metric stated
above. When the buffer size is expressed, an indication of the frame
MTU used for that measurement is also necessary, as well as the CoS
(Class of Service) or DSCP (Differentiated Services Code Point) value
set, as oftentimes the buffers are carved by the quality of service
implementation. Please refer to the buffer efficiency section for
further details.
Example: Buffer Size of DUT when sending 1518 byte frames is 18 MB.
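As a minimal sketch of the reporting requirement above, a buffer-size measurement might be recorded together with the frame MTU and the CoS/DSCP marking in effect; all field names here are hypothetical:

```python
# Hypothetical record of a buffer-size measurement, carrying the
# frame MTU and CoS/DSCP values that the text above requires to be
# reported alongside the size. Field names are illustrative only.

buffer_size_report = {
    "buffer_size_bytes": 18 * 10**6,  # 18 MB, as in the example above
    "frame_mtu_bytes": 1518,          # frame size used for the measurement
    "cos": 0,                         # class of service value set
    "dscp": 0,                        # differentiated services code point set
}
```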
Port Buffer Size: The port buffer size is the amount of buffer for a
single ingress port, egress port or combination of ingress and egress
buffering location for a single port. The reason for mentioning the
three locations for the port buffer is because the DUT buffering
scheme can be unknown or untested, and so knowing the buffer location
helps clarify the buffer architecture and consequently the total
buffer size. The Port Buffer Size is an informational value that MAY
be provided from the DUT vendor. It is not a value that is tested by
benchmarking. Benchmarking will be done using the Maximum Port Buffer
skipping to change at page 14, line 30
It is a MUST to measure the number of ingress and egress ports. It is
a MUST to have a non-null percentage of synchronization, which MUST
be specified.
7 Application Throughput: Data Center Goodput
7.1. Definition
In Data Center Networking, a balanced network is a function of
maximal throughput and minimal loss at any given time. This is
captured by the Goodput [4]. Goodput is the application-level
throughput. For standard TCP applications, a very small loss can have
a dramatic effect on application throughput. [RFC2647] has a
definition of Goodput; the definition in this publication is a
variant of it.
Goodput is the number of bits per unit of time forwarded to the
correct destination interface of the DUT, minus any bits
retransmitted.
7.2. Discussion
In data center benchmarking, the goodput is a value that SHOULD be
measured. It provides a realistic idea of the usage of the available
bandwidth. A goal in data center environments is to maximize the
goodput while minimizing the loss.
7.3. Measurement Units
The Goodput, G, is then measured by the following formula:

G = (S/F) x V bytes per second

-S represents the payload bytes, which does not include packet or TCP
headers

-F is the frame size

-V is the speed of the media in bytes per second
Example: A TCP file transfer over HTTP protocol on a 10Gb/s media.
The file cannot be transferred over Ethernet as a single continuous
stream. It must be broken down into individual frames of 1500B when
the standard MTU (Maximum Transmission Unit) is used. Each packet
requires 20B of IP header information and 20B of TCP header
information; therefore 1460B are available per packet for the file
transfer. Linux based systems are further limited to 1448B as they
also carry a 12B timestamp. Finally, the data is transmitted in this
example over Ethernet which adds a 26B overhead per packet.
G = 1460/1526 x 10 Gbit/s, which is 9.567 Gbit per second or 1.196 GB
per second.
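The example arithmetic above can be checked with a short calculation; the function name is illustrative, while the 1460B payload, the 1526B on-the-wire frame (1500B MTU plus 26B Ethernet overhead) and the 10 Gb/s media speed come from the text:

```python
# G = (S/F) x V, applied to the worked example above. The payload of
# 1460B, frame of 1526B and 10 Gb/s media are taken from the text;
# the function name is illustrative only.

def goodput_bits_per_s(payload_bytes, frame_bytes, media_bits_per_s):
    """Application-level throughput per the formula above."""
    return payload_bytes / frame_bytes * media_bits_per_s

g = goodput_bits_per_s(1460, 1526, 10e9)  # media speed: 10 Gb/s
g_gbit = g / 1e9        # roughly 9.567 Gbit/s
g_gbyte = g / 8 / 1e9   # roughly 1.196 GB/s
```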
Please note: This example does not take into consideration the
additional Ethernet overhead, such as the interframe gap (a minimum
of 96 bit times), nor collisions (which have a variable impact,
depending on the network load).
When conducting Goodput measurements, please document the following
information in addition to that of section 4.1:
-The TCP Stack used

-OS Versions

-NIC firmware version and model
For example, Windows TCP stacks and different Linux versions can
influence TCP-based test results.