Internet Engineering Task Force                             M. Hamilton
Internet-Draft                                    BreakingPoint Systems
Intended status: Informational                                 S. Banks
Expires: January 17, 2013                                 Cisco Systems
                                                          July 16, 2012

     Benchmarking Methodology for Content-Aware Network Devices
                 draft-ietf-bmwg-ca-bench-meth-02
Abstract

This document defines a set of test scenarios and metrics that can be
used to benchmark content-aware network devices.  The scenarios in
the following document are intended to more accurately predict the
performance of these devices when subjected to dynamic traffic
patterns.  This document will operate within the constraints of the
Benchmarking Working Group charter, namely black box characterization
in a laboratory environment.
skipping to change at page 1, line 37

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 17, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents

skipping to change at page 2, line 34
3.7.4.  Other Considerations  . . . . . . . . . . . . . . . . .  9
4.  Benchmarking Tests  . . . . . . . . . . . . . . . . . . . . .  9
4.1.  Maximum Application Session Establishment Rate  . . . . . .  9
4.1.1.  Objective . . . . . . . . . . . . . . . . . . . . . . . 10
4.1.2.  Setup Parameters  . . . . . . . . . . . . . . . . . . . 10
4.1.2.1.  Application-Layer Parameters  . . . . . . . . . . . . 10
4.1.3.  Procedure . . . . . . . . . . . . . . . . . . . . . . . 10
4.1.4.  Measurement . . . . . . . . . . . . . . . . . . . . . . 10
4.1.4.1.  Maximum Application Flow Rate . . . . . . . . . . . . 10
4.1.4.2.  Application Flow Duration . . . . . . . . . . . . . . 11
4.1.4.3.  Application Efficiency  . . . . . . . . . . . . . . . 11
4.1.4.4.  Application Flow Latency  . . . . . . . . . . . . . . 11
4.2.  Application Throughput  . . . . . . . . . . . . . . . . . . 11
4.2.1.  Objective . . . . . . . . . . . . . . . . . . . . . . . 11
4.2.2.  Setup Parameters  . . . . . . . . . . . . . . . . . . . 11
4.2.2.1.  Parameters  . . . . . . . . . . . . . . . . . . . . . 11
4.2.3.  Procedure . . . . . . . . . . . . . . . . . . . . . . . 12
4.2.4.  Measurement . . . . . . . . . . . . . . . . . . . . . . 12
4.2.4.1.  Maximum Throughput  . . . . . . . . . . . . . . . . . 12
4.2.4.2.  Maximum Application Flow Rate . . . . . . . . . . . . 12
4.2.4.3.  Application Flow Duration . . . . . . . . . . . . . . 12
4.2.4.4.  Application Efficiency  . . . . . . . . . . . . . . . 12
4.2.4.5.  Packet Loss . . . . . . . . . . . . . . . . . . . . . 12
4.2.4.6.  Application Flow Latency  . . . . . . . . . . . . . . 12
4.3.  Malformed Traffic Handling  . . . . . . . . . . . . . . . . 13
4.3.1.  Objective . . . . . . . . . . . . . . . . . . . . . . . 13
4.3.2.  Setup Parameters  . . . . . . . . . . . . . . . . . . . 13
4.3.3.  Procedure . . . . . . . . . . . . . . . . . . . . . . . 13
4.3.4.  Measurement . . . . . . . . . . . . . . . . . . . . . . 13
5.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13
6.  Security Considerations . . . . . . . . . . . . . . . . . . . 13
7.  References  . . . . . . . . . . . . . . . . . . . . . . . . . 14
7.1.  Normative References  . . . . . . . . . . . . . . . . . . . 14
7.2.  Informative References  . . . . . . . . . . . . . . . . . . 15
Appendix A.  Example Traffic Mix  . . . . . . . . . . . . . . . . 15
Appendix B.  Malformed Traffic Algorithm  . . . . . . . . . . . . 17
Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 19

1.  Introduction
Content-aware and deep packet inspection (DPI) device deployments
have grown significantly in recent years.  No longer are devices
simply using Ethernet and IP headers to make forwarding decisions.
This class of device now uses application-specific data to make these
decisions.  For example, a web-application firewall (WAF) may use
search criteria upon the HTTP uniform resource identifier (URI) [1] to
decide whether an HTTP GET method may traverse the network.  In the
case of lawful/legal intercept technology, a device could use the
phone number within the Session Description Protocol [14] to determine
whether a voice-over-IP phone may be allowed to connect.  In addition
to the development of entirely new classes of devices, devices that
could historically be classified as 'stateless' or raw forwarding
devices are now performing DPI functionality.  Devices such as core
and edge routers are now being developed with DPI functionality to
make more intelligent routing and forwarding decisions.
The Benchmarking Working Group (BMWG) has historically produced
Internet Drafts and Requests for Comment that are focused
specifically on creating output metrics that are derived from a very
specific and well-defined set of input parameters that are completely
and unequivocally reproducible from test bed to test bed.  The end
goal of such methodologies is to, in the words of RFC 2544 [2],
reduce "specsmanship" in the industry and hold vendors accountable
for performance claims.
The end goal of this methodology is to generate performance metrics
in a lab environment that will closely relate to actual observed
performance on production networks.  By utilizing dynamic traffic
patterns relevant to modern networks, this methodology should be able
to closely tie laboratory and production metrics.  It should be
further noted that any metrics acquired from production networks
SHOULD be captured according to the policies and procedures of the
IPPM or PMOL working groups.
An explicit non-goal of this document is to replace existing
methodology/terminology pairs such as RFC 2544 [2]/RFC 1242 [3] or
RFC 3511 [4]/RFC 2647 [5].  The explicit goal of this document is to
create a methodology more suited for modern devices while
complementing the data acquired using existing BMWG methodologies.
This document does not assume completely repeatable input stimulus.

skipping to change at page 7, line 49
ceiling of the device, or if it is actually being limited by one of
the other metrics.  If we do the appropriate math, 10,000 flows per
second, with each flow at 640 total bytes, means that we are achieving
an aggregate bitrate of roughly 49 Mbps.  This is dramatically less
than the 1 gigabit physical link we are using.  We can conclude that
10,000 flows per second is in fact the performance limit of the
device.
If we change the example slightly and increase the size of each
datagram to 1312 bytes, then it becomes necessary to recompute the
load.  Assuming the same observed DUT limitation of 10,000 flows per
second, it must be ensured that this is an artifact of the DUT, and
not of physical limitations.  For each flow, we'll require 104,960
bits.  10,000 flows per second implies a throughput of roughly 1 Gbps.
At this point, we cannot definitively answer whether the DUT is
actually limited to 10,000 flows per second.  If we are able to
modify the scenario and utilize 10 Gigabit interfaces, then perhaps
the flow-per-second ceiling will be reached at a higher number than
10,000.
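The arithmetic in these two examples can be sketched as a quick, non-normative sanity check.  The ten-datagram flow size in the second case is an inference from the draft's 104,960-bit figure (10 x 1312 bytes x 8 bits), and the "roughly 49 Mbps" wording corresponds to dividing by 1024-based units:

```python
def offered_load_bps(flows_per_second, bytes_per_flow):
    """Aggregate offered load (bits/s) implied by a flow rate and a
    total L7 flow size in bytes."""
    return flows_per_second * bytes_per_flow * 8

# Example 1: 10,000 flows/s at 640 total bytes per flow.
assert offered_load_bps(10_000, 640) == 51_200_000  # ~48.8 "Mbps" 1024-based
# Far below a 1 Gbps link: the 10,000 flow/s ceiling is a DUT limit.

# Example 2: ten 1312-byte datagrams per flow -> 104,960 bits per flow.
assert offered_load_bps(10_000, 10 * 1312) == 1_049_600_000  # ~1 Gbps
# At link capacity: the ceiling may be a physical artifact, not the DUT.
```

If the computed load approaches the physical link rate, the observed flow-rate ceiling cannot be attributed to the DUT without retesting on faster interfaces.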
This example illustrates why a user of this methodology SHOULD

skipping to change at page 9, line 27
The IETF has historically provided guidance and information on TCP
stack considerations.  This methodology is strictly focused on
performance metrics at layers above 4, and thus does not specifically
define any TCP stack configuration parameters of either the tester or
the DUTs.  The TCP configuration of the tester MUST remain constant
across all DUTs in order to ensure comparable results.  While the
following list of references is not exhaustive, each document
contains a relevant discussion on TCP stack considerations.

The general IETF TCP roadmap is defined in RFC 4614 [11], and
congestion control algorithms are discussed in Section 2 of RFC 3148
[12], with even more detailed references.  TCP receive and congestion
window sizes are discussed in detail in RFC 6349 [13].
3.7.4.  Other Considerations

Various content-aware devices will have widely varying feature sets.
In the interest of representative test results, the DUT features that
will likely be enabled in the final deployment SHOULD be used.  This
methodology is not intended to advise on which features should be
enabled, but to suggest using actual deployment configurations.

4.  Benchmarking Tests

skipping to change at page 10, line 26

For each application protocol in use during the test run, the table
provided in Section 3.5 SHOULD be published.

4.1.3.  Procedure
The test SHOULD generate application network traffic that meets the
conditions of Section 3.3.  The traffic pattern SHOULD begin with an
application flow rate of 10% of the expected maximum.  The test
SHOULD be configured to increase the attempt rate in units of 10%, up
through 110% of the expected maximum.  In the case where the expected
maximum is limited by physical link rate, as discovered through
Appendix A, the maximum rate attempted will be 100% of the expected
maximum, or "wire-speed performance".  The duration of each loading
phase SHOULD be at least 30 seconds.  This test MAY be repeated, with
each subsequent iteration beginning at 5% of the expected maximum and
increasing the session establishment rate to 110% of the maximum
observed from the previous test run.

This procedure MAY be repeated any reasonable number of times, with
the results being averaged together.
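The loading schedule above can be sketched as follows.  This is an illustration only; the draft does not mandate any particular tooling, and the function name and parameters are hypothetical:

```python
def ramp_schedule(expected_max, link_limited=False, start_pct=10, step_pct=10):
    """Per-phase attempt rates: start_pct% up through 110% of the
    expected maximum, capped at 100% ("wire-speed performance") when
    the expected maximum is already the physical link rate."""
    top_pct = 100 if link_limited else 110
    return [expected_max * pct // 100
            for pct in range(start_pct, top_pct + 1, step_pct)]

# e.g. an expected maximum of 10,000 flows/s, not link limited:
phases = ramp_schedule(10_000)
assert phases[0] == 1_000 and phases[-1] == 11_000 and len(phases) == 11
```

Each returned rate would then be offered for at least 30 seconds, per the procedure above.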
4.1.4.  Measurement

The following metrics MAY be determined from this test, and SHOULD be
observed for each application protocol within the traffic mix:

4.1.4.1.  Maximum Application Flow Rate

The test tool SHOULD report the maximum rate at which application
flows were completed, as defined by RFC 2647 [5], Section 3.7.  This
rate SHOULD be reported individually for each application protocol
present within the traffic mix.

4.1.4.2.  Application Flow Duration

The test tool SHOULD report the minimum, maximum and average
application duration, as defined by RFC 2647 [5], Section 3.9.  This
duration SHOULD be reported individually for each application
protocol present within the traffic mix.
4.1.4.3.  Application Efficiency

The test tool SHOULD report the application efficiency, as similarly
defined for TCP by RFC 6349 [13].
                    Transmitted Bytes - Retransmitted Bytes
App Efficiency % =  --------------------------------------- X 100
                              Transmitted Bytes

        Figure 2: Application Efficiency Percent Calculation
Note that an efficiency of less than 100% does not necessarily imply
noticeably degraded performance, since certain applications utilize
algorithms to maintain a quality user experience in the face of data
loss.
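As a non-normative illustration of Figure 2 (the counter names here are hypothetical, not mandated by the draft):

```python
def app_efficiency_pct(transmitted_bytes, retransmitted_bytes):
    """Application Efficiency % per Figure 2: the fraction of
    transmitted bytes that did not have to be retransmitted."""
    if transmitted_bytes == 0:
        return 0.0
    return (transmitted_bytes - retransmitted_bytes) / transmitted_bytes * 100

# e.g. 256 KiB retransmitted out of 1 MiB transmitted:
assert app_efficiency_pct(2**20, 2**18) == 75.0
```

A flow with no retransmissions reports 100%, matching the note above that values below 100% need not imply user-visible degradation.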
4.1.4.4.  Application Flow Latency

The test tool SHOULD report the minimum, maximum and average amount
of time an application flow member takes to traverse the DUT, as
defined by RFC 1242 [3], Section 3.8.  This value SHOULD be reported
individually for each application protocol present within the traffic
mix.
4.2.  Application Throughput

4.2.1.  Objective

To determine the maximum rate at which a device is able to forward
bits when using application flows as defined in the previous
sections.
skipping to change at page 12, line 22

4.2.4.  Measurement

The following metrics MAY be determined from this test, and SHOULD be
observed for each application protocol within the traffic mix:

4.2.4.1.  Maximum Throughput

The test tool SHOULD report the minimum, maximum and average
application throughput.

4.2.4.2.  Maximum Application Flow Rate
The test tool SHOULD report the maximum rate at which application
flows were completed, as defined by RFC 2647 [5], Section 3.7.  This
rate SHOULD be reported individually for each application protocol
present within the traffic mix.

4.2.4.3.  Application Flow Duration
The test tool SHOULD report the minimum, maximum and average
application duration, as defined by RFC 2647 [5], Section 3.9.  This
duration SHOULD be reported individually for each application
protocol present within the traffic mix.
4.2.4.4. Application Efficiency
The test tool SHOULD report the application efficiency as defined in
Section 4.1.4.3.
4.2.4.5.  Packet Loss

The test tool SHOULD report the number of packets lost or dropped
from source to destination.
4.2.4.6.  Application Flow Latency

The test tool SHOULD report the minimum, maximum and average amount
of time an application flow member takes to traverse the DUT, as
defined by RFC 1242 [3], Section 3.13.  This value SHOULD be reported
individually for each application protocol present within the traffic
mix.
4.3.  Malformed Traffic Handling

4.3.1.  Objective

To determine the effects on performance and stability that malformed
traffic may have on the DUT.

skipping to change at page 13, line 42

For each protocol present in the traffic mix, the metrics specified
by Section 4.1.4 and Section 4.2.4 MAY be determined.  This data may
be used to ascertain the effects of fuzzed traffic on the DUT.

5.  IANA Considerations

This memo includes no request to IANA.
All drafts are required to have an IANA considerations section (see All drafts are required to have an IANA considerations section (see
the update of RFC 2434 [14] for a guide). If the draft does not the update of RFC 2434 [15] for a guide). If the draft does not
require IANA to do anything, the section contains an explicit require IANA to do anything, the section contains an explicit
statement that this is the case (as above). If there are no statement that this is the case (as above). If there are no
requirements for IANA, the section will be removed during conversion requirements for IANA, the section will be removed during conversion
into an RFC by the RFC Editor. into an RFC by the RFC Editor.
6.  Security Considerations

Benchmarking activities as described in this memo are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the other constraints

skipping to change at page 15, line 5

      "IPv6 Benchmarking Methodology for Network Interconnect
      Devices", RFC 5180, May 2008.

[9]   Brownlee, N., Mills, C., and G. Ruth, "Traffic Flow
      Measurement: Architecture", RFC 2722, October 1999.

[10]  Rekhter, Y., Moskowitz, R., Karrenberg, D., Groot, G., and E.
      Lear, "Address Allocation for Private Internets", BCP 5,
      RFC 1918, February 1996.
[11]  Duke, M., Braden, R., Eddy, W., and E. Blanton, "A Roadmap for
      Transmission Control Protocol (TCP) Specification Documents",
      RFC 4614, September 2006.

[12]  Mathis, M. and M. Allman, "A Framework for Defining Empirical
      Bulk Transfer Capacity Metrics", RFC 3148, July 2001.

[13]  Constantine, B., Forget, G., Geib, R., and R. Schrage,
      "Framework for TCP Throughput Testing", RFC 6349, August 2011.

7.2.  Informative References

[14]  Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
      Description Protocol", RFC 4566, July 2006.

[15]  Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
      Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.
Appendix A.  Example Traffic Mix

This appendix shows an example case of a protocol mix that may be
used with this methodology.  This mix closely represents the research
published by Sandvine in their biannual report for the first half of
2012 on North American fixed access service provider networks.
+------------+------------------+--------------------+--------+
| Direction | Application Flow | Options | Value |
+------------+------------------+--------------------+--------+
| Upstream | BitTorrent | | |
| | | Avg Flow Size (L7) | 512 MB |
| | | Flow Percentage | 44.4% |
| | HTTP | | |
| | | Avg Flow Size (L7) | 128 kB |
| | | Flow Percentage | 7.3% |
| | Skype | | |
| | | Avg Flow Size (L7) | 8 MB |
| | | Flow Percentage | 4.9% |
| | SSL/TLS | | |
| | | Avg Flow Size (L7) | 128 kB |
| | | Flow Percentage | 3.2% |
| | Netflix | | |
| | | Avg Flow Size (L7) | 500 kB |
| | | Flow Percentage | 3.1% |
| | PPStream | | |
| | | Avg Flow Size (L7) | 500 MB |
| | | Flow Percentage | 2.2% |
| | YouTube | | |
| | | Avg Flow Size (L7) | 4 MB |
| | | Flow Percentage | 1.9% |
| | Facebook | | |
| | | Avg Flow Size (L7) | 2 MB |
| | | Flow Percentage | 1.9% |
| | Teredo | | |
| | | Avg Flow Size (L7) | 500 MB |
| | | Flow Percentage | 1.2% |
| | Apple iMessage | | |
| | | Avg Flow Size (L7) | 40 kB |
| | | Flow Percentage | 1.1% |
| | Bulk TCP | | |
| | | Avg Flow Size (L7) | 128 kB |
| | | Flow Percentage | 28.8% |
| Downstream | Netflix | | |
| | | Avg Flow Size (L7) | 512 MB |
| | | Flow Percentage | 32.9% |
| | YouTube | | |
| | | Avg Flow Size (L7) | 5 MB |
| | | Flow Percentage | 13.8% |
| | HTTP | | |
| | | Avg Flow Size (L7) | 1 MB |
| | | Flow Percentage | 12.1% |
| | BitTorrent | | |
| | | Avg Flow Size (L7) | 500 MB |
| | | Flow Percentage | 6.3% |
| | iTunes | | |
| | | Avg Flow Size (L7) | 32 MB |
| | | Flow Percentage | 3.8% |
| | Flash Video | | |
| | | Avg Flow Size (L7) | 100 MB |
| | | Flow Percentage | 2.6% |
| | MPEG | | |
| | | Avg Flow Size (L7) | 100 MB |
| | | Flow Percentage | 2.0% |
| | RTMP | | |
| | | Avg Flow Size (L7) | 50 MB |
| | | Flow Percentage | 2.0% |
| | Hulu | | |
| | | Avg Flow Size (L7) | 300 MB |
| | | Flow Percentage | 1.8% |
| | SSL/TLS | | |
| | | Avg Flow Size (L7) | 256 kB |
| | | Flow Percentage | 1.6% |
| | Bulk TCP | | |
| | | Avg Flow Size (L7) | 500 kB |
| | | Flow Percentage | 21.1% |
+------------+------------------+--------------------+--------+
Table 1: Example Traffic Pattern
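A mix such as Table 1 can be driven by representing each row as a weighted flow template and sampling applications in proportion to their flow percentages.  The sketch below is illustrative only; it uses a hypothetical subset of the downstream rows, and the draft does not prescribe any sampling mechanism:

```python
import random

# Hypothetical subset of the downstream mix from Table 1:
# (application, average L7 flow size in bytes, flow percentage)
DOWNSTREAM_MIX = [
    ("Netflix",    512 * 2**20, 32.9),
    ("YouTube",      5 * 2**20, 13.8),
    ("HTTP",         1 * 2**20, 12.1),
    ("BitTorrent", 500 * 2**20,  6.3),
]

def sample_flows(mix, n, rng=None):
    """Draw n application names with probability proportional to each
    entry's flow percentage (weights need not sum to 100)."""
    rng = rng or random.Random()
    apps = [name for name, _size, _pct in mix]
    weights = [pct for _name, _size, pct in mix]
    return rng.choices(apps, weights=weights, k=n)
```

A test tool would then open one flow of the template's average size for each sampled application, preserving the relative flow percentages over a long run.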
Appendix B.  Malformed Traffic Algorithm

Each application flow will be broken into multiple transport
segments, IP packets, and Ethernet frames.  The malformed traffic
algorithm looks very similar to the IP Stack Integrity Checker
project at http://isic.sourceforge.net.
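An ISIC-style corruption pass can be sketched as below.  This is a non-normative illustration, not the draft's exact algorithm: most frames are left intact, and a configured percentage have a single random bit flipped.

```python
import random

def maybe_corrupt(frame: bytearray, percent: float, rng=None) -> bytearray:
    """With probability percent/100, flip one random bit in the frame;
    otherwise leave it untouched."""
    rng = rng or random.Random()
    if frame and rng.random() * 100 < percent:
        i = rng.randrange(len(frame))        # pick a random byte
        frame[i] ^= 1 << rng.randrange(8)    # flip one of its bits
    return frame
```

At percent=0 no frame is ever altered, and at percent=100 every frame has exactly one bit flipped, so the malformation rate can be swept as a test parameter.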
The algorithm is very simple and starts by defining each of the