Benchmarking Methodology Working Group                         C. Davids
Internet-Draft                          Illinois Institute of Technology
Intended status: Informational                                V. Gurbani
Expires: July 10, 2013                                 Bell Laboratories,
                                                           Alcatel-Lucent
                                                              S. Poretsky
                                                     Allot Communications
                                                          January 6, 2013

     Terminology for Benchmarking Session Initiation Protocol (SIP)
                           Networking Devices
                   draft-ietf-bmwg-sip-bench-term-07

Abstract

This document provides a terminology for benchmarking the SIP
performance of networking devices.  The term performance in this
context means the capacity of the device- or system-under-test to
process SIP messages.  Terms are included for test components, test
setup parameters, and performance benchmark metrics for black-box
benchmarking of SIP networking devices.  The performance benchmark
metrics are obtained for the SIP signaling plane only.  The terms are

skipping to change at page 2, line 4

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on July 10, 2013.

Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as

skipping to change at page 8, line 34

   REGISTER and INVITE requests are challenged is a condition of test
   which will be recorded along with other such parameters which may
   impact the SIP performance of the device or system under test.

o  Re-INVITE requests are not considered in scope of this work item
   since the benchmarks for INVITEs are based on the dialog created
   by the INVITE and not on the transactions that take place within
   that dialog.

o  Only session establishment is considered for the performance
   benchmarks.  Session disconnect is not considered in the scope of
   this work item.  This is because our goal is to determine the
   maximum capacity of the device or system under test, that is, the
   number of simultaneous SIP sessions that the device or system can
   support.  It is true that there are BYE requests being created
   during the test process.  These transactions do contribute to the
   load on the device or system under test and thus are accounted for
   in the metric we derive.  We do not seek a separate metric for the
   number of BYE transactions a device or system can support.

o  SIP Overload [RFC6357] is within the scope of this work item.  We
   test to failure and then can continue to observe and record the
   behavior of the system after failures are recorded.  The cause of
   failure is not within the scope of this work.  We note the failure
   and may continue to test until a different failure or condition is
   encountered.  Considerations on how to handle overload are
   deferred to work progressing in the SOC working group
   [I-D.ietf-soc-overload-control].  Vendors are, of course, free to
   implement their specific overload control behavior as the expected
   test outcome if it is different from the IETF recommendations.
   However, such behavior MUST be documented and interpreted
   appropriately across multiple vendor implementations.  This will
   make it more meaningful to compare the performance of different
   SIP overload implementations.  (A non-normative sketch of this
   test-to-failure procedure follows this list.)

o  IMS-specific scenarios are not considered, but test cases can be
   applied with 3GPP-specific SIP signaling and the P-CSCF as a DUT.

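The following non-normative sketch (in Python) illustrates the
test-to-failure procedure referred to above: the offered load is
increased in steps, the load at which the first failure is observed
is noted, and testing continues so that post-failure behavior can be
recorded.  The function send_session_attempts() is a hypothetical
placeholder for the emulated agent's signaling engine and is not
defined by this document.

   def send_session_attempts(rate, duration):
       """Hypothetical EA hook: offer `rate` Session Attempts per
       second for `duration` seconds and return the number of failed
       attempts."""
       raise NotImplementedError("supplied by the emulated agent")

   def ramp_to_failure(start_rate, step, step_duration, max_rate):
       """Increase the offered load step by step.  Note the load at
       which the first failure is seen, but keep testing and recording
       so that behavior after the failure can also be observed."""
       observations = []
       first_failure_rate = None
       rate = start_rate
       while rate <= max_rate:
           failures = send_session_attempts(rate, step_duration)
           observations.append((rate, failures))
           if failures and first_failure_rate is None:
               first_failure_rate = rate  # note the failure, keep going
           rate += step
       return first_failure_rate, observations
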
2.2.  Benchmarking Models

This section shows ten models to be used when benchmarking the SIP
performance of a networking device.  Figure 1 shows the configuration
needed to benchmark the tester itself.  This model will be used to
establish the limitations of the test apparatus.

+--------+      Signaling request       +--------+
|        +----------------------------->|        |
| Tester |                              | Tester |
|   EA   |      Signaling response      |   EA   |
|        |<-----------------------------+        |
+--------+                              +--------+
    /|\                                     /|\
     |                 Media                 |
     +=======================================+

   Figure 1: Baseline performance of the Emulated Agent without a DUT
                                 present

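As a rough, non-normative illustration of the Figure 1 arrangement,
the following Python sketch has the tester's emulated agents exchange
a placeholder request and response over a loopback socket and reports
the rate the test apparatus itself sustains.  The messages are
simplified placeholders rather than complete SIP messages, and the
loopback transport stands in for whatever interface the tester
actually uses.

   import socket
   import time

   REQUEST = b"OPTIONS sip:ea@127.0.0.1 SIP/2.0\r\n\r\n"   # placeholder
   RESPONSE = b"SIP/2.0 200 OK\r\n\r\n"                     # placeholder

   def run_baseline(attempts=1000):
       # One EA plays the server role, the other the client role; both
       # belong to the tester, so the result bounds the tester itself.
       uas = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       uas.bind(("127.0.0.1", 0))
       uac = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       start = time.monotonic()
       for _ in range(attempts):
           uac.sendto(REQUEST, uas.getsockname())
           data, addr = uas.recvfrom(2048)    # EA absorbs the request
           uas.sendto(RESPONSE, addr)         # EA generates the response
           uac.recvfrom(2048)                 # EA absorbs the response
       elapsed = time.monotonic() - start
       return attempts / elapsed              # attempts per second

   if __name__ == "__main__":
       print("baseline attempts/second:", run_baseline())
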
Figure 2 shows the DUT playing the role of a user agent client (UAC),
initiating requests and absorbing responses.  This model can be used
to baseline the performance of the DUT acting as a UAC without
associated media.

+--------+      Signaling request       +--------+
|        +----------------------------->|        |
|  DUT   |                              | Tester |
|        |      Signaling response      |   EA   |
|        |<-----------------------------+        |
+--------+                              +--------+

    Figure 2: Baseline performance for DUT acting as a user agent
                     client without associated media

Figure 3 shows the DUT playing the role of a user agent server (UAS),
absorbing requests and sending responses.  This model can be used to
baseline the performance of the DUT acting as a UAS without
associated media.

+--------+      Signaling request       +--------+
|        +----------------------------->|        |
| Tester |                              |  DUT   |
|   EA   |           Response           |        |
|        |<-----------------------------+        |
+--------+                              +--------+

    Figure 3: Baseline performance for DUT acting as a user agent
                     server without associated media

Figure 4 shows the DUT playing the role of a user agent client (UAC),
initiating requests and absorbing responses.  This model can be used
to baseline the performance of the DUT acting as a UAC with
associated media.

+--------+      Signaling request       +--------+
|        +----------------------------->|        |
|  DUT   |                              | Tester |
|        |      Signaling response      |  (EA)  |
|        |<-----------------------------+        |
|        |<============ Media =========>|        |
+--------+                              +--------+

    Figure 4: Baseline performance for DUT acting as a user agent
                      client with associated media

Figure 5 shows the DUT playing the role of a user agent server (UAS),
absorbing requests and sending responses.  This model can be used to
baseline the performance of the DUT acting as a UAS with associated
media.

+--------+      Signaling request       +--------+
|        +----------------------------->|        |
| Tester |                              |  DUT   |
|  (EA)  |           Response           |        |
|        |<-----------------------------+        |
|        |<============ Media =========>|        |
+--------+                              +--------+

    Figure 5: Baseline performance for DUT acting as a user agent
                      server with associated media

Figure 6 shows the Tester acting as both the initiating and the
responding EA while the DUT/SUT forwards Session Attempts.

+--------+   Session   +--------+   Session   +--------+
|        |   Attempt   |        |   Attempt   |        |
|        |<------------+        |<------------+        |
|        |             |        |             |        |
|        |  Response   |        |  Response   |        |
| Tester +------------>|  DUT   +------------>| Tester |
|  (EA)  |             |        |             |  (EA)  |
|        |             |        |             |        |
+--------+             +--------+             +--------+

   Figure 6: DUT/SUT performance benchmark for session establishment
                             without media

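A non-normative sketch of the Figure 6 measurement follows.  One EA
originates Session Attempts toward the DUT while the other EA
terminates them, and the fraction of attempts established within a
threshold is computed.  The hook originate_attempt() is a hypothetical
stand-in for the EA's SIP engine and is not specified here.

   def originate_attempt(dut_address):
       """Hypothetical EA hook: send one Session Attempt toward the DUT
       and return the time, in seconds, until a success response is
       seen at the originating EA, or None if the attempt fails or
       times out."""
       raise NotImplementedError("supplied by the emulated agent")

   def establishment_ratio(dut_address, attempts, threshold_seconds):
       """Fraction of Session Attempts that become Established Sessions
       within the given threshold, as observed by the EAs on either
       side of the DUT/SUT."""
       established = 0
       for _ in range(attempts):
           delay = originate_attempt(dut_address)
           if delay is not None and delay <= threshold_seconds:
               established += 1
       return established / attempts
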
Figure 7 is used when performing those same benchmarks with
Associated Media traversing the DUT/SUT.

+--------+   Session   +--------+   Session   +--------+
|        |   Attempt   |        |   Attempt   |        |
|        |<------------+        |<------------+        |
|        |             |        |             |        |
|        |  Response   |        |  Response   |        |
| Tester +------------>|  DUT   +------------>| Tester |
|  (EA)  |             |        |             |  (EA)  |
|        |    Media    |        |    Media    |        |
|        |<===========>|        |<===========>|        |
+--------+             +--------+             +--------+

   Figure 7: DUT/SUT performance benchmark for session establishment
                      with media traversing the DUT

Figure 8 is to be used when performing those same benchmarks with
Associated Media, but where the media does not traverse the DUT/SUT.
Again, the benchmarking of the media is not within the scope of this
work item.  The SIP control signaling is benchmarked in the presence
of Associated Media to determine whether the SDP body of the
signaling and the handling of media impact the performance of the
DUT/SUT.

+--------+   Session   +--------+   Session   +--------+
|        |   Attempt   |        |   Attempt   |        |
|        |<------------+        |<------------+        |
|        |             |        |             |        |
|        |  Response   |        |  Response   |        |
| Tester +------------>|  DUT   +------------>| Tester |
|  (EA)  |             |        |             |  (EA)  |
|        |             |        |             |        |
+--------+             +--------+             +--------+
    /|\                                           /|\
     |                    Media                    |
     +=============================================+

   Figure 8: DUT/SUT performance benchmark for session establishment
                     with media external to the DUT

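To make concrete what signaling "in the presence of Associated Media"
means for the EA, the following non-normative sketch builds an INVITE
request carrying a minimal SDP offer.  The addresses, tags, and
identifiers are illustrative placeholders (the 192.0.2.0/24
documentation range is used); they are not values prescribed by this
document.

   def build_invite_with_sdp(call_id, from_uri, to_uri, media_port):
       # Minimal SDP offer describing one RTP audio stream.
       sdp = (
           "v=0\r\n"
           "o=ea 0 0 IN IP4 192.0.2.10\r\n"
           "s=benchmark\r\n"
           "c=IN IP4 192.0.2.10\r\n"
           "t=0 0\r\n"
           f"m=audio {media_port} RTP/AVP 0\r\n"
       )
       # The SDP above is pure ASCII, so len() equals its size in octets.
       request = (
           f"INVITE {to_uri} SIP/2.0\r\n"
           "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bKplaceholder\r\n"
           f"From: <{from_uri}>;tag=ea1\r\n"
           f"To: <{to_uri}>\r\n"
           f"Call-ID: {call_id}\r\n"
           "CSeq: 1 INVITE\r\n"
           "Max-Forwards: 70\r\n"
           "Content-Type: application/sdp\r\n"
           f"Content-Length: {len(sdp)}\r\n"
           "\r\n"
           f"{sdp}"
       )
       return request.encode("ascii")

Whether such a body measurably changes DUT/SUT performance is exactly
what the "with Associated Media" models are intended to reveal.
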
Figure 9 is used when performing benchmarks that require one or more
intermediaries to be in the signaling path.  The intent is to gather
benchmarking statistics with a series of DUTs in place.  In this
topology, the media is delivered end-to-end and does not traverse the
DUT.

                           SUT
        ------------------^^^^^^^^-------------
       /                                       \
+------+ Session  +---+ Session  +---+ Session  +------+
|      | Attempt  |   | Attempt  |   | Attempt  |      |
|      |<---------+   |<---------+   |<---------+      |
|      |          |   |          |   |          |      |
|      | Response |   | Response |   | Response |      |
|Tester+--------->|DUT+--------->|DUT|--------->|Tester|
| (EA) |          |   |          |   |          | (EA) |
|      |          |   |          |   |          |      |
+------+          +---+          +---+          +------+
   /|\                                             /|\
    |                     Media                     |
    +===============================================+

   Figure 9: DUT/SUT performance benchmark for session establishment
                 with multiple DUTs and end-to-end media

Figure 10 is used when performing benchmarks that require one or more
intermediaries to be in the signaling path.  The intent is to gather
benchmarking statistics with a series of DUTs in place.  In this
topology, the media is delivered hop-by-hop through each DUT.

                           SUT
        ------------------^^^^^^^^-------------
       /                                       \
+------+ Session  +---+ Session  +---+ Session  +------+
|      | Attempt  |   | Attempt  |   | Attempt  |      |
|      |<---------+   |<---------+   |<---------+      |
|      |          |   |          |   |          |      |
|      | Response |   | Response |   | Response |      |
|Tester+--------->|DUT+--------->|DUT|--------->|Tester|
| (EA) |          |   |          |   |          | (EA) |
|      |<========>|   |<========>|   |<========>|      |
+------+  Media   +---+  Media   +---+  Media   +------+

  Figure 10: DUT/SUT performance benchmark for session establishment
                 with multiple DUTs and hop-by-hop media

Figure 11 illustrates the SIP signaling for an Established Session.
The Tester acts as the EAs and initiates a Session Attempt with the
DUT/SUT.  When the Emulated Agent (EA) receives a 200 OK from the
DUT/SUT, that session is considered to be an Established Session.
The illustration indicates the three states of the session being
created by the EA: Attempting, Established, and Disconnecting.
Sessions can be one of two types: an Invite-Initiated Session (IS) or
a Non-Invite-Initiated Session (NS).  Failure of the DUT/SUT to
successfully respond within the Establishment Threshold Time is
considered a

skipping to change at page 19, line 24

   N/A.

Issues:
   None.

3.1.5.  Overload

Definition:
   Overload is defined as the state where a SIP server does not have
   sufficient resources to process all incoming SIP messages
   [RFC6357].

Discussion:
   The distinction between an overload condition and other failure
   scenarios is outside the scope of black box testing and of this
   document.  Under overload conditions, all or a percentage of
   Session Attempts will fail due to lack of resources.  In black box
   testing the cause of the failure is not explored.  The fact that a
   failure occurred, for whatever reason, will trigger the tester to
   reduce the offered load, as described in the companion methodology
   document [I-D.ietf-bmwg-sip-bench-meth].  SIP server resources

skipping to change at page 23, line 33

   reported with the maximum and average Standing Sessions for the
   DUT/SUT for the duration of the test.  In order to determine the
   maximum and average Standing Sessions on the DUT/SUT for the
   duration of the test, it is necessary to make periodic measurements
   of the number of Standing Sessions on the DUT/SUT.  The recommended
   value for the measurement period is 1 second.  Since we cannot
   directly poll the DUT/SUT, we take the number of Standing Sessions
   on the DUT/SUT to be the number of distinct calls, as measured by
   the number of distinct Call-IDs, that the EA is processing at the
   time of measurement.  The EA must make that count available for
   viewing and recording.  (A non-normative sketch of this periodic
   count appears after this entry.)

Measurement Units:
   Number of sessions

Issues:
   None.

See Also:
   Session Duration
   Session Attempt Rate

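The periodic count described in the Discussion above can be sketched
(non-normatively) as follows.  The callable active_call_ids is a
hypothetical hook returning the set of distinct Call-IDs the EA is
processing at the instant it is invoked; the 1-second default matches
the recommended measurement period.

   import time

   def sample_standing_sessions(active_call_ids, period=1.0,
                                duration=60.0):
       """Sample the number of Standing Sessions once per measurement
       period and return the maximum and the average over the test."""
       samples = []
       end = time.monotonic() + duration
       while time.monotonic() < end:
           samples.append(len(set(active_call_ids())))
           time.sleep(period)
       return max(samples), sum(samples) / len(samples)
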
skipping to change at page 25, line 11

   Signaling Plane

3.2.3.  SIP-Aware Stateful Firewall

Definition:
   Device in the test topology that provides protection against
   various types of security threats to which the Signaling and Media
   Planes of the EAs and Signaling Server are vulnerable.

Discussion:
   Threats may include Denial-of-Service, theft of service, and
   misuse of service.  The SIP-Aware Stateful Firewall MAY be an
   internal component or function of the Session Server.  The
   SIP-Aware Stateful Firewall MAY be a standalone device.  If it is
   a standalone device, it MUST be paired with a Signaling Server.
   If it is a standalone device, it MUST be benchmarked as part of a
   SUT.  SIP-Aware Stateful Firewalls MAY include Network Address
   Translation (NAT) functionality.  Ideally, the inclusion of the
   SIP-Aware Stateful Firewall in the SUT does not lower the measured
   values of the performance benchmarks.

Measurement Units:

skipping to change at page 36, line 30

           Network Interconnect Devices", RFC 2544, March 1999.

[RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
           A., Peterson, J., Sparks, R., Handley, M., and E.
           Schooler, "SIP: Session Initiation Protocol", RFC 3261,
           June 2002.

[I-D.ietf-bmwg-sip-bench-meth]
           Davids, C., Gurbani, V., and S. Poretsky, "Methodology for
           Benchmarking SIP Networking Devices",
           draft-ietf-bmwg-sip-bench-meth-06 (work in progress),
           November 2012.

7.2.  Informational References

[RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
           Switching Devices", RFC 2285, February 1998.

[RFC1242]  Bradner, S., "Benchmarking terminology for network
           interconnection devices", RFC 1242, July 1991.

[RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
           Jacobson, "RTP: A Transport Protocol for Real-Time
           Applications", STD 64, RFC 3550, July 2003.

[RFC3711]  Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K.
           Norrman, "The Secure Real-time Transport Protocol (SRTP)",
           RFC 3711, March 2004.

[RFC6357]  Hilt, V., Noel, E., Shen, C., and A. Abdelal, "Design
           Considerations for Session Initiation Protocol (SIP)
           Overload Control", RFC 6357, August 2011.

[I-D.ietf-soc-overload-control]
           Gurbani, V., Hilt, V., and H. Schulzrinne, "Session
           Initiation Protocol (SIP) Overload Control",
           draft-ietf-soc-overload-control-11 (work in progress),
           November 2012.

Appendix A.  White Box Benchmarking Terminology

Session Attempt Arrival Rate

Definition:
   The number of Session Attempts received at the DUT/SUT over a
   specified time period.

Discussion: