Benchmarking Methodology Working                                C. Davids
Group                                    Illinois Institute of Technology
Internet-Draft                                                  V. Gurbani
Expires: September 15, 2011              Bell Laboratories, Alcatel-Lucent
                                                                S. Poretsky
                                                        Allot Communications
                                                             March 14, 2011

     Terminology for Benchmarking Session Initiation Protocol (SIP)
                            Networking Devices
                     draft-ietf-bmwg-sip-bench-term-03
Abstract

   This document provides a terminology for benchmarking SIP
   performance in networking devices.  Terms are included for test
   components, test setup parameters, and performance benchmark metrics
   for black-box benchmarking of SIP networking devices.  The
   performance benchmark metrics are obtained for the SIP control plane
   and media plane.  The terms are intended for use in a companion
   methodology document for complete performance characterization of a
   device in a variety of

skipping to change at page 1, line 36

   methodology document for SIP performance benchmarking because SIP
   allows a wide range of configuration and operational conditions that
   can influence performance benchmark measurements.  It is necessary
   to have terminology and methodology standards to ensure that
   reported benchmarks have consistent definition and were obtained
   following the same procedures.  Benchmarks can be applied to compare
   performance of a variety of SIP networking devices.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 15, 2011.
Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  5
     2.1.  Scope . . . . . . . . . . . . . . . . . . . . . . . . . .  6
     2.2.  Benchmarking Models . . . . . . . . . . . . . . . . . . .  7
   3.  Term Definitions  . . . . . . . . . . . . . . . . . . . . . . 12
     3.1.  Protocol Components . . . . . . . . . . . . . . . . . . . 12
       3.1.1.  Session . . . . . . . . . . . . . . . . . . . . . . . 13
       3.1.2.  Signaling Plane . . . . . . . . . . . . . . . . . . . 15
       3.1.3.  Media Plane . . . . . . . . . . . . . . . . . . . . . 16
       3.1.4.  Associated Media  . . . . . . . . . . . . . . . . . . 16
       3.1.5.  Overload  . . . . . . . . . . . . . . . . . . . . . . 17
       3.1.6.  Session Attempt . . . . . . . . . . . . . . . . . . . 18
       3.1.7.  Established Session . . . . . . . . . . . . . . . . . 18
       3.1.8.  Invite-initiated Session (IS) . . . . . . . . . . . . 19
       3.1.9.  Non-INVITE-initiated Session (NS) . . . . . . . . . . 19
       3.1.10. Session Attempt Failure . . . . . . . . . . . . . . . 20
       3.1.11. Standing Sessions Count . . . . . . . . . . . . . . . 20
     3.2.  Test Components . . . . . . . . . . . . . . . . . . . . . 21
       3.2.1.  Emulated Agent  . . . . . . . . . . . . . . . . . . . 21
       3.2.2.  Signaling Server  . . . . . . . . . . . . . . . . . . 21
       3.2.3.  SIP-Aware Stateful Firewall . . . . . . . . . . . . . 22
       3.2.4.  SIP Transport Protocol  . . . . . . . . . . . . . . . 22
     3.3.  Test Setup Parameters . . . . . . . . . . . . . . . . . . 23
       3.3.1.  Session Attempt Rate  . . . . . . . . . . . . . . . . 23
       3.3.2.  IS Media Attempt Rate . . . . . . . . . . . . . . . . 23
       3.3.3.  Establishment Threshold Time  . . . . . . . . . . . . 24
       3.3.4.  Session Duration  . . . . . . . . . . . . . . . . . . 25
       3.3.5.  Media Packet Size . . . . . . . . . . . . . . . . . . 25
       3.3.6.  Media Offered Load  . . . . . . . . . . . . . . . . . 26
       3.3.7.  Media Session Hold Time . . . . . . . . . . . . . . . 26
       3.3.8.  Loop Detection Option . . . . . . . . . . . . . . . . 27
       3.3.9.  Forking Option  . . . . . . . . . . . . . . . . . . . 27
     3.4.  Benchmarks  . . . . . . . . . . . . . . . . . . . . . . . 28
       3.4.1.  Registration Rate . . . . . . . . . . . . . . . . . . 28
       3.4.2.  Session Establishment Rate  . . . . . . . . . . . . . 29
       3.4.3.  Session Capacity  . . . . . . . . . . . . . . . . . . 30
       3.4.4.  Session Overload Capacity . . . . . . . . . . . . . . 31
       3.4.5.  Session Establishment Performance . . . . . . . . . . 31
       3.4.6.  Session Attempt Delay . . . . . . . . . . . . . . . . 32
       3.4.7.  IM Rate . . . . . . . . . . . . . . . . . . . . . . . 32
   4.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 33
   5.  Security Considerations . . . . . . . . . . . . . . . . . . . 33
   6.  Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 34
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . . 34
     7.1.  Normative References  . . . . . . . . . . . . . . . . . . 34
     7.2.  Informational References  . . . . . . . . . . . . . . . . 34
   Appendix A.  White Box Benchmarking Terminology . . . . . . . . . 35
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 35
1.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [RFC2119].  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.  While
   this document uses these keywords, this document is not a standards
   track document.  The term Throughput is defined in RFC 2544
   [RFC2544].

skipping to change at page 7, line 27
   o  REGISTER and INVITE requests may be challenged or remain
      unchallenged for authentication purposes, as this may impact the
      performance benchmarks.  Any observable performance degradation
      due to authentication is of interest to the SIP community.
      Whether or not the REGISTER and INVITE requests are challenged is
      a condition of test and will be recorded and reported.

   o  Re-INVITE requests are not considered within the scope of this
      work item.

   o  Only session establishment is considered for the performance
      benchmarks.  Session disconnect is not considered within the
      scope of this work item.

   o  SIP Overload [I-D.ietf-soc-overload-design] is within the scope
      of this work item.  We test to failure and then can continue to
      observe and record the behavior of the system after failures are
      recorded.  The cause of failure is not within the scope of this
      work.  We note the failure and may continue to test until a
      different failure or condition is encountered.  Considerations on
      how to handle overload are deferred to work progressing in the
      SOC working group [I-D.ietf-soc-overload-control].  Vendors are,
      of course, free to implement their specific overload control
      behavior as the expected test outcome if it is different from the
      IETF recommendations.  However, such behavior MUST be documented
      and interpreted appropriately across multiple vendor
      implementations.  This will make it more meaningful to compare
      the performance of different SIP overload implementations.

   o  IMS-specific scenarios are not considered, but test cases can be
      applied with 3GPP-specific SIP signaling and the P-CSCF as a DUT.
2.2.  Benchmarking Models

   This section shows the models to be used when benchmarking the SIP
   performance of a networking device.  Figure 1 shows the DUT playing
   the role of a user agent client (UAC), initiating requests and
   absorbing responses.  This model can be used to establish a
   performance baseline for the DUT acting as a UAC without associated
   media.

       +--------+   Signaling request    +--------+
       |        +----------------------->|        |
       |  DUT   |                        | Tester |
       |        |   Signaling response   |   EA   |
       |        |<-----------------------+        |
       +--------+                        +--------+

   Figure 1: Baseline performance for DUT acting as a user agent client
                         without associated media
   Figure 2 shows the DUT playing the role of a user agent server
   (UAS), absorbing requests and sending responses.  This model can be
   used to establish a performance baseline for the DUT acting as a UAS
   without associated media.

       +--------+   Signaling request    +--------+
       |        +----------------------->|        |
       | Tester |                        |  DUT   |
       |   EA   |        Response        |        |
       |        |<-----------------------+        |
       +--------+                        +--------+

   Figure 2: Baseline performance for DUT acting as a user agent server
                         without associated media
   Figure 3 shows the DUT playing the role of a user agent client
   (UAC), initiating requests and absorbing responses.  This model can
   be used to establish a performance baseline for the DUT acting as a
   UAC with associated media.

       +--------+   Signaling request    +--------+
       |        +----------------------->|        |
       |  DUT   |                        | Tester |
       |        |   Signaling response   |   EA   |
       |        |<-----------------------+        |
       |        |<======== Media =======>|        |
       +--------+                        +--------+

   Figure 3: Baseline performance for DUT acting as a user agent client
                          with associated media
   Figure 4 shows the DUT playing the role of a user agent server
   (UAS), absorbing requests and sending responses.  This model can be
   used to establish a performance baseline for the DUT acting as a UAS
   with associated media.

       +--------+   Signaling request    +--------+
       |        +----------------------->|        |
       | Tester |                        |  DUT   |
       |   EA   |        Response        |        |
       |        |<-----------------------+        |
       |        |<======== Media =======>|        |
       +--------+                        +--------+

   Figure 4: Baseline performance for DUT acting as a user agent server
                          with associated media
   Figure 5 shows that the Tester acts as the initiating and responding
   Emulated Agents while the DUT/SUT forwards Session Attempts.

       +--------+  Session   +--------+  Session   +--------+
       |        |  Attempt   |        |  Attempt   |        |
       |        |<-----------+        |<-----------+        |
       |        |            |        |            |        |
       |        |  Response  |        |  Response  |        |
       | Tester +----------->|  DUT   +----------->| Tester |
       |  (EA)  |            |        |            |  (EA)  |
       |        |            |        |            |        |
       +--------+            +--------+            +--------+

     Figure 5: DUT/SUT performance benchmark for session establishment
                               without media
   Figure 6 is used when performing those same benchmarks with
   Associated Media traversing the DUT/SUT.

       +--------+  Session   +--------+  Session   +--------+
       |        |  Attempt   |        |  Attempt   |        |
       |        |<-----------+        |<-----------+        |
       |        |            |        |            |        |
       |        |  Response  |        |  Response  |        |
       | Tester +----------->|  DUT   +----------->| Tester |
       |        |            |        |            |  (EA)  |
       |        |   Media    |        |   Media    |        |
       |        |<==========>|        |<==========>|        |
       +--------+            +--------+            +--------+

     Figure 6: DUT/SUT performance benchmark for session establishment
                       with media traversing the DUT
   Figure 7 is to be used when performing those same benchmarks with
   Associated Media, but the media does not traverse the DUT/SUT.
   Again, the benchmarking of the media is not within the scope of this
   work item.  The SIP control signaling is benchmarked in the presence
   of Associated Media to determine whether the SDP body of the
   signaling and the handling of media impact the performance of the
   DUT/SUT.

       +--------+  Session   +--------+  Session   +--------+
       |        |  Attempt   |        |  Attempt   |        |
       |        |<-----------+        |<-----------+        |
       |        |            |        |            |        |
       |        |  Response  |        |  Response  |        |
       | Tester +----------->|  DUT   +----------->| Tester |
       |        |            |        |            |  (EA)  |
       |        |            |        |            |        |
       +--------+            +--------+            +--------+
          /|\                                         /|\
           |                    Media                  |
           +===========================================+

     Figure 7: DUT/SUT performance benchmark for session establishment
                      with media external to the DUT
   Figure 8 is used when performing benchmarks that require one or more
   intermediaries to be in the signaling path.  The intent is to gather
   benchmarking statistics with a series of DUTs in place.  In this
   topology, the media is delivered end-to-end and does not traverse
   the DUT.

                                   SUT
      '---------------------------^^^^^^^^--------------------------`
     /                                                                \
    +------+  Session  +---+  Session  +---+  Session  +------+
    |      |  Attempt  |   |  Attempt  |   |  Attempt  |      |
    |      |<----------+   |<----------+   |<----------+      |
    |      |           |   |           |   |           |      |
    |      |  Response |   |  Response |   |  Response |      |
    |Tester+---------->|DUT+---------->|DUT|---------->|Tester|
    |      |           |   |           |   |           |      |
    |      |           |   |           |   |           |      |
    +------+           +---+           +---+           +------+
      /|\                                                /|\
       |                      Media                       |
       +==================================================+

     Figure 8: DUT/SUT performance benchmark for session establishment
                  with multiple DUTs and end-to-end media
   Figure 9 is used when performing benchmarks that require one or more
   intermediaries to be in the signaling path.  The intent is to gather
   benchmarking statistics with a series of DUTs in place.  In this
   topology, the media is delivered hop-by-hop through each DUT.

                                   SUT
      '---------------------------^^^^^^^^--------------------------`
     /                                                                \
    +------+  Session  +---+  Session  +---+  Session  +------+
    |      |  Attempt  |   |  Attempt  |   |  Attempt  |      |
    |      |<----------+   |<----------+   |<----------+      |
    |      |           |   |           |   |           |      |
    |      |  Response |   |  Response |   |  Response |      |
    |Tester+---------->|DUT+---------->|DUT|---------->|Tester|
    |      |           |   |           |   |           |      |
    |      |           |   |           |   |           |      |
    |      |<=========>|   |<=========>|   |<=========>|      |
    +------+   Media   +---+   Media   +---+   Media   +------+

     Figure 9: DUT/SUT performance benchmark for session establishment
                  with multiple DUTs and hop-by-hop media
   Figure 10 illustrates the SIP signaling for an Established Session.
   The Tester acts as the Emulated Agent(s) and initiates a Session
   Attempt with the DUT/SUT.  When the Emulated Agent (EA) receives a
   200 OK from the DUT/SUT, that session is considered to be an
   Established Session.  The illustration indicates three states of the
   session being created by the EA - Attempting, Established, and
   Disconnecting.  Sessions can be one of two types: Invite-Initiated
   Session (IS) or Non-Invite-Initiated Session (NS).  Failure of the
   DUT/SUT to successfully respond within the Establishment Threshold
   Time is considered a Session Attempt Failure.  SIP INVITE messages
   MUST include the SDP body to specify the Associated Media.  Use of
   Associated Media, to be sourced from the EA, is optional.  When
   Associated Media is used, it may traverse the DUT/SUT depending upon
   the type of DUT/SUT.  The Associated Media is shown in Figure 10 as
   "Media" connected to media ports M1 and M2 on the EA.  After the EA
   sends a BYE, the session disconnects.  Performance test cases for
   session disconnects are not considered in this work item (the BYE
   request is shown for completeness).
          EA               DUT/SUT         M1       M2
           |                  |             |        |
           |      INVITE      |             |        |
    -------+----------------->|             |        |
           |                  |             |        |

skipping to change at page 12, line 40

    Established               |             |<=====>|
           |                  |             |        |
           |       BYE        |             |        |
    -------+----------------->|             |        |
           |                  |             |        |
    Disconnecting             |             |        |
           |      200 OK      |             |        |
    -------|<-----------------|             |        |
           |                  |             |        |

                  Figure 10: Basic SIP test topology
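   The following sketch is not part of the terminology itself; it is a
   minimal, hypothetical illustration of how a Tester might track the
   three session states named above (Attempting, Established,
   Disconnecting) for an Invite-initiated Session.  The state names and
   the Establishment Threshold Time come from this document; the class
   name, method names, and the timer value of 5 seconds are
   illustrative assumptions only.

      # Minimal sketch (illustrative only): an Emulated Agent tracking
      # the session states shown in Figure 10.
      import time

      ESTABLISHMENT_THRESHOLD_TIME = 5.0  # seconds; assumed setup value

      class EmulatedSession:
          def __init__(self):
              # State entered when the EA sends the INVITE.
              self.state = "Attempting"
              self.started = time.monotonic()

          def on_final_response(self, status_code):
              # A 200 OK received within the Establishment Threshold
              # Time makes this an Established Session; otherwise the
              # attempt counts as a Session Attempt Failure.
              elapsed = time.monotonic() - self.started
              if status_code == 200 and elapsed <= ESTABLISHMENT_THRESHOLD_TIME:
                  self.state = "Established"
              else:
                  self.state = "Failed"

          def send_bye(self):
              # Session disconnect is out of scope for the benchmarks;
              # shown only for completeness, as in Figure 10.
              self.state = "Disconnecting"

      # Example: an attempt answered promptly with 200 OK becomes
      # an Established Session.
      s = EmulatedSession()
      s.on_final_response(200)
      assert s.state == "Established"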
3.  Term Definitions

3.1.  Protocol Components

3.1.1.  Session

   Definition:
      The combination of signaling and media messages and processes
      that enable two or more participants to communicate.

   Discussion:
      SIP messages in the signaling plane can be used to create and
      manage applications for one or more end users.  SIP is often used
      to create and manage media streams in support of applications.  A
      session always has a signaling component and may have a media
      component.  Therefore, a Session may be defined as signaling only
      or a combination of signaling and media (c.f. Associated Media,

skipping to change at page 14, line 18
   sessions are represented as an array session[x].

      Sessions will be represented as a vector array with three
      components, as follows:

      session->
         session[x].sig, the signaling component
         session[x].medc, the media control component (e.g. RTCP)
         session[x].med[y], an array of associated media streams (e.g.
         RTP, SRTP, RTSP, MSRP).  This media component may consist of
         zero or more media streams.

      Figure 11 models the vectors of the session.

   Measurement Units:
      N/A.

   Issues:
      None.

   See Also:
      Media Plane
      Signaling Plane
skipping to change at page 15, line 31

      [The body of Figure 11 is elided in this excerpt; it depicts the
      session components as vectors, including the sess.med axis.]

              Figure 11: Application or session components
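   As a concrete, purely illustrative rendering of the vector notation
   above, each element of the session[x] array can be modeled as a
   record with a signaling component, a media control component, and
   zero or more media streams.  The field names mirror session[x].sig,
   session[x].medc, and session[x].med[y] from the discussion; the
   Python types and example values are assumptions and not part of the
   terminology.

      # Illustrative sketch of the session[x] vector described above.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Session:
          sig: str                 # session[x].sig, the signaling component
          medc: str = "RTCP"       # session[x].medc, the media control component
          med: List[str] = field(default_factory=list)
                                   # session[x].med[y], zero or more media streams

      # A signaling-only session and a session with one RTP stream:
      sessions = [
          Session(sig="SIP"),
          Session(sig="SIP", med=["RTP"]),
      ]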
3.1.2.  Signaling Plane

   Definition:
      The control plane in which SIP messages [RFC3261] are exchanged
      between SIP Agents [RFC3261] to establish a connection for media
      exchange.

   Discussion:
      SIP messages are used to establish sessions in several ways:

skipping to change at page 17, line 22
      N/A.

   Issues:
      None.

3.1.5.  Overload

   Definition:
      Overload is defined as the state where a SIP server does not have
      sufficient resources to process all incoming SIP messages
      [I-D.ietf-soc-overload-design].  The distinction between an
      overload condition and other failure scenarios is outside the
      scope of this document, which describes black-box testing.

   Discussion:
      Under overload conditions, all or a percentage of Session
      Attempts will fail due to lack of resources.  SIP server
      resources may include CPU processing capacity, network bandwidth,
      input/output queues, or disk resources.  Any combination of
      resources may be fully utilized when a SIP server (the DUT/SUT)
      is in the overload condition.  For proxy-only devices, overload
      issues will

skipping to change at page 34, line 27
   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              June 2002.

   [I-D.ietf-bmwg-sip-bench-meth]
              Davids, C., Gurbani, V., and S. Poretsky, "Methodology
              for Benchmarking SIP Networking Devices",
              draft-ietf-bmwg-sip-bench-meth-03 (work in progress),
              March 2011.
7.2.  Informational References

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, July 2003.

   [RFC3711]  Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K.
              Norrman, "The Secure Real-time Transport Protocol
              (SRTP)", RFC 3711, March 2004.

   [I-D.ietf-soc-overload-design]
              Hilt, V., Noel, E., Shen, C., and A. Abdelal, "Design
              Considerations for Session Initiation Protocol (SIP)
              Overload Control", draft-ietf-soc-overload-design-05
              (work in progress), March 2011.

   [I-D.ietf-soc-overload-control]
              Gurbani, V., Hilt, V., and H. Schulzrinne, "Session
              Initiation Protocol (SIP) Overload Control",
              draft-ietf-soc-overload-control-02 (work in progress),
              February 2011.
Appendix A.  White Box Benchmarking Terminology

   Session Attempt Arrival Rate

   Definition:
      The number of Session Attempts received at the DUT/SUT over a
      specified time period.

   Discussion:

skipping to change at page 35, line 39

      Session attempts/sec

   Issues:
      None.

   See Also:
      Session Attempt
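   As a purely illustrative computation of the measurement unit above,
   the Session Attempt Arrival Rate is the number of Session Attempts
   observed at the DUT/SUT divided by the length of the specified time
   period; the counts and interval below are made-up example values.

      # Illustrative only: Session Attempt Arrival Rate in attempts/sec.
      attempts_received = 30000   # Session Attempts seen at the DUT/SUT
      period_seconds = 300        # specified time period

      arrival_rate = attempts_received / period_seconds
      print(arrival_rate, "session attempts/sec")   # prints 100.0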
Authors' Addresses

   Carol Davids
   Illinois Institute of Technology
   201 East Loop Road
   Wheaton, IL  60187
   USA

   Phone: +1 630 682 6024
   Email: davids@iit.edu


   Vijay K. Gurbani
   Bell Laboratories, Alcatel-Lucent
   1960 Lucent Lane
   Rm 9C-533
   Naperville, IL  60566
   USA

   Phone: +1 630 224 0216
   Email: vkg@bell-labs.com


   Scott Poretsky
   Allot Communications
   300 TradeCenter, Suite 4680
   Woburn, MA  08101
   USA

   Phone: +1 508 309 2179
   Email: sporetsky@allot.com