Network Working Group                                      Aamer Akhter
Internet Draft                                               Rajiv Asati
Intended status: Informational                             Cisco Systems
Expires: May 2009                                       November 3, 2008


                  MPLS Forwarding Benchmarking Methodology
                draft-ietf-bmwg-mpls-forwarding-meth-01.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that
   any applicable patent or other IPR claims of which he or she is
   aware have been or will be disclosed, and any of which he or she
   becomes aware will be disclosed, in accordance with Section 6 of
   BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
        http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
        http://www.ietf.org/shadow.html

   This Internet-Draft will expire on May 3, 2009.

Abstract

   This document describes a methodology specific to the benchmarking
   of MPLS forwarding devices, limited to various types of packet-
   forwarding and delay measurements. It builds upon the tenets set
   forth in RFC2544 [RFC2544], RFC1242 [RFC1242] and other IETF
   Benchmarking Methodology Working Group (BMWG) efforts.  This
   document seeks to extend these efforts to the MPLS paradigm.

Table of Contents

   1. Introduction
   2. Document Scope
   3. Key Words to Reflect Requirements
   4. Test Methodology
   4.1. Test Considerations
   4.1.1. IGP Support
   4.1.2. Label Distribution Support
   4.1.3. Frame Sizes
   4.1.4. Time-to-Live (TTL) or Hop Limit
   4.1.5. Trial Duration
   4.1.6. Address Resolution and Dynamic Protocol State
   4.1.7. Abbreviations Used
   5. Reporting Format
   6. MPLS Forwarding Benchmarking Tests
   6.1. Throughput
   6.1.1. Throughput for MPLS Label Imposition
   6.1.2. Throughput for MPLS Label Swap
   6.1.3. Throughput for MPLS Label Disposition
   6.1.4. Throughput for MPLS Label Disposition (Aggregate)
   6.2. Latency Measurement
   6.3. Frame Loss Rate Measurement (FLR)
   6.4. System Recovery
   6.5. Reset
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgement
   10. References
   10.1. Normative References
   10.2. Informative References
   Author's Addresses
   Intellectual Property Statement
   Disclaimer of Validity
   Copyright Statement
   Acknowledgment

1. Introduction

   Over the past several years MPLS networks have gained greater
   popularity. However, there is no standard method to compare and
   contrast the varying implementations and their strong and weak
   points. This document proposes a methodology using common criteria
   for the comparison of various implementations of basic MPLS
   forwarding devices.

   The terms used in this document remain consistent with those defined
   in "Benchmarking Terminology for Network Interconnect Devices"
   RFC1242 [RFC1242]. This terminology SHOULD be consulted before using
   or applying the recommendations of this document.

2. Document Scope

   The purpose of this draft is to describe a methodology specific to
   the benchmarking of MPLS forwarding devices. The scope of this
   benchmarking will be limited to various types of packet-forwarding
   and delay measurements in a laboratory setting. It builds upon the
   tenets set forth in RFC2544 [RFC2544], RFC1242 [RFC1242] and other
   IETF Benchmarking Methodology Working Group (BMWG) efforts.

   MPLS [RFC3031] is a foundation enabling technology for other more
   advanced technologies such as Layer 3 MPLS-VPNs, Layer 2 MPLS-VPNs,
   and MPLS Traffic Engineering. This document focuses on MPLS
   forwarding characterization. This document is not a replacement for,
   but a complement to, RFC 2544.

3. Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [RFC2119].  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.  While
   this document uses these keywords, this document is not a standards
   track document.

4. Test Methodology

   The set of methodologies described in this document will use the
   topologies described in this section. An effort has been made to
   exclude superfluous equipment needs such that each test can be
   carried out with the minimum number of requirements.

   Figure 1 illustrates the sample topology in which the DUT is
   connected to the test ports on the test tool.

                    +-----------------+
    +---------+     |                 |     +---------+
    | Test    |     |                 |     | Test    |
     | Port A1 +-----+ DA1         DB1 +-----+ Port B1 |
    +---------+     |                 |     +---------+
    +---------+     |       DUT       |     +---------+
    | Test    |     |                 |     | Test    |
    | Port A2 +-----+ DA2         DB2 +-----+ Port B2 |
    +---------+     |                 |     +---------+
         ...        |                 |        ...
    +---------+     |                 |     +---------+
    | Test    |     +-----------------+     | Test    |
    | Port Ap |                             | Port Bp |
    +---------+                             +---------+

           Figure 1 Topology #1 for MPLS Forwarding Benchmarking

   Where

   p = number of ports; determined by the maximum unidirectional
   forwarding throughput of the DUT and the load capacity of the media
   between the Test Ports and the DUT.

   For example, if the DUT's forwarding throughput is 100 frames per
   second (fps), and the media capacity is 50 fps, then p = 2.
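
   Expressed as a formula (an illustrative restatement of the text
   above, assuming a uniform per-port media capacity):

      p = ceiling( DUT unidirectional forwarding throughput
                   / per-port media capacity )

   so the example above yields p = ceiling(100 / 50) = 2.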

   The exact throughput is a measured quantity obtained through
   testing. Throughput may vary depending on the number of ports used,
   and other factors. The number of ports used (p) SHOULD be reported
   for both Tx and Rx sides of DUT. Please see Test Setup in section 6.

4.1. Test Considerations

   This methodology assumes a full-duplex uniform medium topology. The
   medium used MUST be reported in each test result. Issues regarding
   mixed transmission media, speed mismatches, media header
   differences, etc., are not under consideration. Traffic-affecting
   features such as Flow control, QoS, Graceful Restart, etc. MUST be
   disabled, unless explicitly requested in the test case.
   Additionally, any non-essential traffic MUST be avoided.

4.1.1. IGP Support

   It is highly RECOMMENDED that all of the interfaces (A1, DA1, DB1,
   A2, etc.) on the DUT and test tool support an IGP such as IS-IS,
   OSPF, EIGRP, RIP, etc. Furthermore, some tests in this document
   consider whether the device is able to provide a stable control
   plane during heavy forwarding workloads. The route distribution
   method used (OSPF, IS-IS, EIGRP, RIP, etc.) MUST be reported.

4.1.2. Label Distribution Support

   The DUT and test tool must support at least one protocol for
   exchanging MPLS labels. The DUT and test tool MUST be capable of
   learning and advertising MPLS label bindings via the chosen
   protocol(s), and use them during packet forwarding at all times
   (including when the label bindings change). The most commonly used
   protocols are Label Distribution Protocol (LDP) [RFC5036], Resource
   Reservation Protocol-Traffic Engineering (RSVP-TE) [RFC5151] and
   Border Gateway Protocol (BGP) [RFC3107].

   All of the interfaces connected to the DUT, such as A1, DA1, DB1,
   A2, etc., SHOULD support LDP, RSVP-TE, and BGP for IPv4 or IPv6
   Forwarding Equivalence Classes (FECs).

   This document discourages the use of static labels to establish
   MPLS label switched paths, since they are not commonly used in
   production networks.

4.1.3. Frame Sizes

   Each test SHOULD be run with different (layer 2) frame sizes in
   different trials. The recommended sizes for IPv4 are 64, 128, 256,
   512, 1024, 1280 and 1518. Recommended sizes for other media can be
   found in RFC 2544 and IPv6 Benchmarking [RFC5180]. Frame sizes MUST
   be based on the pre-MPLS shim version of the frame.
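
   As a purely illustrative example (assuming the DUT imposes a single
   4-byte MPLS shim header):

      64-byte IPv4 frame offered by the test tool
      + 4-byte MPLS shim imposed by the DUT
      = 68 bytes on the labeled link

   yet the trial is still configured and reported as a 64-byte frame
   size trial.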

   In addition to the individual frame size trials, an IMIX traffic
   run (multiple simultaneous frame sizes intended to simulate real
   network traffic) MAY also be included. There is no standard for
   mixtures of frame sizes, and the results are subject to wide
   interpretation. See section 18 of RFC 2544.

   When using multiple simultaneous frame sizes, the DUT configuration
   MUST remain the same.

4.1.4. Time-to-Live (TTL) or Hop Limit

   The MPLS TTL, IPv4 TTL, or IPv6 Hop Limit (depending on which
   portion of the frame the DUT bases its forwarding behavior on) MUST
   be large enough for the frame to traverse the DUT.

   If TTL/Hop Limit Decrement is a configurable option on the DUT, the
   setting SHOULD be reported.

4.1.5. Trial Duration

   Unless otherwise specified, the test portion of each trial SHOULD be
   no less than 30 seconds when static routing is in place, and no less
   than 200 seconds when a dynamic routing protocol and LDP (default
   LDP holddown timer is 180 seconds) are being used.

   The longer trial time when dynamic routing protocols are being used
   is to verify that the DUT is able to maintain a stable control plane
   while the data-forwarding plane is under stress.

4.1.5.1. Traffic Verification

   In all cases, sent traffic MUST be accounted for, whether it was
   received on the wrong port, the correct port, or not received at
   all. Specifically, traffic loss (also referred to as frame loss) is
   defined as the traffic (i.e. one or more frames) not received where
   expected (i.e. received on an incorrect port, or received with
   incorrect layer 2 or above header information, etc.). In addition,
   the presence or absence of the MPLS header, the ethertype (0x8847
   vs. 0x0800), the checksum, frame sequencing and correct MPLS TTL
   decrementing MUST be verified in the received frame.

   Many test tools may, by default, only verify that they have received
   the embedded signature on the receive side. However, for MPLS header
   presence verification, some tests will require the MPLS header to be
   imposed while others will require a swap or disposition. Hence, this
   document requires the test tool to verify the MPLS stack depth. An
   even greater level of verification would be to check if the correct
   label was imposed, but that is considered out of scope for these
   tests.
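
   The sketch below illustrates one way a test tool script could
   perform the MPLS presence, stack depth and TTL checks described
   above. It assumes untagged Ethernet II framing and Python as the
   scripting language; the function name and returned fields are
   hypothetical and do not refer to any particular test tool's API.

      import struct

      MPLS_UNICAST = 0x8847   # ethertype of a labeled frame
      IPV4 = 0x0800           # ethertype of an unlabeled IPv4 frame

      def inspect_frame(frame: bytes):
          # Ethertype occupies bytes 12-13 of an untagged Ethernet II
          # frame.
          (ethertype,) = struct.unpack("!H", frame[12:14])
          if ethertype == IPV4:
              return {"mpls": False, "depth": 0, "top_ttl": None}
          if ethertype != MPLS_UNICAST:
              raise ValueError("unexpected ethertype")
          depth, offset, top_ttl = 0, 14, None
          while True:
              # Each label stack entry is 4 bytes:
              # label(20) | EXP(3) | bottom-of-stack(1) | TTL(8)
              (entry,) = struct.unpack("!I", frame[offset:offset + 4])
              depth += 1
              if top_ttl is None:
                  top_ttl = entry & 0xFF
              offset += 4
              if (entry >> 8) & 0x1:   # bottom-of-stack bit set
                  return {"mpls": True, "depth": depth,
                          "top_ttl": top_ttl}

   A receiver applying such a check can confirm, per frame, whether a
   label was imposed, swapped (same stack depth) or disposed, and
   whether the top-label TTL was decremented as expected.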

4.1.6. Address Resolution and Dynamic Protocol State

   If the test or media is making use of a dynamic protocol (e.g. ARP,
   OSPF, LDP), all state for the protocols should be pre-established
   before the start of the trial.

4.1.7. Abbreviations Used

   Please refer to Figure 1 for a topology view of the network. The
   following abbreviations are used in this document -

   M  := Module Side (could be A or B)

   P  := port number

   RN := Remote Network (can also be thought of as a network that is
   reachable via Mp).

   Y  := number of the network (i.e. the first network reachable via B1
   would be called B1RN1 and the 5th network would be called B1RN5).

5. Reporting Format

   For each test case, it is RECOMMENDED that the following variables
   be reported in addition to the specific parameters requested by the
   test case:

        Parameter                       Units or Examples

        Internet Protocol               IPv4, IPv6, Dual-Stack

        Label Distribution Protocol     LDP, RSVP-TE, BGP (or
                                        combinations)

        MPLS Forwarding Operation       Imposition, Swap,
                                        Disposition

        IGP                             ISIS, OSPF, EIGRP, RIP,
                                        static

        Throughput                      Frames per second

        Interface Type                  GigE, POS, ATM, etc.

        Interface Speed                 1 Gbps, 100 Mbps, etc.

        Interface Encapsulation         VLAN, PPP, HDLC

        Frame Size                      Bytes

        Number of A and B               1A, 2B
        interfaces (see Figure 1)

   The individual test cases may have additional reporting requirements
   that may refer to other RFCs.

6. MPLS Forwarding Benchmarking Tests

   MPLS is a different forwarding paradigm from IP. Unlike an IP
   packet, an MPLS packet may contain more than one MPLS header and may
   go through one of three forwarding operations - imposition, swap and
   disposition. Such characteristics require further granularity in
   MPLS forwarding benchmarking than that described in RFC2544. Thus
   the benchmarking includes, but is not limited to:

     1. Throughput

     2. Latency

     3. Frame Loss Rate

     4. System Recovery

     5. Reset

     6. MPLS EXP field Operations (including explicit-null cases)

     7. Negative Scenarios (TTL expiry, etc)

     8. Multicast

   This document focuses on the first five categories, in line with the
   spirit of RFC2544. All the benchmarking test cases described in this
   document are expected to, at a minimum, follow the 'Test Setup' and
   'Test Procedure' below -

   Test Setup

     Referring to Figure 1, a single A and B interface SHOULD be used
     (i.e. p = 1). However, if the forwarding throughput of the DUT is
     more than the media rate of a single interface, then additional A
     and B interfaces MUST be enabled so as to exceed the DUT's
     forwarding throughput. In such a case, the tool traffic should use
     IP addresses assigned to BpRN1 and BpAN as the IP destinations in
     a weighted round-robin fashion, as sketched below. The weighting
     ratio between BpRN1 and BpAN is a constant test parameter. A
     suggested ratio is 1:100 for BpAN:BpRN1. The traffic streams
     offered MUST conform to section 16 of RFC 2544.
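
     The following is a minimal sketch of the weighted round-robin
     destination selection suggested above, assuming Python on the
     test tool; the addresses shown are documentation-prefix
     placeholders and the 1:100 ratio is the suggested BpAN:BpRN1
     weighting.

        import itertools

        def destination_cycle(bpan_addr, bprn1_addr, ratio=(1, 100)):
            # One BpAN-destined frame for every 100 BpRN1-destined
            # frames.
            pattern = [bpan_addr] * ratio[0] + [bprn1_addr] * ratio[1]
            return itertools.cycle(pattern)

        # next(dests) supplies the IP destination of each generated
        # frame.
        dests = destination_cycle("192.0.2.1", "198.51.100.1")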

   Test Procedure (Refer to section 26 of RFC 2544)

     Send traffic from port(s) Ap towards the DUT at a constant load
     towards IP prefixes (BpRN1 addresses) advertised by the tool on
     the receive ports, for a fixed time interval.

     If any frame loss is detected, a new iteration is needed where the
     offered load is decreased and the sender transmits again. An
     iterative search algorithm MUST be used to determine the maximum
     offered frame rate with zero frame loss.

     Each iteration should involve varying the offered load of the
     traffic, while keeping the other parameters (test duration, number
     of interfaces, number of addresses, frame size, etc.) constant,
     until the maximum rate at which none of the offered frames are
     dropped is determined.
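
     One possible realization of the iterative search is a binary
     search over the offered load, sketched below in Python.
     run_trial() stands in for a hypothetical test-tool hook that
     offers 'rate' frames per second for the trial duration and
     returns the number of frames lost; the search resolution is an
     illustrative assumption.

        def find_throughput(run_trial, line_rate_fps,
                            resolution_fps=1000):
            low, high = 0, line_rate_fps
            best = 0
            while high - low > resolution_fps:
                rate = (low + high) // 2
                if run_trial(rate) == 0:    # no loss at this load
                    best, low = rate, rate  # search higher
                else:
                    high = rate             # search lower
            return best                     # maximum zero-loss rate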

6.1. Throughput

   This section describes the tests related to the characterization of
   the DUT's MPLS traffic forwarding.
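
   For convenience, the four throughput tests below are summarized by
   the frames that the test tool transmits on ports Ap and expects on
   ports Bp:

       Test    Operation                 Sent on Ap   Expected on Bp
       6.1.1   Imposition                IP           MPLS
       6.1.2   Swap                      MPLS         MPLS (same depth)
       6.1.3   Disposition (Untagged)    MPLS         IP
       6.1.4   Disposition (Aggregate)   MPLS         IP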

6.1.1. Throughput for MPLS Label Imposition

   Objective

     To obtain the DUT's Throughput (as per RFC 2544) during label
     imposition (i.e. IP to MPLS).

   Test Setup

     In addition to the setup described in section 6, the test tool
     should advertise the IP prefix(es), i.e. RNx (using a routing
     protocol as per section 4.1.1), and the associated MPLS label
     (using a label distribution protocol as per section 4.1.2) on its
     receive ports Bp to the DUT. The test tool may learn these IP
     prefixes on its transmit ports Ap from the DUT.

   Discussion

     The DUT's MPLS forwarding table must contain a non-reserved MPLS
     label value as the outgoing label for the learned prefix,
     resulting in an IP-to-MPLS forwarding operation. The test tool
     must receive MPLS packets on receive ports Bp (from the DUT) with
     the same label values that are advertised.

   Procedure

     Please see Test Procedure in section 6. Additionally, the test
     tool MUST send unlabeled IP packets on transmit ports Ap (with IP
     destinations belonging to the above IP prefix(es)), and expect to
     receive MPLS packets on receive ports Bp.

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

     Results for each test SHOULD be in the form of a table with a row
     for each of the tested frame sizes. Additional columns SHOULD
     include: offered load and measured throughput.

6.1.2. Throughput for MPLS Label Swap

   Objective

     To obtain the DUT's Throughput (as per RFC 2544) during label
     swapping (i.e. MPLS to MPLS).

   Test Setup

     In addition to the setup described in section 6, the test tool
     must be set up to advertise the IP prefix(es) (using a routing
     protocol as per section 4.1.1) and the associated MPLS label
     (using a label distribution protocol as per section 4.1.2) on the
     receive ports Bp, and learn the IP prefix(es) with the appropriate
     MPLS labels on the transmit ports Ap. The test tool then must use
     the learned MPLS label values and learned IP prefix values in MPLS
     packets transmitted on ports Ap.

   Discussion

     The DUT's MPLS forwarding table must contain non-reserved MPLS
     label values as the outgoing and incoming labels for the learned
     prefix, resulting in an MPLS-to-MPLS forwarding operation. The
     test tool must receive MPLS packets on receive ports Bp (from the
     DUT). The received MPLS packets must contain the same number of
     MPLS headers as those of the transmitted MPLS packets.

   Procedure

     Please see Test Procedure in section 6. Additionally, the test
     tool must send MPLS packets on its transmit ports Ap (with IP
     destination belonging to advertised IP prefix(es)), and expect to
     receive MPLS packets on its receive ports Bp.

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

     Results for each test SHOULD be in the form of a table with a row
     for each of the tested frame sizes. Additional columns SHOULD
     include: offered load and measured throughput.

6.1.3. Throughput for MPLS Label Disposition

   Objective

     To obtain the DUT's Throughput (as per RFC 2544) during label
     disposition (i.e. MPLS to IP) using an "Untagged" outgoing label.

   Test Setup

     In addition to the setup described in section 6, the test tool
     must be set up to advertise the IP prefix(es) (using a routing
     protocol as per section 4.1.1) without any MPLS label on the
     receive ports Bp, and learn the IP prefix(es) with the appropriate
     MPLS labels on the transmit ports Ap. The test tool then must use
     the learned MPLS label values and learned IP prefix values in MPLS
     packets transmitted on ports Ap.

   Discussion

     The DUT's MPLS forwarding table must contain an untagged outgoing
     label for the learned prefix, resulting in an MPLS-to-IP
     forwarding operation. The test tool must receive IP packets on
     receive ports Bp (from the DUT).

   Procedure

     Please see Test Procedure in section 6. Additionally, the test
     tool must send MPLS packets on its transmit ports Ap (with IP
     destination belonging to advertised IP prefix(es)), and expect to
     receive IP packets on its receive ports Bp.

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

     Results for each test SHOULD be in the form of a table with a row
     for each of the tested frame sizes. Additional columns SHOULD
     include: offered load and measured throughput.

6.1.4. Throughput for MPLS Label Disposition (Aggregate)

   Objective

     To obtain the DUT's Throughput (as per RFC 2544) during label
     disposition (i.e. MPLS to IP) using an "Aggregate" outgoing label.

   Test Setup

     In addition to setup described in section 6, the DUT should be
     provisioned such that it allocates an aggregate outgoing label to
     a prefix (where the prefix may be a 'BGP aggregated prefix', 'BGP
     VPN connected prefix', or an IGP aggregation that results in an
     aggregate label, etc., and must include the addresses belonging to
     the DUT receive ports Bp).

     The DUT must advertise the IP prefix(es) along with the MPLS
     label(s) via a label distribution protocol to the test tool on the
     test tool transmit ports Ap.

     The test tool then must use the learned MPLS label values and
     learned IP prefix values in MPLS packets transmitted on ports Ap.

   Discussion

     The DUT's MPLS forwarding table must contain an aggregate outgoing
     label, and its IP forwarding table must contain a valid entry for
     the learned prefix, resulting in an MPLS-to-IP forwarding
     operation (i.e. MPLS header removal followed by an IP lookup). The
     test tool must receive IP packets on receive ports Bp (from the
     DUT).

   Procedure

     Please see Test Procedure in section 6. Additionally, the test
     tool must send MPLS packets on its transmit ports Ap (with IP
     destination belonging to advertised IP prefix(es)), and expect to
     receive IP packets on its receive ports Bp.

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

     Results for each test SHOULD be in the form of a table with a row
     for each of the tested frame sizes. Additional columns SHOULD
     include: offered load and measured throughput.

6.2. Latency Measurement

   This measures the time taken by the DUT to forward the MPLS packet
   during various MPLS switching paths such as IP-to-MPLS or MPLS-to-
   MPLS or MPLS-to-IP involving one or more MPLS headers.

   The forwarding delay measurement requires the accurate propagation
   delay measurement as a prerequisite.

   One propagation delay measurement mechanism is to connect a test
   transmit port such as A1 and a test receive port such as B1 with a
   wire of length X (bypassing DA1 and DB1) and measure the time (t1)
   taken by a packet to travel from A1 to B1.

   Once the time t1 has been recorded, then the DUT should be inserted
   such that the test port A1 connects to DA1 and B1 connects to DB1,
   and the sum of A1-DA1 wire length and B1-DB1 wire length equals X.

   The packet should be sent from A1 to B1 such that the packet is
   received by DA1, which after consulting with its forwarding table,
   forwards the packet to B1 via DB1. The time (t2) taken by the packet
   to reach B1 (from A1) is recorded.

   The difference t2 - t1 provides an approximate measurement of the
   DUT's forwarding delay.

   The measurement for t2 should be performed under each of the three
   forwarding operations (IP-to-MPLS, MPLS-to-MPLS and MPLS-to-IP).
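
   As a purely illustrative example with hypothetical numbers: if t1 =
   10 microseconds over the direct wire of length X, and t2 = 60
   microseconds with the DUT inserted (total wire length still X), the
   DUT's forwarding delay for that operation is approximately t2 - t1
   = 50 microseconds.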

   Objective

     To obtain the maximum latency induced by the DUT during MPLS
     packet forwarding for each of three forwarding operations.

   Test Setup

     Follow the Test Setup guidelines established for each of the three
     MPLS forwarding operations in sections 6.1.1 (for IP-to-MPLS),
     6.1.2 (for MPLS-to-MPLS), and 6.1.3 and 6.1.4 (for MPLS-to-IP),
     one by one.

   Procedure

     Follow section 26.2 of RFC2544, in addition to the associated
     procedure for each MPLS forwarding operation, in accord with the
     Test Setup described earlier -

         IP-to-MPLS forwarding      (Imposition)   Section 6.1.1
         MPLS-to-MPLS forwarding    (Swap)         Section 6.1.2
         MPLS-to-IP forwarding      (Disposition)  Section 6.1.3
         MPLS-to-IP forwarding      (Aggregate)    Section 6.1.4

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

6.3.  Frame Loss Rate Measurement (FLR)

   This measures the percentage of MPLS frames that were not forwarded
   by the DUT, in an overloaded state, during various switching paths
   such as IP-to-MPLS (imposition), MPLS-to-MPLS (swap) or MPLS-to-IP
   (disposition).

   Please refer to RFC2544 section 26.3 for more details.
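
   For reference, RFC 2544 expresses the frame loss rate at each
   offered load as a percentage:

      Frame Loss Rate (%) =
          ( (input_count - output_count) * 100 ) / input_count

   where input_count is the number of frames offered to the DUT and
   output_count is the number of frames it forwarded.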

   Objective

     To obtain the frame loss rate, as defined in RFC1242, for each of
     three MPLS forwarding operations of a DUT, throughout the range of
     input data rates and frame sizes.

   Test Setup

     Follow the Test Setup guidelines established for each of the three
     MPLS forwarding operations in sections 6.1.1 (for IP-to-MPLS),
     6.1.2 (for MPLS-to-MPLS), and 6.1.3 and 6.1.4 (for MPLS-to-IP),
     one by one.

   Procedure

     Follow section 26.3 of RFC2544 and the associated procedure for
     each MPLS forwarding operation, one by one, in accord with the
     Test Setup described earlier -

         IP-to-MPLS forwarding      (Imposition)   Section 6.1.1
         MPLS-to-MPLS forwarding    (Swap)         Section 6.1.2
         MPLS-to-IP forwarding      (Disposition)  Section 6.1.3
         MPLS-to-IP forwarding      (Aggregate)    Section 6.1.4

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

6.4. System Recovery

   Objective

     To characterize the speed at which a DUT recovers from an overload
     condition.

   Test Setup

     Follow the Test Setup guidelines established for each of the three
     MPLS forwarding operations in sections 6.1.1 (for IP-to-MPLS),
     6.1.2 (for MPLS-to-MPLS), and 6.1.3 and 6.1.4 (for MPLS-to-IP),
     one by one.

   Procedure

     Please refer to RFC2544 section 26.5.

     Additionally, follow the associated procedure for each MPLS
     forwarding operation in the referenced sections, one by one, in
     accord with the Test Setup described earlier -

         IP-to-MPLS forwarding      (Imposition)   Section 6.1.1
         MPLS-to-MPLS forwarding    (Swap)         Section 6.1.2
         MPLS-to-IP forwarding      (Disposition)  Section 6.1.3
         MPLS-to-IP forwarding      (Aggregate)    Section 6.1.4

   Reporting Format

     Same as RFC2544 and the parameters of section 5.

6.5. Reset

   Objective

     To characterize the speed at which a DUT recovers from a device or
     software reset.

   Test Setup

     Follow the Test Setup guidelines established for each of the three
     MPLS forwarding operations in sections 6.1.1 (for IP-to-MPLS),
     6.1.2 (for MPLS-to-MPLS), and 6.1.3 and 6.1.4 (for MPLS-to-IP),
     one by one.

     For this test, all graceful-restart features MUST be disabled.

   Procedure

     Please refer to RFC2544 section 26.6. Examples of hardware and
     software resets are:

      Hardware reset - forwarding module resetting (e.g. OIR).

      Software reset - reset initiated through a CLI (e.g. reload).

     Additionally, follow the specific section for the procedure (and
     Test Setup) for each MPLS forwarding operation, one by one -

         IP-to-MPLS forwarding      (Imposition)   Section 6.1.1
         MPLS-to-MPLS forwarding    (Swap)         Section 6.1.2
         MPLS-to-IP forwarding      (Disposition)  Section 6.1.3
         MPLS-to-IP forwarding      (Aggregate)    Section 6.1.4

   Reporting Format

     Same as RFC2544 and the parameters of section 5, including the
     specific kind of reset performed.

7. Security Considerations

   Benchmarking activities, as described in this memo, are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the test sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   There are no specific security considerations within the scope of
   this document.

8. IANA Considerations

   There are no considerations for IANA at this time.

9. Acknowledgement

   The authors would like to thank Mo Khalid, who motivated us to write
   this document. We would like to thank Chip Popoviciu, Jay Karthik,
   Rajiv Pap, Samir Vapiwala, Silvija Andrijic Dry, Scott Bradner, Al
   Morton and Bill Cerveny for their careful review and suggestions.

10. References

10.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
             Network Interconnect Devices", RFC 2544, March 1999.

   [RFC1242] Bradner, S., Editor, "Benchmarking Terminology for Network
             Interconnection Devices", RFC 1242, July 1991.

   [RFC3031] Rosen, E., Viswanathan, A. and R. Callon, "Multiprotocol
             Label Switching Architecture", RFC 3031, January 2001.

   [RFC3107] Rosen, E. and Y. Rekhter, "Carrying Label Information in
             BGP-4", RFC 3107, May 2001.

   [RFC5036] Andersson, L., Minei, I. and B. Thomas, "LDP
             Specification", RFC 5036, October 2007.

10.2. Informative References

   [RFC5180] Popoviciu, C., et al, "IPv6 Benchmarking Methodology for
             Network Interconnect Devices", RFC 5180, May 2008.

   [RFC5151] Farrel, A., et al., "Inter-Domain MPLS and GMPLS Traffic
             Engineering -- Resource Reservation Protocol-Traffic
             Engineering (RSVP-TE) Extensions", RFC 5151, February
             2008.

Author's Addresses

   Aamer Akhter
   Cisco Systems
   7025 Kit Creek Road
   RTP, NC 27709
   USA

   Phone: 919 392 2564

   Email: aakhter@cisco.com

   Rajiv Asati
   Cisco Systems
   7025 Kit Creek Road
   RTP, NC 27709
   USA

   Phone: 919 392 8558

   Email: rajiva@cisco.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

Acknowledgment

   Special thanks to Scott Bradner for his very insightful comments
   delivered on very short notice.

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).