Network Working Group                                   C. Filsfils, Ed.
Internet-Draft                                           S. Previdi, Ed.
Intended status: Informational                       Cisco Systems, Inc.
Expires: September 4, 2017                                   J. Mitchell
                                                            Unaffiliated
                                                                E. Aries
                                                        Juniper Networks
                                                             P. Lapukhov
                                                                Facebook
                                                           March 3, 2017

             BGP-Prefix Segment in large-scale data centers
                draft-ietf-spring-segment-routing-msdc-03
Abstract

This document describes the motivation and benefits for applying
segment routing in BGP-based large-scale data-centers.  It describes
the design to deploy segment routing in those data-centers, for both
the MPLS and IPv6 dataplanes.
Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 4, 2017.
Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   3
2.  Large Scale Data Center Network Design Summary  . . . . . . .   3
  2.1.  Reference design  . . . . . . . . . . . . . . . . . . . .   4
3.  Some open problems in large data-center networks  . . . . . .   5
4.  Applying Segment Routing in the DC with MPLS dataplane  . . .   6
  4.1.  BGP Prefix Segment (BGP-Prefix-SID) . . . . . . . . . . .   6
  4.2.  eBGP Labeled Unicast (RFC3107)  . . . . . . . . . . . . .   7
    4.2.1.  Control Plane . . . . . . . . . . . . . . . . . . . .   7
    4.2.2.  Data Plane  . . . . . . . . . . . . . . . . . . . . .   9
    4.2.3.  Network Design Variation  . . . . . . . . . . . . . .  10
    4.2.4.  Global BGP Prefix Segment through the fabric  . . . .  10
    4.2.5.  Incremental Deployments . . . . . . . . . . . . . . .  11
  4.3.  iBGP Labeled Unicast (RFC3107)  . . . . . . . . . . . . .  12
5.  Applying Segment Routing in the DC with IPv6 dataplane  . . .  12
6.  Communicating path information to the host  . . . . . . . . .  13
7.  Addressing the open problems  . . . . . . . . . . . . . . . .  14
  7.1.  Per-packet and flowlet switching  . . . . . . . . . . . .  14
  7.2.  Performance-aware routing . . . . . . . . . . . . . . . .  15
  7.3.  Deterministic network probing . . . . . . . . . . . . . .  16
8.  Additional Benefits . . . . . . . . . . . . . . . . . . . . .  16
  8.1.  MPLS Dataplane with operational simplicity  . . . . . . .  16
  8.2.  Minimizing the FIB table  . . . . . . . . . . . . . . . .  17
  8.3.  Egress Peer Engineering . . . . . . . . . . . . . . . . .  17
  8.4.  Anycast . . . . . . . . . . . . . . . . . . . . . . . . .  18
9.  Preferred SRGB Allocation . . . . . . . . . . . . . . . . . .  18
10. IANA Considerations . . . . . . . . . . . . . . . . . . . . .  19
11. Manageability Considerations  . . . . . . . . . . . . . . . .  19
12. Security Considerations . . . . . . . . . . . . . . . . . . .  19
13. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  20
14. Contributors  . . . . . . . . . . . . . . . . . . . . . . . .  20
15. References  . . . . . . . . . . . . . . . . . . . . . . . . .  21
  15.1.  Normative References . . . . . . . . . . . . . . . . . .  21
  15.2.  Informative References . . . . . . . . . . . . . . . . .  21
Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  22
1.  Introduction

Segment Routing (SR), as described in
[I-D.ietf-spring-segment-routing], leverages the source routing
paradigm.  A node steers a packet through an ordered list of
instructions, called segments.  A segment can represent any
instruction, topological or service-based.  A segment can have a
semantic local to an SR node or global within an SR domain.  SR
allows a flow to be enforced through any topological path and service
chain while maintaining per-flow state only at the ingress node of
the SR domain.  Segment Routing can be applied to the MPLS and IPv6
data-planes.

The use-cases described in this document should be considered in the
context of the BGP-based large-scale data-center (DC) design
described in [RFC7938].  We extend it by applying SR with both the
IPv6 and MPLS dataplanes.
2.  Large Scale Data Center Network Design Summary

This section provides a brief summary of the informational document
[RFC7938] that outlines a practical network design suitable for
data-centers of various scales:

o  Data-center networks have highly symmetric topologies with
   multiple parallel paths between two server attachment points.  The
   well-known Clos topology is most popular among the operators.  In
   a Clos topology, the minimum number of parallel paths between two
   elements is determined by the "width" of the "Tier-1" stage.  See
   Figure 1 below for an illustration of the concept.

o  Large-scale data-centers commonly use a routing protocol, such as
   BGP4 [RFC4271], in order to provide endpoint connectivity.
   Recovery after a network failure is therefore driven either by
   local knowledge of directly available backup paths or by
   distributed signaling between the network devices.

o  Within data-center networks, traffic is load-shared using the
   Equal Cost Multipath (ECMP) mechanism.  With ECMP, every network
   device implements a pseudo-random decision, mapping packets to one
   of the parallel paths by means of a hash function calculated over
   certain parts of the packet, typically a combination of various
   packet header fields.
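The per-flow decision described above can be sketched as follows.
This is an illustrative model in Python, not the hash used by any
particular device: real routers apply vendor-specific hash functions
and seeds, and the function name here is ours.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Map a flow's 5-tuple to one of the parallel next-hops.

    Every packet of the same flow produces the same hash, so the
    whole flow is pinned to a single path for its lifetime.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

# All packets of one flow pick the same member of the ECMP group:
paths = ["Node3", "Node4"]
first = ecmp_next_hop("10.0.0.1", "192.0.2.11", 49152, 80, 6, paths)
assert first == ecmp_next_hop("10.0.0.1", "192.0.2.11", 49152, 80, 6, paths)
```

This pinning is exactly why long-lived "elephant" flows can crowd
out "mouse" flows sharing the same hash bucket, as discussed in
Section 3.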
The following is a schematic of a five-stage Clos topology, with four
devices in the "Tier-1" stage.  Notice that the number of paths
between Node1 and Node12 equals four: the paths have to cross all of
the Tier-1 devices.  At the same time, the number of paths between
Node1 and Node2 equals two, and the paths only cross Tier-2 devices.
Other topologies are possible, but for simplicity we'll only look
into the topologies that have a single path from Tier-1 to Tier-3.
The rest could be treated similarly, with a few modifications to the
logic.
2.1.  Reference design
                                Tier-1
                               +-----+
                               |NODE |
                            +->|  5  |--+
                            |  +-----+  |
                    Tier-2  |           |  Tier-2
                   +-----+  |  +-----+  |  +-----+
     +------------>|NODE |--+->|NODE |--+--|NODE |-------------+
     |       +-----|  3  |--+  |  6  |  +--|  9  |-----+       |
     |       |     +-----+     +-----+     +-----+     |       |
     |       |                                         |       |
     |       |     +-----+     +-----+     +-----+     |       |
     | +-----+---->|NODE |--+  |NODE |  +--|NODE |-----+-----+ |
     | |     | +---|  4  |--+->|  7  |--+--| 10  |---+ |     | |
     | |     | |   +-----+  |  +-----+  |  +-----+   | |     | |
     | |     | |            |           |            | |     | |
   +-----+ +-----+          |  +-----+  |          +-----+ +-----+
   |NODE | |NODE |  Tier-3  +->|NODE |--+  Tier-3  |NODE | |NODE |
   |  1  | |  2  |             |  8  |             | 11  | | 12  |
   +-----+ +-----+             +-----+             +-----+ +-----+
     | |     | |                 | |                 | |     | |
     A O     B O            <- Servers ->            Z O     O O

                    Figure 1: 5-stage Clos topology
In the reference topology illustrated in Figure 1, we assume:

o  Each node is its own AS (Node X has AS X).

   *  For simple and efficient route propagation filtering, Node5,
      Node6, Node7 and Node8 use the same AS, Node3 and Node4 use the
      same AS, and Node9 and Node10 use the same AS.

   *  For efficient usage of the scarce 2-byte Private Use AS pool,
      different Tier-3 nodes might use the same AS.

   *  Without loss of generality, we will simplify these details in
      this document and assume that each node has its own AS.

o  Each node peers with its neighbors with a BGP session.  If not
   specified, eBGP is assumed.  In a specific use-case, iBGP will be
   used, but this will be called out explicitly in that case.

o  Each node originates the IPv4 address of its loopback interface
   into BGP and announces it to its neighbors.

   *  The loopback of Node X is 192.0.2.x/32.

In this document, we also refer to the Tier-1, Tier-2 and Tier-3
nodes respectively as Spine, Leaf and ToR (top of rack) nodes.  When
a ToR node acts as a gateway to the "outside world", we call it a
border node.
3.  Some open problems in large data-center networks

The data-center network design summarized above provides means for
moving traffic between hosts with reasonable efficiency.  There are
a few open performance and reliability problems that arise in such a
design:

o  ECMP routing is most commonly realized per-flow.  This means that
   large, long-lived "elephant" flows may affect performance of
   smaller, short-lived "mouse" flows and reduce efficiency of per-
   flow load-sharing.  In other words, per-flow ECMP does not perform
   efficiently when flow lifetime distribution is heavy-tailed.
   Furthermore, due to hash-function inefficiencies it is possible to
   have frequent flow collisions, where more flows get placed on one
   path over the others.

o  Shortest-path routing with ECMP implements an oblivious routing
   model, which is not aware of the network imbalances.  If the
   network symmetry is broken, for example due to link failures,
   utilization hotspots may appear.  For example, if a link fails
   between Tier-1 and Tier-2 devices (e.g.  Node5 and Node9), Tier-3
   devices Node1 and Node2 will not be aware of that, since there are
   other paths available from the perspective of Node3.  They will
   continue sending roughly equal traffic to Node3 and Node4 as if
   the failure didn't exist, which may cause a traffic hotspot.

o  The absence of path visibility leaves transport protocols, such as
   TCP, with a "blackbox" view of the network.  Some TCP metrics,
   such as SRTT, MSS, CWND and a few others, could be inferred and
   cached based on past history, but those apply to destinations,
   regardless of the path that has been chosen to get there.  Thus,
   for instance, TCP is not capable of remembering "bad" paths, such
   as those that exhibited poor performance in the past.  This means
   that every new connection will be established obliviously (memory-
   less) with regards to the paths chosen before, or chosen by other
   nodes.
Further in this document, we are going to demonstrate how these
problems could be addressed within the framework of Segment Routing.
First, we will explain how to apply SR in the DC, for the MPLS and
IPv6 data-planes.
4.  Applying Segment Routing in the DC with MPLS dataplane

4.1.  BGP Prefix Segment (BGP-Prefix-SID)

A BGP Prefix Segment is a segment associated with a BGP prefix.  A
BGP Prefix Segment is a network-wide instruction to forward the
packet along the ECMP-aware best path to the related prefix.

The BGP Prefix Segment is defined as the BGP-Prefix-SID Attribute in
[I-D.ietf-idr-bgp-prefix-sid], which contains an index.  Throughout
this document, the BGP Prefix Segment Attribute is referred to as the
BGP-Prefix-SID and the encoded index as the label-index.
In this document, we make the network design decision to assume that
all the nodes are allocated the same SRGB (Segment Routing Global
Block), e.g.  [16000, 23999].  This is important to fulfill the
recommendation for operational simplification as explained in
[I-D.ietf-spring-segment-routing].

Note well that the use of a common SRGB in all nodes is not a
requirement; one could use a different SRGB at every node.  However,
this would make the operation of the DC fabric more complex, as the
label allocated to the loopback of a remote node would then be
different at every node.  This also may increase the complexity of
the centralized controller.  More on the SRGB allocation scheme is
described in Section 9.
For illustration purposes, when considering an MPLS data-plane, we
assume that the label-index allocated to prefix 192.0.2.x/32 is X.
As a result, a local label 1600x (16000 + x) is allocated for prefix
192.0.2.x/32 by each node throughout the DC fabric.

When the IPv6 data-plane is considered, we assume that Node X is
allocated the IPv6 address (segment) 2001:DB8::X.
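Under the common-SRGB assumption above, deriving the MPLS label for
any prefix is a single addition.  A minimal sketch (the function name
is ours, purely illustrative):

```python
SRGB_BASE = 16000  # common SRGB [16000, 23999] assumed on every node

def prefix_sid_label(label_index: int, srgb_base: int = SRGB_BASE) -> int:
    """Derive the MPLS label for a BGP-Prefix-SID label-index.

    Because the SRGB is shared fabric-wide, the result is identical
    on every node, which is what makes the label "global".
    """
    return srgb_base + label_index

# Node11's loopback 192.0.2.11/32 carries label-index 11:
assert prefix_sid_label(11) == 16011
```

With per-node SRGBs this computation would instead have to be done
against each node's own base, which is the operational complexity the
common-SRGB design decision avoids.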
4.2.  eBGP Labeled Unicast (RFC3107)

Referring to Figure 1 and [RFC7938], the following design
modifications are introduced:

o  Each node peers with its neighbors via an eBGP3107 session.

o  The forwarding plane at Tier-2 and Tier-1 is MPLS.

o  The forwarding plane at Tier-3 is either IP2MPLS (if the host
   sends IP traffic) or MPLS2MPLS (if the host sends MPLS-
   encapsulated traffic).

Figure 2 zooms into a path from server A to server Z within the
topology of Figure 1.
                 +-----+     +-----+     +-----+
     +---------->|NODE |     |NODE |     |NODE |
     |           |  4  |--+->|  7  |--+--| 10  |---+
     |           +-----+     +-----+     +-----+   |
     |                                             |
   +-----+                                      +-----+
   |NODE |                                      |NODE |
   |  1  |                                      | 11  |
   +-----+                                      +-----+
     |                                             |
     A                 <- Servers ->               Z

       Figure 2: Path from A to Z via nodes 1, 4, 7, 10 and 11
Referring to Figure 1 and Figure 2 and assuming the IP address, AS
and label-index allocation previously described, the following
sections detail the control plane operation and the data plane states
for the prefix 192.0.2.11/32 (loopback of Node11).
4.2.1.  Control Plane

Node11 originates 192.0.2.11/32 in BGP and allocates to it a BGP-
Prefix-SID with label-index 11 [I-D.ietf-idr-bgp-prefix-sid].

Node11 sends the following eBGP3107 update to Node10:

.  IP Prefix:  192.0.2.11/32
.  Label:  Implicit-Null
.  Next-hop:  Node11's interface address on the link to Node10
.  AS Path:  {11}
.  BGP-Prefix-SID:  Label-Index 11

Node10 receives the above update.  As it is SR capable, Node10 is
able to interpret the BGP-Prefix-SID and hence understands that it
should allocate the label from its own SRGB block, offset by the
Label-Index received in the BGP-Prefix-SID (16000 + 11, hence 16011),
to the NLRI instead of allocating a non-deterministic label out of a
dynamically allocated portion of the local label space.  The
implicit-null label in the NLRI tells Node10 that it is the
penultimate hop and MUST pop the top label on the stack before
forwarding traffic for this prefix to Node11.
Then, Node10 sends the following eBGP3107 update to Node7:

.  IP Prefix:  192.0.2.11/32
.  Label:  16011
.  Next-hop:  Node10's interface address on the link to Node7
.  AS Path:  {10, 11}
.  BGP-Prefix-SID:  Label-Index 11

Node7 receives the above update.  As it is SR capable, Node7 is able
to interpret the BGP-Prefix-SID and hence allocates the local
(incoming) label 16011 (16000 + 11) to the NLRI (instead of
allocating a "dynamic" local label from its label manager).  Node7
uses the label in the received eBGP3107 NLRI as the outgoing label
(the index is only used to derive the local/incoming label).

Node7 sends the following eBGP3107 update to Node4:

.  IP Prefix:  192.0.2.11/32
.  Label:  16011
.  Next-hop:  Node7's interface address on the link to Node4
.  AS Path:  {7, 10, 11}
.  BGP-Prefix-SID:  Label-Index 11

Node4 receives the above update.  As it is SR capable, Node4 is able
to interpret the BGP-Prefix-SID and hence allocates the local
(incoming) label 16011 to the NLRI (instead of allocating a "dynamic"
local label from its label manager).  Node4 uses the label in the
received eBGP3107 NLRI as the outgoing label (the index is only used
to derive the local/incoming label).

Node4 sends the following eBGP3107 update to Node1:

.  IP Prefix:  192.0.2.11/32
.  Label:  16011
.  Next-hop:  Node4's interface address on the link to Node1
.  AS Path:  {4, 7, 10, 11}
.  BGP-Prefix-SID:  Label-Index 11

Node1 receives the above update.  As it is SR capable, Node1 is able
to interpret the BGP-Prefix-SID and hence allocates the local
(incoming) label 16011 to the NLRI (instead of allocating a "dynamic"
local label from its label manager).  Node1 uses the label in the
received eBGP3107 NLRI as the outgoing label (the index is only used
to derive the local/incoming label).
4.2.2.  Data Plane
Referring to Figure 1, and assuming all nodes apply the same
advertisement rules described above and all nodes have the same SRGB
(16000-23999), here are the IP/MPLS forwarding tables for prefix
192.0.2.11/32 at Node1, Node4, Node7 and Node10.
   -----------------------------------------------
   Incoming label    | outgoing label | Outgoing
   or IP destination |                | Interface
   ------------------+----------------+-----------
        16011        |     16011      | ECMP{3, 4}
    192.0.2.11/32    |     16011      | ECMP{3, 4}
   ------------------+----------------+-----------

            Figure 3: Node1 Forwarding Table
   -----------------------------------------------
   Incoming label    | outgoing label | Outgoing
   or IP destination |                | Interface
   ------------------+----------------+-----------
        16011        |      POP       |    11
    192.0.2.11/32    |      N/A       |    11
   ------------------+----------------+-----------

                Node10 Forwarding Table
4.2.3.  Network Design Variation
A network design choice could consist of switching all the traffic
through Tier-1 and Tier-2 as MPLS traffic.  In this case, one could
filter away the IP entries at Node4, Node7 and Node10.  This might be
beneficial in order to optimize the forwarding table size.
A network design choice could consist in allowing the hosts to send
MPLS-encapsulated traffic based on the Egress Peer Engineering (EPE)
use-case as defined in [I-D.ietf-spring-segment-routing-central-epe].
For example, applications at HostA would send their Z-destined
traffic to Node1 with an MPLS label stack where the top label is
16011 and the next label is an EPE peer segment at Node11 directing
the traffic to Z.
4.2.4.  Global BGP Prefix Segment through the fabric
When the previous design is deployed, the operator enjoys global BGP-
Prefix-SID and label allocation throughout the DC fabric.
A few examples follow:
o  Normal forwarding to Node11: a packet with top label 16011
   received by any node in the fabric will be forwarded along the
   ECMP-aware BGP best-path towards Node11 and the label 16011 is
   penultimate-popped at Node10.
o  Traffic-engineered path to Node11: an application on a host behind
   Node1 might want to restrict its traffic to paths via the Spine
   node Node5.  The application achieves this by sending its packets
   with a label stack of {16005, 16011}.  BGP-Prefix-SID 16005
   directs the packet up to Node5 along the path (Node1, Node3,
   Node5).  BGP-Prefix-SID 16011 then directs the packet down to
   Node11 along the path (Node5, Node9, Node11).
4.2.5.  Incremental Deployments
The design previously described can be deployed incrementally.  Let
us assume that Node7 does not support the BGP-Prefix-SID and let us
show how the fabric connectivity is preserved.
From a signaling viewpoint, nothing would change: even though Node7
does not support the BGP-Prefix-SID, it does propagate the attribute
unmodified to its neighbors.
From a label allocation viewpoint, the only difference is that Node7
would allocate a dynamic (random) label to the prefix 192.0.2.11/32
(e.g. 12345) instead of the "hinted" label as instructed by the BGP-
Prefix-SID.  The neighbors of Node7 adapt automatically as they
always use the label in the BGP3107 NLRI as outgoing label.
Node4 does understand the BGP-Prefix-SID and hence allocates the
indexed label in the SRGB (16011) for 192.0.2.11/32.
As a result, all the data-plane entries across the network would be
unchanged except the entries at Node7 and its neighbor Node4 as shown
in the figures below.
The key point is that the end-to-end Label Switched Path (LSP) is
preserved because the outgoing label is always derived from the
received label within the BGP3107 NLRI.  The index in the BGP-Prefix-
SID is only used as a hint on how to allocate the local label (the
incoming label) but never for the outgoing label.
   ------------------------------------------
   Incoming label    | outgoing | Outgoing
   or IP destination |  label   | Interface
   ------------------+----------------------
        12345        |  16011   |    10

        Figure 7: Node7 Forwarding Table
   ------------------------------------------
   Incoming label    | outgoing | Outgoing
   or IP destination |  label   | Interface
   ------------------+----------------------
        16011        |  12345   |    7

        Figure 8: Node4 Forwarding Table
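The mixed allocation behavior above can be sketched as follows.  This
is an illustrative model only (the function name is hypothetical): an
SR-capable node derives its incoming label from the SRGB and the
index, a legacy node allocates a dynamic label, and in both cases the
outgoing label is simply the label received in the NLRI.

```python
SRGB_BASE = 16000


def incoming_label(sr_capable: bool, index: int, dynamic_label: int) -> int:
    # SR-capable nodes derive the incoming label from SRGB base + index;
    # legacy nodes fall back to a dynamically allocated label.
    return SRGB_BASE + index if sr_capable else dynamic_label


node7_in = incoming_label(False, 11, 12345)  # legacy Node7: 12345
node4_out = node7_in                         # outgoing = label received in NLRI
node4_in = incoming_label(True, 11, 0)       # SR-capable Node4: 16011
```

The end-to-end LSP is preserved because each node's outgoing label is
taken from its downstream neighbor's advertisement, regardless of how
that neighbor allocated it.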
The BGP-Prefix-SID can thus be deployed incrementally one node at a
time.
When deployed together with a homogeneous SRGB (same SRGB across the
fabric), the operator incrementally enjoys the global prefix segment
benefits as the deployment progresses through the fabric.
4.3.  iBGP Labeled Unicast (RFC3107)
The same exact design as eBGP3107 is used with the following
modifications:
   All nodes use the same AS number.
   iBGP3107 reflection with next-hop-self is used instead of
   eBGP3107.
   For simple and efficient route propagation filtering, Node5,
   Node6, Node7 and Node8 use the same Cluster ID, Node3 and Node4
   use the same Cluster ID, Node9 and Node10 use the same Cluster ID.
   AIGP metric ([RFC7311]) is likely applied to the BGP-Prefix-SID as
   part of a large-scale multi-domain design such as Seamless MPLS
   [I-D.ietf-mpls-seamless-mpls].
The control-plane behavior is mostly the same as described in the
previous section: the only difference is that the eBGP3107 path
propagation is simply replaced by an iBGP3107 path reflection with
next-hop changed to self.
The data-plane tables are exactly the same.
5.  Applying Segment Routing in the DC with IPv6 dataplane
The design described in [RFC7938] is reused with one single
modification.  We highlight it using the example of the reachability
to Node11 via spine node Node5.
Node5 originates 2001:DB8::5/128 with the attached BGP-Prefix-SID
advertising the support of the Segment Routing extension header (SRH,
[I-D.ietf-6man-segment-routing-header]) for IPv6 packets destined to
segment 2001:DB8::5 ([I-D.ietf-idr-bgp-prefix-sid]).

Tor11 originates 2001:DB8::11/128 with the attached BGP-Prefix-SID
advertising the support of the SRH for IPv6 packets destined to
segment 2001:DB8::11.
The control-plane and data-plane processing of all the other nodes in
the fabric is unchanged.  Specifically, the routes to 2001:DB8::5 and
2001:DB8::11 are installed in the FIB along the eBGP best-path to
Node5 (spine node) and Node11 (ToR node) respectively.
An application on HostA which needs to send traffic to HostZ via only
Node5 (spine node) can do so by sending IPv6 packets with an SRH
extension header.  The destination address and active segment is set
to 2001:DB8::5.  The next and last segment is set to 2001:DB8::11.
The application must only use IPv6 addresses that have been
advertised as capable for SRv6 segment processing (e.g. for which the
BGP prefix segment capability has been advertised).  How applications
learn this (e.g.: centralized controller and orchestration) is
outside the scope of this document.
6.  Communicating path information to the host
methods.  Here, we note that one way could be using a centralized
controller: the controller either tells the hosts of the prefix-to-
path mappings beforehand and updates them as needed (network event
driven push), or responds to the hosts making requests for a path to
a specific destination (host event driven pull).  It is also possible
to use a hybrid model, i.e., pushing some state from the controller
in response to particular network events, while the host pulls other
state on demand.
We note that when disseminating network-related data to the end-
hosts a trade-off is made to balance the amount of information vs.
the level of visibility in the network state.  This applies both to
push and pull models.  In the extreme case, the host would request
path information on every flow, and keep no local state at all.  On
the other end of the spectrum, information for every prefix in the
network along with available paths could be pushed and continuously
updated on all hosts.
7.  Addressing the open problems
This section demonstrates how the problems described above could be
solved using the segment routing concept.  It is worth noting that
segment routing signaling and data-plane are only parts of the
solution.  Additional enhancements, such as the centralized
controller mentioned previously, and host networking stack support
are required to implement the proposed solutions.
7.1.  Per-packet and flowlet switching
With the ability to choose paths on the host, one may go from per-
flow load-sharing in the network to per-packet or per-flowlet (see
[KANDULA04] for information on flowlets).  The host may select
different segment routing instructions either per packet, or per
flowlet, and route them over different paths.  This allows for
solving the "elephant flow" problem in the data-center and avoiding
link imbalances.
Note that traditional ECMP routing could be easily simulated with on-
host path selection, using the method proposed in VL2 (see
[GREENBERG09]).  The hosts would randomly pick a Tier-2 or Tier-1
device to "bounce" the packet off of, depending on whether the
destination is under the same Tier-2 nodes, or has to be reached
across Tier-1.  The host would use a hash function that operates on
per-flow invariants, to simulate per-flow load-sharing in the
network.
Using Figure 1 as reference, let us illustrate this concept assuming
that HostA has an elephant flow to HostZ called Flow-f.
Normally, a flow is hashed onto a single path.  Let us assume HostA
sends its packets associated with Flow-f with top label 16011 (the
label for the remote ToR, Node11, where HostZ is connected) and Node1
would hash all the packets of Flow-f via the same next-hop (e.g.
Node3).  Similarly, let us assume that leaf Node3 would hash all the
packets of Flow-f via the same next-hop (e.g.: spine node Node5).
This normal operation would restrict the elephant flow to a small
subset of the ECMP paths to HostZ and potentially create imbalance
and congestion in the fabric.
Leveraging the flowlet proposal, assuming HostA is made aware of 4
disjoint paths via intermediate segments 16005, 16006, 16007 and
16008 (the BGP-Prefix-SIDs of the 4 spine nodes) and also made aware
of the prefix segment of the remote ToR connected to the destination
(16011), then the application can break the elephant flow F into
flowlets F1, F2, F3, F4 and associate each flowlet with one of the
following 4 label stacks: {16005, 16011}, {16006, 16011}, {16007,
16011} and {16008, 16011}.  This would spread the load of the
elephant flow through all the ECMP paths available in the fabric and
re-balance the load.
7.2.  Performance-aware routing
Knowing the path associated with flows/packets, the end host may
deduce certain characteristics of the path on its own, and
additionally use the information supplied with path information
pushed from the controller or received via pull request.  The host
may further share its path observations with the centralized agent,
so that the latter may keep an up-to-date network health map to
assist other hosts with this information.
For example, an application A.1 at HostA may pin a TCP flow destined
to HostZ via Spine node Node5 using label stack {16005, 16011}.  The
application A.1 may collect information on packet loss, deduced from
TCP retransmissions and other signals (e.g. RTT increases).  A.1 may
additionally publish this information to a centralized agent, e.g.
after a flow completes, or periodically for longer lived flows.
Next, using both local and/or global performance data, application
A.1 as well as other applications sharing the same resources in the
DC fabric may pick the best path for a new flow, or update an
existing path (e.g.: when informed of congestion on an existing
path).
One particularly interesting instance of performance-aware routing is
dynamic fault-avoidance.  If some links or devices in the network
start discarding packets due to a fault, the end-hosts could detect
the path(s) being affected and steer their flows away from the
problem spot.  Similar logic applies to failure cases where packets
get completely black-holed, e.g. when a link goes down.
For example, an application A.1 informed about 5 paths to Z {16005,
16011}, {16006, 16011}, {16007, 16011}, {16008, 16011} and {16011}
might use the last one by default (for simplicity).  When performance
is degrading, A.1 might then start to pin TCP flows to each of the 4
other paths (each via a distinct spine) and monitor the performance.
It would then detect the faulty path and assign a negative preference
to the faulty path to avoid further flows using it.  Gradually, over
time, it may re-assign flows on the faulty path to eventually detect
the resolution of the trouble and start reusing the path.
By leveraging Segment Routing, one avoids issues associated with
oblivious ECMP hashing.  For example, if in the topology depicted in
Figure 1 a link between spine node Node5 and leaf node Node9 fails,
HostA may exclude the segment corresponding to Node5 from the prefix
matching the servers under Tier-2 device Node9.  In the push path
discovery model, the affected path mappings may be explicitly pushed
to all the servers for the duration of the failure.  The new mapping
would instruct them to avoid the particular Tier-1 node until the
link has recovered.  Alternatively, in the pull model, the
centralized controller may start steering new flows immediately after
it discovers the issue.  Until then, the existing flows may recover
using local detection of the path issues, as described in
Section 7.2.
7.3.  Deterministic network probing
Active probing is a well-known technique for monitoring network
elements' health, consisting of sending continuous packet streams
simulating network traffic to the hosts in the data-center.  Segment
routing makes it possible to prescribe the exact paths that each
probe or series of probes would be taking toward their destination.
This allows for fast correlation and detection of failed paths, by
processing information from multiple actively probing agents.  This
complements the data collected from the hosts' routing stacks as
described in Section 7.2.
For example, imagine a probe agent sending packets to all machines in
the data-center.  For every host, it may send packets over each of
the possible paths, knowing exactly which links and devices these
packets will be crossing.  Correlating results for multiple
destinations with the topological data, it may automatically isolate
a possible problem to a link or device in the network.
8.  Additional Benefits
8.1.  MPLS Dataplane with operational simplicity
As required by [RFC7938], no new signaling protocol is introduced.
The BGP-Prefix-SID is a lightweight extension to BGP Labeled Unicast
(RFC3107 [RFC3107]).  It applies to either eBGP- or iBGP-based
designs.
Specifically, LDP and RSVP-TE are not used.  These protocols would
drastically impact the operational complexity of the Data Center and
would not scale.  This is in line with the requirements expressed in
[RFC7938].
A key element of the operational simplicity is the deployment of the
design with a single and consistent SRGB across the DC fabric.
At every node in the fabric, the same label is associated with a
given BGP-Prefix-SID and hence a notion of global prefix segment
arises.
When a controller programs HostA to send traffic to HostZ via the
normally available BGP ECMP paths, the controller uses label 16011
associated with the ToR node connected to HostZ.  The controller does
not need to pick the label based on the ToR that the source host is
connected to.
In a classic BGP Labeled Unicast design applied to the DC fabric
illustrated in Figure 1, the ToR Node1 connected to HostA would most
likely allocate a different label for 192.0.2.11/32 than the one
allocated by ToR Node2.  As a consequence, the controller would need
to adapt the SR policy to each host, based on the ToR node that they
are connected to.  This adds state maintenance and synchronization
problems.  All of this unnecessary complexity is eliminated if a
single consistent SRGB is utilized across the fabric.
8.2.  Minimizing the FIB table
The designer may decide to switch all the traffic at Tier-1 and
Tier-2 based on MPLS, hence drastically decreasing the IP table size
at these nodes.
This is easily accomplished by encapsulating the traffic either
directly at the host or at the source ToR node, by pushing the BGP-
Prefix-SID of the destination ToR for intra-DC traffic, or of the
border node for inter-DC or DC-to-outside-world traffic.
8.3.  Egress Peer Engineering
It is straightforward to combine the design illustrated in this
document with the Egress Peer Engineering (EPE) use-case described in
[I-D.ietf-spring-segment-routing-central-epe].
In such a case, the operator is able to engineer its outbound traffic
on a per-host-flow basis, without incurring any additional state at
intermediate points in the DC fabric.
For example, the controller only needs to inject a per-flow state on
HostA to force it to send its traffic destined to a specific
Internet destination D via a selected border node (say Node12 in
Figure 1 instead of another border node, Node11) and a specific
egress peer of Node12 (say peer AS 9999 of local PeerNode segment
9999 at Node12 instead of any other peer which provides a path to the
destination D).  Any packet matching this state at HostA would be
encapsulated with the SR segment list (label stack) {16012, 9999}.
16012 would steer the flow through the DC fabric, leveraging any
ECMP, along the best path to border node Node12.  Once the flow gets
to border node Node12, the active segment is 9999 (because of PHP on
the upstream neighbor of Node12).  This EPE PeerNode segment forces
border node Node12 to forward the packet to peer AS 9999, without any
IP lookup at the border node.  There is no per-flow state for this
engineered flow in the DC fabric.  A benefit of segment routing is
that the per-flow state is only required at the source.
As well as allowing full traffic engineering control, such a design
also offers FIB table minimization benefits, as the Internet-scale
FIB at border node Node12 is not required if all FIB lookups are
avoided there by using EPE.
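The per-flow state injected at the source can be pictured as follows
(a hypothetical sketch reusing the SRGB of this document's examples;
the helper names are illustrative, not part of any BGP or SR
specification):

```python
# Hypothetical sketch of the EPE steering policy installed at HostA.
# The segment list combines a global BGP-Prefix-SID (steering to the
# chosen border node across the fabric ECMP) with a local EPE PeerNode
# segment (selecting the egress peer at that border node).

SRGB_BASE = 16000

def node_segment(index: int) -> int:
    """Global label for the BGP-Prefix-SID of a node."""
    return SRGB_BASE + index

def epe_segment_list(border_node_index: int, peer_node_segment: int) -> list:
    """Label stack pushed at the source host for one engineered flow."""
    return [node_segment(border_node_index), peer_node_segment]

# Steer a flow via border node Node12 (index 12), then out to peer
# AS 9999 via the local PeerNode segment 9999 at Node12.
assert epe_segment_list(12, 9999) == [16012, 9999]
```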
8.4.  Anycast
The design presented in this document preserves the availability and
load-balancing properties of the base design presented in
[I-D.ietf-spring-segment-routing].
For example, one could assign an anycast loopback 192.0.2.20/32 and
associate segment index 20 to it on border Node11 and Node12 (in
addition to their node-specific loopbacks).  Doing so, the EPE
controller could express a default "go-to-the-Internet via any border
node" policy as segment list {16020}.  Indeed, from any host in the
DC fabric or from any ToR node, 16020 steers the packet towards
border Node11 or Node12, leveraging ECMP where available along the
best paths to these nodes.
9.  Preferred SRGB Allocation
In the MPLS case, we recommend using the same SRGB at every node.
Different SRGBs in each node likely increase the complexity of the
solution, both from an operational viewpoint and from a controller
viewpoint.
From an operational viewpoint, it is much simpler to have the same
global label at every node for the same destination (MPLS
troubleshooting is then similar to IPv6 troubleshooting, where this
global property is a given).
From a controller viewpoint, this allows us to construct simple
policies applicable across the fabric.
Let us consider two applications, A and B, respectively connected to
ToR1 and ToR2.  A has two flows, FA1 and FA2, destined to Z.  B has
two flows, FB1 and FB2, destined to Z.  The controller wants FA1 and
FB1 to be load-shared across the fabric while FA2 and FB2 must be
respectively steered via Node5 and Node8.
Assuming a consistent unique SRGB across the fabric as described in
this document, the controller can simply do so by instructing A and B
to use {16011} respectively for FA1 and FB1, and by instructing A and
B to use {16005 16011} and {16008 16011} respectively for FA2 and
FB2.
Let us assume a design where the SRGB is different at every node and
where the SRGB of each node is advertised using the Originator SRGB
TLV of the BGP-Prefix-SID as defined in
[I-D.ietf-idr-bgp-prefix-sid]: the SRGB of Node K starts at value
K*1000 and the SRGB length is 1000 (e.g., ToR1's SRGB is [1000,
1999], ToR2's SRGB is [2000, 2999], ...).
In this case, the controller would not only need to collect and store
all of these different SRGBs (e.g., through the Originator SRGB TLV
of the BGP-Prefix-SID), it would also need to adapt the policy for
each host.  Indeed, the controller would instruct A to use {1011}
for FA1 while it would have to instruct B to use {2011} for FB1
(while with the same SRGB, both policies are the same: {16011}).
Even worse, the controller would instruct A to use {1005, 5011} for
FA2 while it would instruct B to use {2008, 8011} for FB2 (while with
the same SRGB, the second segment is the same across both policies:
16011).  When combining segments to create a policy, one needs to
carefully update the label of each segment.  This is obviously more
error-prone, more complex and more difficult to troubleshoot.
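The extra bookkeeping can be made concrete with a small sketch
(hypothetical helper names; the per-node SRGB scheme is the K*1000
example above):

```python
# Hypothetical sketch contrasting label computation with a shared SRGB
# versus the per-node SRGB example above (Node K's SRGB starts at K*1000).
# With per-node SRGBs, each label in a stack must be computed against the
# SRGB of the node that will interpret that label.

SHARED_SRGB_BASE = 16000

def shared_label(index: int) -> int:
    """With one fabric-wide SRGB, the label is node-independent."""
    return SHARED_SRGB_BASE + index

def per_node_label(interpreting_node: int, index: int) -> int:
    """Label for a segment index, in the SRGB of the node that reads it."""
    return interpreting_node * 1000 + index

# Shared SRGB: A (on ToR1) and B (on ToR2) receive identical policies.
assert [shared_label(5), shared_label(11)] == [16005, 16011]   # FA2
assert [shared_label(8), shared_label(11)] == [16008, 16011]   # FB2

# Per-node SRGBs: the first label is interpreted by the source ToR
# (ToR1 = Node1, ToR2 = Node2) and the second by the transit node
# (Node5 or Node8), so the two policies differ at every position.
assert [per_node_label(1, 5), per_node_label(5, 11)] == [1005, 5011]  # FA2
assert [per_node_label(2, 8), per_node_label(8, 11)] == [2008, 8011]  # FB2
```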
10.  IANA Considerations

This document does not make any IANA request.
11.  Manageability Considerations

The design and deployment guidelines described in this document rely
on the Segment Routing extensions defined in BGP by
[I-D.ietf-idr-bgp-prefix-sid].  This document doesn't introduce any
change in the manageability of the BGP-Prefix-SID attribute other
than the ones defined in [I-D.ietf-idr-bgp-prefix-sid].

In addition, the SRGB allocation scheme recommended here is based on
the recommendation made in [I-D.ietf-spring-segment-routing], and no
other manageability aspects are introduced in this document.
TBD 12. Security Considerations
13. Security Considerations This document proposes to apply Segment Routing to a well known
scalability requirement expressed in [RFC7938] using the BGP-Prefix-
SID as defined in [I-D.ietf-idr-bgp-prefix-sid].
TBD The design does not introduce any additional security concerns from
what expressed in [RFC7938] and [I-D.ietf-idr-bgp-prefix-sid].
13.  Acknowledgements

The authors would like to thank Benjamin Black, Arjun Sreekantiah,
Keyur Patel, Acee Lindem and Anoop Ghanwani for their comments and
review of this document.
14.  Contributors
Gaya Nagarajan
Facebook
US

Email: gaya@fb.com

Dmitry Afanasiev
Yandex
RU

Email: fl0w@yandex-team.ru

Tim Laberge
Cisco
US

Email: tlaberge@cisco.com

Edet Nkposong
Salesforce.com Inc.
US

Email: enkposong@salesforce.com

Mohan Nanduri
Microsoft
US

Email: mnanduri@microsoft.com

James Uttaro
ATT
US

Email: ju1738@att.com
Saikat Ray
Unaffiliated
US

Email: raysaikat@gmail.com
15.  References

15.1.  Normative References
[I-D.ietf-idr-bgp-prefix-sid]
Previdi, S., Filsfils, C., Lindem, A., Patel, K.,
Sreekantiah, A., Ray, S., and H. Gredler, "Segment Routing
Prefix SID extensions for BGP", draft-ietf-idr-bgp-prefix-
sid-04 (work in progress), December 2016.
[I-D.ietf-spring-segment-routing]
Filsfils, C., Previdi, S., Decraene, B., Litkowski, S.,
and R. Shakir, "Segment Routing Architecture", draft-ietf-
spring-segment-routing-11 (work in progress), February
2017.
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <http://www.rfc-editor.org/info/rfc2119>.

[RFC3107]  Rekhter, Y. and E. Rosen, "Carrying Label Information in
           BGP-4", RFC 3107, DOI 10.17487/RFC3107, May 2001,
           <http://www.rfc-editor.org/info/rfc3107>.

[RFC4271]  Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
           Border Gateway Protocol 4 (BGP-4)", RFC 4271,
           DOI 10.17487/RFC4271, January 2006,
           <http://www.rfc-editor.org/info/rfc4271>.

[RFC7311]  Mohapatra, P., Fernando, R., Rosen, E., and J. Uttaro,
           "The Accumulated IGP Metric Attribute for BGP", RFC 7311,
           DOI 10.17487/RFC7311, August 2014,
           <http://www.rfc-editor.org/info/rfc7311>.
15.2.  Informative References
[GREENBERG09]
           Greenberg, A., Hamilton, J., Jain, N., Kadula, S., Kim,
           C., Lahiri, P., Maltz, D., Patel, P., and S. Sengupta,
           "VL2: A Scalable and Flexible Data Center Network", 2009.
[I-D.ietf-6man-segment-routing-header]
           Previdi, S., Filsfils, C., Field, B., Leung, I., Linkova,
           J., Aries, E., Kosugi, T., Vyncke, E., and D. Lebrun,
           "IPv6 Segment Routing Header (SRH)", draft-ietf-6man-
           segment-routing-header-05 (work in progress), February
           2017.
[I-D.ietf-mpls-seamless-mpls]
           Leymann, N., Decraene, B., Filsfils, C., Konstantynowicz,
           M., and D. Steinberg, "Seamless MPLS Architecture", draft-
           ietf-mpls-seamless-mpls-07 (work in progress), June 2014.
[I-D.ietf-spring-segment-routing-central-epe]
           Filsfils, C., Previdi, S., Aries, E., and D. Afanasiev,
           "Segment Routing Centralized BGP Egress Peer Engineering",
           draft-ietf-spring-segment-routing-central-epe-04 (work in
           progress), February 2017.
[KANDULA04]
           Sinha, S., Kandula, S., and D. Katabi, "Harnessing TCP's
           Burstiness with Flowlet Switching", 2004.
[RFC7938]  Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of
           BGP for Routing in Large-Scale Data Centers", RFC 7938,
           DOI 10.17487/RFC7938, August 2016,
           <http://www.rfc-editor.org/info/rfc7938>.