                                                                 Juniper
                                                                N. Bitar
                                                                   Nokia
                                                              R. Shekhar
                                                                 Juniper
                                                               J. Uttaro
                                                                    AT&T
                                                           W. Henderickx
                                                                   Nokia
Expires: July 12, 2018                                  January 12, 2018
          A Network Virtualization Overlay Solution using EVPN
                    draft-ietf-bess-evpn-overlay-11

Abstract
This document specifies how Ethernet VPN (EVPN) can be used as a
Network Virtualization Overlay (NVO) solution and explores the
various tunnel encapsulation options over IP and their impact on the
EVPN control-plane and procedures. In particular, the following
encapsulation options are analyzed: Virtual Extensible LAN (VXLAN),
Network Virtualization using Generic Routing Encapsulation (NVGRE),
and MPLS over Generic Routing Encapsulation (GRE). This specification
is also applicable to Generic Network Virtualization Encapsulation
(GENEVE) encapsulation; however, some incremental work is required
which will be covered in a separate document. This document also
specifies new multi-homing procedures for split-horizon filtering and
mass-withdraw. It also specifies EVPN route constructions for
VXLAN/NVGRE encapsulations and Autonomous System Boundary Router
(ASBR) procedures for multi-homing of Network Virtualization (NV)
Edge devices.
Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as
Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
Copyright and License Notice

Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
2  Requirements Notation and Conventions  . . . . . . . . . . . .  5
3  Terminology  . . . . . . . . . . . . . . . . . . . . . . . . .  5
4  EVPN Features  . . . . . . . . . . . . . . . . . . . . . . . .  6
5  Encapsulation Options for EVPN Overlays  . . . . . . . . . . .  8
   5.1 VXLAN/NVGRE Encapsulation  . . . . . . . . . . . . . . . .  8
     5.1.1 Virtual Identifiers Scope  . . . . . . . . . . . . . .  9
       5.1.1.1 Data Center Interconnect with Gateway  . . . . . .  9
       5.1.1.2 Data Center Interconnect without Gateway . . . . .  9
     5.1.2 Virtual Identifiers to EVI Mapping . . . . . . . . . . 10
       5.1.2.1 Auto Derivation of RT  . . . . . . . . . . . . . . 11
     5.1.3 Constructing EVPN BGP Routes . . . . . . . . . . . . . 13
   5.2 MPLS over GRE  . . . . . . . . . . . . . . . . . . . . . . 14
6  EVPN with Multiple Data Plane Encapsulations . . . . . . . . . 15
7  Single-Homing NVEs - NVE Residing in Hypervisor  . . . . . . . 15
   7.1 Impact on EVPN BGP Routes & Attributes for VXLAN/NVGRE
       Encapsulation  . . . . . . . . . . . . . . . . . . . . . . 16
   7.2 Impact on EVPN Procedures for VXLAN/NVGRE Encapsulation  . 16
8  Multi-Homing NVEs - NVE Residing in ToR Switch . . . . . . . . 17
   8.1 EVPN Multi-Homing Features . . . . . . . . . . . . . . . . 17
     8.1.1 Multi-homed Ethernet Segment Auto-Discovery  . . . . . 18
     8.1.2 Fast Convergence and Mass Withdraw . . . . . . . . . . 18
     8.1.3 Split-Horizon  . . . . . . . . . . . . . . . . . . . . 18
     8.1.4 Aliasing and Backup-Path . . . . . . . . . . . . . . . 18
     8.1.5 DF Election  . . . . . . . . . . . . . . . . . . . . . 19
   8.2 Impact on EVPN BGP Routes & Attributes . . . . . . . . . . 20
   8.3 Impact on EVPN Procedures  . . . . . . . . . . . . . . . . 20
     8.3.1 Split Horizon  . . . . . . . . . . . . . . . . . . . . 20
     8.3.2 Aliasing and Backup-Path . . . . . . . . . . . . . . . 21
     8.3.3 Unknown Unicast Traffic Designation  . . . . . . . . . 21
9  Support for Multicast  . . . . . . . . . . . . . . . . . . . . 22
10 Data Center Interconnections - DCI . . . . . . . . . . . . . . 23
   10.1 DCI using GWs . . . . . . . . . . . . . . . . . . . . . . 23
   10.2 DCI using ASBRs . . . . . . . . . . . . . . . . . . . . . 24
     10.2.1 ASBR Functionality with Single-Homing NVEs  . . . . . 25
     10.2.2 ASBR Functionality with Multi-Homing NVEs . . . . . . 25
11 Acknowledgement . . . . . . . . . . . . . . . . . . . . . . .  27
12 Security Considerations . . . . . . . . . . . . . . . . . . .  27
13 IANA Considerations . . . . . . . . . . . . . . . . . . . . .  28
14 References  . . . . . . . . . . . . . . . . . . . . . . . . .  28
   14.1 Normative References  . . . . . . . . . . . . . . . . . . 28
   14.2 Informative References  . . . . . . . . . . . . . . . . . 29
Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 30
1 Introduction

This document specifies how Ethernet VPN (EVPN) [RFC7432] can be used
as a Network Virtualization Overlay (NVO) solution and explores the
various tunnel encapsulation options over IP and their impact on the
EVPN control-plane and procedures. In particular, the following
encapsulation options are analyzed: Virtual Extensible LAN (VXLAN)
[RFC7348], Network Virtualization using Generic Routing Encapsulation
(NVGRE) [RFC7637], and MPLS over Generic Routing Encapsulation (GRE)
[RFC4023]. This specification is also applicable to Generic Network
Virtualization Encapsulation (GENEVE) encapsulation [GENEVE];
however, some incremental work is required which will be covered in a
separate document [EVPN-GENEVE]. This document also specifies new
multi-homing procedures for split-horizon filtering and mass-
withdraw. It also specifies EVPN route constructions for VXLAN/NVGRE
encapsulations and Autonomous System Boundary Router (ASBR)
procedures for multi-homing of Network Virtualization (NV) Edge
devices.
In the context of this document, a Network Virtualization Overlay
(NVO) is a solution to address the requirements of a multi-tenant
data center, especially one with virtualized hosts, e.g., Virtual
Machines (VMs) or virtual workloads. The key requirements of such a
solution, as described in [RFC7364], are:

- Isolation of network traffic per tenant

- Support for a large number of tenants (tens or hundreds of
thousands)
- Extending L2 connectivity among different VMs belonging to a given
tenant segment (subnet) across different Points of Delivery (PODs)
within a data center or between different data centers
- Allowing a given VM to move between different physical points of
attachment within a given L2 segment

The underlay network for NVO solutions is assumed to provide IP
connectivity between NVO endpoints (NVEs).

This document describes how Ethernet VPN (EVPN) can be used as an NVO
solution and explores applicability of EVPN functions and procedures.
In particular, it describes the various tunnel encapsulation options
skipping to change at page 5, line 33
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.
3 Terminology

Most of the terminology used in this document comes from [RFC7432]
and [RFC7365].
VXLAN: Virtual Extensible LAN
GRE: Generic Routing Encapsulation
NVGRE: Network Virtualization using Generic Routing Encapsulation
GENEVE: Generic Network Virtualization Encapsulation
POD: Point of Delivery
NV: Network Virtualization
NVO: Network Virtualization Overlay
NVE: Network Virtualization Endpoint
VNI: Virtual Network Identifier (for VXLAN)
VSID: Virtual Subnet Identifier (for NVGRE)
EVPN: Ethernet VPN
EVI: An EVPN instance spanning the Provider Edge (PE) devices
participating in that EVPN
MAC-VRF: A Virtual Routing and Forwarding table for Media Access
Control (MAC) addresses on a PE
IP-VRF: A Virtual Routing and Forwarding table for Internet Protocol
(IP) addresses on a PE
Ethernet Segment (ES): When a customer site (device or network) is
connected to one or more PEs via a set of Ethernet links, then that
set of links is referred to as an 'Ethernet segment'.

Ethernet Segment Identifier (ESI): A unique non-zero identifier that
identifies an Ethernet segment is called an 'Ethernet Segment
Identifier'.

Ethernet Tag: An Ethernet tag identifies a particular broadcast
skipping to change at page 6, line 41
Single-Active Redundancy Mode: When only a single PE, among all the
PEs attached to an Ethernet segment, is allowed to forward traffic
to/from that Ethernet segment for a given VLAN, then the Ethernet
segment is defined to be operating in Single-Active redundancy mode.

All-Active Redundancy Mode: When all PEs attached to an Ethernet
segment are allowed to forward known unicast traffic to/from that
Ethernet segment for a given VLAN, then the Ethernet segment is
defined to be operating in All-Active redundancy mode.
PIM-SM: Protocol Independent Multicast - Sparse-Mode
PIM-SSM: Protocol Independent Multicast - Source Specific Multicast
Bidir PIM: Bidirectional PIM
4 EVPN Features

EVPN [RFC7432] was originally designed to support the requirements
detailed in [RFC7209] and therefore has the following attributes
which directly address control plane scaling and ease of deployment
issues.

1) Control plane information is distributed with BGP and Broadcast
and Multicast traffic is sent using a shared multicast tree or with
ingress replication.

2) Control plane learning is used for MAC (and IP) addresses instead
of data plane learning. The latter requires the flooding of unknown
unicast and Address Resolution Protocol (ARP) frames; whereas, the
former does not require any flooding.
3) A Route Reflector (RR) is used to reduce a full mesh of BGP
sessions among PE devices to a single BGP session between a PE and
the RR. Furthermore, RR hierarchy can be leveraged to scale the
number of BGP routes on the RR.
4) Auto-discovery via BGP is used to discover PE devices
participating in a given VPN, PE devices participating in a given
redundancy group, tunnel encapsulation types, multicast tunnel type,
multicast members, etc.

5) All-Active multihoming is used. This allows a given customer
skipping to change at page 9, line 8
[RFC7637] encapsulation is based on GRE encapsulation and it mandates
the inclusion of the optional GRE Key field which carries the VSID.
There is a one-to-one mapping between the VSID and the tenant VLAN
ID, as described in [RFC7637], and the inclusion of an inner VLAN tag
is prohibited. This mode of operation in [RFC7637] maps to VLAN Based
Service in [RFC7432].
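As a non-normative illustration of the encapsulation just described,
the short Python sketch below builds the 8-byte GRE header used by
NVGRE: the Key Present (K) bit is set, the Key field carries the
24-bit VSID together with the 8-bit FlowID, and the Checksum and
Sequence Number fields are absent. (The function name and the FlowID
default of zero are illustrative choices, not taken from [RFC7637].)

```python
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE.

    Per [RFC7637]: the Key Present (K) bit is set, the Key field
    carries the 24-bit VSID plus an 8-bit FlowID, the protocol type
    is 0x6558 (Transparent Ethernet Bridging), and the Checksum and
    Sequence Number fields are absent.
    """
    if not 0 <= vsid < 1 << 24:
        raise ValueError("VSID is a 24-bit value")
    flags_and_version = 0x2000            # K=1; C, S, and Version all 0
    key = (vsid << 8) | (flow_id & 0xFF)  # VSID (24 bits) | FlowID (8 bits)
    return struct.pack("!HHI", flags_and_version, 0x6558, key)
```

Since the inner VLAN tag is prohibited, the Ethernet frame that
follows this header carries no 802.1Q tag; the VSID alone identifies
the tenant broadcast domain.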
As described in the next section there is no change to the encoding
of EVPN routes to support VXLAN or NVGRE encapsulation except for the
use of the BGP Encapsulation extended community to indicate the
encapsulation type (e.g., VXLAN or NVGRE). However, there is
potential impact to the EVPN procedures depending on where the NVE is
located (i.e., in hypervisor or TOR) and whether multi-homing
capabilities are required.
5.1.1 Virtual Identifiers Scope

Although VNIs are defined as 24-bit globally unique values, there are
scenarios in which it is desirable to use a locally significant value
for VNI, especially in the context of data center interconnect:
skipping to change at page 10, line 26
+----+  |IP Fabric|---|    |        |    |--|IP Fabric|  +----+
+----+  |         |   +----+        +----+  |         |  +----+
|NVE2|--|         |     |              |    |         |--|NVE4|
+----+  +---------+     +--------------+    +---------+  +----+

|<------ DC 1 ----->                     <---- DC2 ------>|

          Figure 2: Data Center Interconnect with ASBR
5.1.2 Virtual Identifiers to EVI Mapping

When the EVPN control plane is used in conjunction with VXLAN (or
NVGRE) encapsulation, just as [RFC7432] provides two options for
mapping broadcast domains (represented by VLAN IDs) to an EVI, there
are likewise two options for mapping broadcast domains represented by
VXLAN VNIs (or NVGRE VSIDs) to an EVI:
1. Option 1: Single Broadcast Domain per EVI

In this option, a single Ethernet broadcast domain (e.g., subnet)
represented by a VNI is mapped to a unique EVI. This corresponds to
the VLAN Based service in [RFC7432], where a tenant-facing interface,
whether logical (e.g., represented by a VLAN ID) or physical, gets
mapped to an EVPN instance (EVI). As such, a BGP RD and RT are needed
per VNI on every NVE. The advantage of this model is that it allows
the BGP RT constraint mechanisms to be used in order to limit the
skipping to change at page 14, line 38
described in section 8.2.2.2 of [TUNNEL-ENCAP] ("When a Valid VNI has
not been Signaled").

5.2 MPLS over GRE

The EVPN data-plane is modeled as an EVPN MPLS client layer sitting
over an MPLS PSN-tunnel server layer. Some of the EVPN functions
(split-horizon, aliasing, and backup-path) are tied to the MPLS
client layer. If MPLS over GRE encapsulation is used, then the EVPN
MPLS client layer can be carried over an IP PSN tunnel transparently.
Therefore, there is no impact to the EVPN procedures and associated
data-plane operation.
The existing standards for MPLS over GRE encapsulation as defined by
[RFC4023] can be used for this purpose; however, when it is used in
conjunction with EVPN, it is recommended that the GRE key field be
present and be used to provide a 32-bit entropy value only if the P
nodes can perform Equal-Cost Multipath (ECMP) hashing based on the
GRE key; otherwise, the GRE header SHOULD NOT include the GRE key.
The Checksum and Sequence Number fields MUST NOT be included and the
corresponding C and S bits in the GRE Packet Header MUST be set to
zero. A PE capable of supporting this encapsulation SHOULD advertise
its EVPN routes along with the Tunnel Encapsulation extended
community indicating MPLS over GRE encapsulation as described in the
previous section.
6 EVPN with Multiple Data Plane Encapsulations

The use of the BGP Encapsulation extended community per [TUNNEL-
ENCAP] allows each NVE in a given EVI to know each of the
encapsulations supported by each of the other NVEs in that EVI; i.e.,
each of the NVEs in a given EVI may support multiple data plane
encapsulations. An ingress NVE can send a frame to an egress NVE
only if the set of encapsulations advertised by the egress NVE forms
a non-empty intersection with the set of encapsulations supported by
the ingress NVE, and it is at the discretion of the ingress NVE which
encapsulation to choose from this intersection. (As noted in
section 5.1.3, if the BGP Encapsulation extended community is not
present, then the default MPLS encapsulation or a locally configured
encapsulation is assumed.)
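The ingress NVE's selection logic can be sketched non-normatively as
follows (the function name and the preference-order tie-break are
illustrative; the specification leaves the choice within the
intersection to the ingress NVE, and here an empty advertised set is
simplified to mean the default MPLS encapsulation):

```python
def choose_encapsulation(ingress_caps, egress_advertised, preference):
    """Pick a data-plane encapsulation toward an egress NVE.

    A frame may be sent only if the egress NVE's advertised set and
    the ingress NVE's supported set have a non-empty intersection;
    which member of that intersection is used is an ingress-local
    policy (here: a simple preference order).
    """
    # An absent BGP Encapsulation extended community is modeled as an
    # empty advertised set, taken here as the default MPLS encapsulation.
    egress = set(egress_advertised) or {"MPLS"}
    common = set(ingress_caps) & egress
    if not common:
        return None  # no common encapsulation: cannot forward
    # Deterministic ingress-local choice: first match in preference order.
    return min(common, key=preference.index)
```

For example, with the illustrative preference order ["VXLAN",
"NVGRE", "MPLS"], an ingress NVE supporting VXLAN and MPLS would
select VXLAN toward an egress NVE advertising VXLAN and NVGRE, and
would be unable to forward to an egress advertising only NVGRE.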
When a PE advertises multiple supported encapsulations, it MUST
advertise encapsulations that use the same EVPN procedures including
procedures associated with split-horizon filtering described in
section 8.3.1. For example, VXLAN and NVGRE (or MPLS and MPLS over
GRE) encapsulations use the same EVPN procedures and thus a PE can
advertise both of them and can support either of them or both of them
simultaneously. However, a PE MUST NOT advertise VXLAN and MPLS
encapsulations together because (a) the MPLS field of EVPN routes is
set to either an MPLS label or a VNI but not both and (b) some EVPN
procedures (such as split-horizon filtering) are different for
VXLAN/NVGRE and MPLS encapsulations.
An ingress node that uses shared multicast trees for sending
broadcast or multicast frames MAY maintain distinct trees for each
different encapsulation type.

It is the responsibility of the operator of a given EVI to ensure
that all of the NVEs in that EVI support at least one common
encapsulation. If this condition is violated, it could result in
service disruption or failure. The use of the BGP Encapsulation
extended community provides a method to detect when this condition is
skipping to change at page 16, line 50 skipping to change at page 17, line 38
In this section, we discuss the scenario where the NVEs reside in the
Top of Rack (ToR) switches AND the servers (where VMs are residing)
are multi-homed to these ToR switches. The multi-homing NVEs operate
in All-Active or Single-Active redundancy mode. If the servers are
single-homed to the ToR switches, then the scenario becomes similar
to that where the NVE resides on the hypervisor, as discussed in
Section 7, as far as the required EVPN functionality is concerned.
[RFC7432] defines a set of BGP routes, attributes and procedures to
support multi-homing. We first describe these functions and
procedures, then discuss which of these are impacted by the VXLAN
(or NVGRE) encapsulation and what modifications are required. As will
be seen later in this section, the only EVPN procedure impacted by a
non-MPLS overlay encapsulation (e.g., VXLAN or NVGRE), which provides
space for a single ID rather than a stack of labels, is the
split-horizon filtering for multi-homed Ethernet Segments described
in section 8.3.1.
8.1 EVPN Multi-Homing Features
In this section, we will recap the multi-homing features of EVPN to
highlight the encapsulation dependencies. The section only describes
the features and functions at a high level. For more details, the
reader is referred to [RFC7432].
skipping to change at page 19, line 12 skipping to change at page 19, line 46
destination frames to a multi-homed host or VM, in case of all-active
redundancy.
In NVEs where .1Q tagged frames are received from hosts, the DF
election should be performed based on host VLAN IDs (VIDs) per
section 8.5 of [RFC7432]. Furthermore, multi-homing PEs of a given
Ethernet Segment MAY perform DF election using configured IDs such as
VNI, EVI, or normalized VIDs, as long as the IDs are configured
consistently across the multi-homing PEs.
In GWs where VXLAN encapsulated frames are received, the DF election
is performed on VNIs. Again, it is assumed that for a given Ethernet
Segment, VNIs are unique and consistent (e.g., no duplicate VNIs
exist).
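For illustration, the default (modulo-based) DF election of section
8.5 of [RFC7432], applied here to VNIs, can be sketched as follows
(function and variable names are illustrative):

```python
import ipaddress

def elect_df(pe_addresses, vni):
    """Default DF election per section 8.5 of [RFC7432], with the VNI
    playing the role of the VLAN ID: the PEs attached to the Ethernet
    Segment are ordered by increasing IP address, and the DF for VNI V
    among N PEs is the PE at ordinal (V mod N)."""
    ranked = sorted(pe_addresses, key=lambda a: int(ipaddress.ip_address(a)))
    return ranked[vni % len(ranked)]

# Three multi-homing PEs (GWs) attached to the same Ethernet Segment.
pes = ["192.0.2.3", "192.0.2.1", "192.0.2.2"]
print(elect_df(pes, 10001))  # 10001 % 3 == 2 -> "192.0.2.3"
```

Since every PE sorts the same set of addresses, each arrives at the
same DF for a given VNI without any extra signaling.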
8.2 Impact on EVPN BGP Routes & Attributes
Since multi-homing is supported in this scenario, the entire set
of BGP routes and attributes defined in [RFC7432] is used. The
setting of the Ethernet Tag field in the MAC Advertisement, Ethernet
AD per EVI, and Inclusive Multicast routes follows that of section
skipping to change at page 20, line 49 skipping to change at page 21, line 36
Segment MUST NOT be configured.
8.3.2 Aliasing and Backup-Path
The Aliasing and the Backup-Path procedures for VXLAN/NVGRE
encapsulation are very similar to the ones for MPLS. In case of MPLS,
the Ethernet A-D route per EVI is used for Aliasing when the
corresponding Ethernet Segment operates in All-Active multi-homing,
and the same route is used for Backup-Path when the corresponding
Ethernet Segment operates in Single-Active multi-homing. In case of
VXLAN/NVGRE, the same route is used for the Aliasing and the Backup-
Path with the difference that the Ethernet Tag and VNI fields in the
Ethernet A-D per EVI route are set as described in section 5.1.3.
8.3.3 Unknown Unicast Traffic Designation
In EVPN, when an ingress PE uses ingress replication to flood unknown
unicast traffic to egress PEs, the ingress PE uses a different EVPN
MPLS label (from the one used for known unicast traffic) to identify
such BUM traffic. The egress PEs use this label to identify such BUM
traffic and thus apply DF filtering for All-Active multi-homed sites.
skipping to change at page 21, line 30 skipping to change at page 22, line 17
address arrives on the ingress PE, it floods it via ingress
replication to all the egress PE(s) and, since they are known to the
egress PE(s), multiple copies are sent to the All-Active multi-homed
site. It should be noted that such transient packet duplication only
happens when (a) the destination host is multi-homed via All-Active
redundancy mode, (b) flooding of unknown unicast is enabled in the
network, (c) ingress replication is used, and (d) traffic for the
destination host arrives at the ingress PE before it learns the
host MAC address via BGP EVPN advertisement. If it is desired to
avoid such transient packet duplication (however low that probability
may be), then VXLAN-GPE encapsulation needs to be used between these
PEs and the ingress PE needs to set the BUM Traffic Bit (B bit)
[VXLAN-GPE] to indicate that this is ingress-replicated BUM traffic.
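All four conditions (a)-(d) must hold simultaneously for the
transient duplication to occur; a minimal predicate capturing this
(names are illustrative):

```python
def transient_duplication_possible(all_active_multihomed,
                                   unknown_unicast_flooding,
                                   ingress_replication,
                                   mac_learned_via_bgp):
    """True only while all four conditions (a)-(d) from the text hold;
    the window closes as soon as the ingress PE learns the host MAC
    via a BGP EVPN advertisement (condition (d) stops holding)."""
    return (all_active_multihomed         # (a) All-Active redundancy
            and unknown_unicast_flooding  # (b) unknown-unicast flooding on
            and ingress_replication       # (c) ingress replication in use
            and not mac_learned_via_bgp)  # (d) MAC not yet learned via BGP

print(transient_duplication_possible(True, True, True, False))  # True
print(transient_duplication_possible(True, True, True, True))   # False
```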
9 Support for Multicast
The E-VPN Inclusive Multicast Ethernet Tag (IMET) route is used to
discover the multicast tunnels among the endpoints associated with a
given EVI (e.g., a given VNI) for VLAN-based service and a given
<EVI,VLAN> for VLAN-aware bundle service. All fields of this route
are set as described in section 5.1.3. The Originating router's IP
address field is set to the NVE's IP address. This route is tagged
with the PMSI Tunnel attribute, which is used to encode the type of
multicast tunnel to be used as well as the multicast tunnel
identifier. The tunnel encapsulation is encoded by adding the BGP
Encapsulation extended community as per section 5.1.1. For example,
the PMSI Tunnel attribute may indicate the multicast tunnel is of
type Protocol Independent Multicast - Sparse-Mode (PIM-SM), whereas
the BGP Encapsulation extended community may indicate the
encapsulation for that tunnel is of type VXLAN. The following tunnel
types, as defined in [RFC6514], can be used in the PMSI tunnel
attribute for VXLAN/NVGRE:
+ 3 - PIM-SSM Tree
+ 4 - PIM-SM Tree
+ 5 - Bidir-PIM Tree
+ 6 - Ingress Replication
In case of VXLAN and NVGRE encapsulation with locally-assigned VNIs,
just as in [RFC7432], each PE MUST advertise an IMET route to other
PEs in an EVPN instance for the multicast tunnel type that it uses
(i.e., ingress replication, PIM-SM, PIM-SSM, or Bidir-PIM tunnel).
However, for globally-assigned VNIs, each PE MUST advertise an IMET
route to other PEs in an EVPN instance for ingress replication or
PIM-SSM tunnels, and MAY advertise an IMET route for PIM-SM or
Bidir-PIM tunnels. In the case of a PIM-SM or Bidir-PIM tunnel, no
information in the IMET route is needed by the PE to set up these
tunnels.
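The advertisement rules in the preceding paragraph can be summarized
as a small decision table (a sketch; the function name and string
values are illustrative):

```python
def imet_requirement(vni_assignment, tunnel_type):
    """Return "MUST" or "MAY" for advertising an IMET route, per the
    rules above.

    vni_assignment: "local" (locally-assigned VNIs) or "global"
    tunnel_type: "ingress-replication", "pim-ssm", "pim-sm", "bidir-pim"
    """
    if vni_assignment == "local":
        # Locally-assigned VNIs: an IMET route is required for
        # whatever tunnel type the PE uses, just as in [RFC7432].
        return "MUST"
    if tunnel_type in ("ingress-replication", "pim-ssm"):
        return "MUST"
    # PIM-SM / Bidir-PIM with globally-assigned VNIs: nothing in the
    # IMET route is needed to set up the tunnel, so it is optional.
    return "MAY"

print(imet_requirement("global", "pim-sm"))               # MAY
print(imet_requirement("global", "ingress-replication"))  # MUST
```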
In the scenario where the multicast tunnel is a tree, both the
Inclusive as well as the Aggregate Inclusive variants may be used. In
the former case, a multicast tree is dedicated to a VNI, whereas in
the latter, a multicast tree is shared among multiple VNIs. For VNI-
based service, the Aggregate Inclusive mode is accomplished by having
the NVEs advertise multiple IMET routes with different Route Targets
(one per VNI) but with the same tunnel identifier encoded in the PMSI
tunnel attribute. For VNI-aware bundle service, the Aggregate
Inclusive mode is accomplished by having the NVEs advertise multiple
skipping to change at page 23, line 21 skipping to change at page 24, line 16
in detail in section 3.4 of [DCI-EVPN-OVERLAY].
10.2 DCI using ASBRs
This approach can be considered as the opposite of the first
approach: it favors simplification at DCI devices over NVEs, such
that larger MAC-VRF (and IP-VRF) tables need to be maintained on
NVEs, whereas DCI devices don't need to maintain any MAC (and IP)
forwarding tables. Furthermore, DCI devices do not need to terminate
and process routes related to multi-homing but rather relay these
messages for the establishment of an end-to-end Label Switched Path
(LSP). In other words, DCI devices in this approach operate similar
to ASBRs for inter-AS option B (section 10 of [RFC4364]). This
requires locally assigned VNIs to be used just like downstream
assigned MPLS VPN labels, where for all practical purposes the VNIs
function like 24-bit VPN labels. This approach is equally applicable
to data centers (or Carrier Ethernet networks) with MPLS
encapsulation.
In inter-AS option B, when an ASBR receives an EVPN route from its DC
over internal BGP (iBGP) and re-advertises it to other ASBRs, it re-
advertises the EVPN route by re-writing the BGP next-hops to itself,
thus losing the identity of the PE that originated the advertisement.
This re-write of the BGP next-hop adversely impacts the EVPN Mass
Withdraw route (Ethernet A-D per ES) and its procedure. However, it
does not impact the EVPN Aliasing mechanism/procedure because when
the Aliasing routes (Ethernet A-D per EVI) are advertised, the
receiving PE first resolves a MAC address for a given EVI into its
corresponding <ES,EVI> and subsequently resolves the <ES,EVI> into
multiple paths (and their associated next hops) via which the
<ES,EVI> is reachable. Since Aliasing and MAC routes are both
advertised on a per-EVI basis and they use the same RD and RT (per
EVI), the receiving PE can associate them together on a per-BGP-path
basis (e.g., per originating PE) and thus perform recursive route
resolution - e.g., a MAC is reachable via an <ES,EVI>, which in turn
is reachable via a set of BGP paths; thus the MAC is reachable via
that set of BGP paths. Since, on a per-EVI basis, the association of
MAC routes and the corresponding Aliasing route is fixed and
determined by the same RD and RT, there is no ambiguity when the BGP
next hop for these routes is re-written as these routes pass through
ASBRs - i.e., the receiving PE may receive multiple Aliasing routes
for the same EVI from a single next hop (a single ASBR), and it can
still create multiple paths toward that <ES,EVI>.
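The recursive resolution described above can be sketched as two
lookups (an illustrative sketch with assumed data structures, not an
implementation of BGP path selection):

```python
def resolve_mac_paths(mac, mac_routes, aliasing_routes):
    """Recursive route resolution: a MAC for a given EVI is resolved
    to its <ES,EVI>, and the <ES,EVI> is then resolved to the set of
    BGP paths learned from Ethernet A-D per-EVI (Aliasing) routes."""
    es_evi = mac_routes[mac]                   # step 1: MAC -> <ES,EVI>
    return aliasing_routes.get(es_evi, set())  # step 2: <ES,EVI> -> paths

# MAC routes and Aliasing routes for one EVI share the same RD/RT, so
# both tables stay consistently keyed even after ASBRs re-write the
# BGP next hop (illustrative values).
mac_routes = {"00:1a:2b:3c:4d:5e": ("ES1", "EVI10")}
aliasing_routes = {("ES1", "EVI10"): {"path-via-PE1", "path-via-PE2"}}
print(sorted(resolve_mac_paths("00:1a:2b:3c:4d:5e",
                               mac_routes, aliasing_routes)))
# ['path-via-PE1', 'path-via-PE2']
```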
However, when the BGP next hop address corresponding to the
originating PE is re-written, the association between the Mass
Withdraw route (Ether A-D per ES) and its corresponding MAC routes
cannot be made based on their RDs and RTs, because the RD for the
Mass Withdraw route is different from the one for the MAC routes.
Therefore, the functionality needed at the ASBRs and the receiving
PEs depends on whether the Mass Withdraw route is originated and
whether there is a need to handle route resolution ambiguity for this
route. The following two subsections describe the functionality
skipping to change at page 27, line 26 skipping to change at page 28, line 23
choose a particular tunnel for a particular payload type may lead to
user data packets getting misrouted, misdelivered, and/or dropped.

More broadly, the security considerations for the transport of IP
reachability information using BGP are discussed in [RFC4271] and
[RFC4272], and are equally applicable for the extensions described
in this document.
13 IANA Considerations
This document requests the following BGP Tunnel Encapsulation
Attribute Tunnel Types from IANA; they have already been allocated,
and the IANA registry needs to point to this document.
8 VXLAN Encapsulation
9 NVGRE Encapsulation
10 MPLS Encapsulation
11 MPLS in GRE Encapsulation
12 VXLAN GPE Encapsulation
14 References

14.1 Normative References
skipping to change at page 28, line 9 skipping to change at page 29, line 9
February 2014
[RFC7348] Mahalingam, M., et al, "VXLAN: A Framework for Overlaying
Virtualized Layer 2 Networks over Layer 3 Networks", RFC 7348, August
2014

[RFC7637] Garg, P., et al., "NVGRE: Network Virtualization using
Generic Routing Encapsulation", RFC 7637, September 2015

[TUNNEL-ENCAP] Rosen et al., "The BGP Tunnel Encapsulation
Attribute", draft-ietf-idr-tunnel-encaps-08, work in progress,
January 11, 2018.
[RFC4023] T. Worster et al., "Encapsulating MPLS in IP or Generic
Routing Encapsulation (GRE)", RFC 4023, March 2005
14.2 Informative References
[RFC7209] Sajassi et al., "Requirements for Ethernet VPN (EVPN)", RFC
7209, May 2014

[RFC4272] S. Murphy, "BGP Security Vulnerabilities Analysis", RFC
4272, January 2006.

[RFC7364] Narten et al., "Problem Statement: Overlays for Network
Virtualization", RFC 7364, October 2014.

[RFC7365] Lasserre et al., "Framework for DC Network Virtualization",
RFC 7365, October 2014.

[DCI-EVPN-OVERLAY] Rabadan et al., "Interconnect Solution for EVPN
Overlay networks", draft-ietf-bess-dci-evpn-overlay-05, work in
progress, July 18, 2017.

[RFC4271] Y. Rekhter, Ed., T. Li, Ed., S. Hares, Ed., "A Border
Gateway Protocol 4 (BGP-4)", RFC 4271, January 2006.

[RFC4364] Rosen, E., et al, "BGP/MPLS IP Virtual Private Networks
(VPNs)", RFC 4364, February 2006.

[RFC6514] R. Aggarwal et al., "BGP Encodings and Procedures for
Multicast in MPLS/BGP IP VPNs", RFC 6514, February 2012
[VXLAN-GPE] Maino et al., "Generic Protocol Extension for VXLAN",
draft-ietf-nvo3-vxlan-gpe-05, work in progress, October 30, 2017.
[GENEVE] J. Gross et al., "Geneve: Generic Network Virtualization
Encapsulation", draft-ietf-nvo3-geneve-05, September 2017

[EVPN-GENEVE] S. Boutros et al., "EVPN control plane for Geneve",
draft-boutros-bess-evpn-geneve-00.txt, June 2017
Contributors

S. Salam
K. Patel
D. Rao