Internet Engineering Task Force                           Marc Lasserre
Internet Draft                                             Florin Balus
Intended status: Informational                           Alcatel-Lucent
Expires: January 2013
                                                           Thomas Morin
                                                   France Telecom Orange

                                                             Nabil Bitar
                                                                 Verizon

                                                           Yakov Rekhter
                                                                 Juniper

                                                            July 9, 2012


              Framework for DC Network Virtualization
               draft-lasserre-nvo3-framework-03.txt

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 9, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.

Abstract

Several IETF drafts relate to the use of overlay networks to support
large scale virtual data centers. This draft provides a framework
for Network Virtualization over L3 (NVO3) and is intended to help
plan a set of work items in order to provide a complete solution
set. It defines a logical view of the main components with the
intention of streamlining the terminology and focusing the solution
set.

Table of Contents

1. Introduction
   1.1. Conventions used in this document
   1.2. General terminology
   1.3. DC network architecture
   1.4. Tenant networking view
2. Reference Models
   2.1. Generic Reference Model
   2.2. NVE Reference Model
   2.3. NVE Service Types
      2.3.1. L2 NVE providing Ethernet LAN-like service
      2.3.2. L3 NVE providing IP/VRF-like service
3. Functional components
   3.1. Generic service virtualization components
      3.1.1. Virtual Access Points (VAPs)
      3.1.2. Virtual Network Instance (VNI)
      3.1.3. Overlay Modules and VN Context
      3.1.4. Tunnel Overlays and Encapsulation options
      3.1.5. Control Plane Components
         3.1.5.1. Auto-provisioning/Service discovery
         3.1.5.2. Address advertisement and tunnel mapping
         3.1.5.3. Tunnel management
   3.2. Service Overlay Topologies
4. Key aspects of overlay networks
   4.1. Pros & Cons
   4.2. Overlay issues to consider
      4.2.1. Data plane vs Control plane driven
      4.2.2. Coordination between data plane and control plane
      4.2.3. Handling Broadcast, Unknown Unicast and Multicast (BUM)
             traffic
      4.2.4. Path MTU
      4.2.5. NVE location trade-offs
      4.2.6. Interaction between network overlays and underlays
5. Security Considerations
6. IANA Considerations
7. References
   7.1. Normative References
   7.2. Informative References
8. Acknowledgments

1. Introduction

This document provides a framework for Data Center Network
Virtualization over L3 tunnels. This framework is intended to aid in
standardizing protocols and mechanisms to support large scale
network virtualization for data centers.

Several IETF drafts relate to the use of overlay networks for data
centers.

[NVOPS] defines the rationale for using overlay networks in order to
build large data center networks. The use of virtualization leads to
a very large number of communication domains and end systems to cope
with. Existing virtual network models used for data center networks
have known limitations, specifically in the context of multiple
tenants. These issues can be summarized as:

o Limited VLAN space

o FIB explosion due to handling of large numbers of MAC/IP
  addresses

o Spanning Tree limitations

o Excessive ARP handling

o Broadcast storms

o Inefficient Broadcast/Multicast handling

o Limited mobility/portability support

o Lack of service auto-discovery

Overlay techniques have been used in the past to address some of
these issues.

[OVCPREQ] describes the requirements for a control plane protocol
required by overlay border nodes to exchange overlay mappings.

This document provides reference models and functional components of
data center overlay networks as well as a discussion of technical
issues that have to be addressed in the design of standards and
mechanisms for large scale data centers.

1.1. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC-2119 [RFC2119].

In this document, these words will appear with that interpretation
only when in ALL CAPS. Lower case uses of these words are not to be
interpreted as carrying RFC-2119 significance.

1.2. General terminology

This document uses the following terminology:

NVE: Network Virtualization Edge. It is a network entity that sits
on the edge of the NVO3 network. It implements network
virtualization functions that allow for L2 and/or L3 tenant
separation and for hiding tenant addressing information (MAC and IP
addresses). An NVE could be implemented as part of a virtual switch
within a hypervisor, a physical switch or router, a Network Service
Appliance, or even be embedded within an End Station.

VN: Virtual Network. This is a virtual L2 or L3 domain that belongs
to a tenant.

VNI: Virtual Network Instance. This is one instance of a virtual
overlay network. Two Virtual Networks are isolated from one another
and may use overlapping addresses.

Virtual Network Context or VN Context: Field that is part of the
overlay encapsulation header which allows the encapsulated frame to
be delivered to the appropriate virtual network endpoint by the
egress NVE. The egress NVE uses this field to determine the
appropriate virtual network context in which to process the packet.
This field MAY be an explicit, unique (to the administrative domain)
virtual network identifier (VNID) or MAY express the necessary
context information in other ways (e.g. a locally significant
identifier).

VNID: Virtual Network Identifier. In the case where the VN context
has global significance, this is the ID value that is carried in
each data packet in the overlay encapsulation that identifies the
Virtual Network the packet belongs to.

Underlay or Underlying Network: This is the network that provides
the connectivity between NVEs. The Underlying Network can be
completely unaware of the overlay packets. Addresses within the
Underlying Network are also referred to as "outer addresses" because
they exist in the outer encapsulation. The Underlying Network can
use a completely different protocol (and address family) from that
of the overlay.

Data Center (DC): A physical complex housing physical servers,
network switches and routers, Network Service Appliances and
networked storage. The purpose of a Data Center is to provide
application and/or compute and/or storage services. One such service
is virtualized data center services, also known as Infrastructure as
a Service.

Virtual Data Center or Virtual DC: A container for virtualized
compute, storage and network services. Managed by a single tenant, a
Virtual DC can contain multiple VNs and multiple Tenant End Systems
that are connected to one or more of these VNs.

VM: Virtual Machine. Several Virtual Machines can share the
resources of a single physical computer server using the services of
a Hypervisor (see below definition).

Hypervisor: Server virtualization software running on a physical
compute server that hosts Virtual Machines. The hypervisor provides
shared compute/memory/storage and network connectivity to the VMs
that it hosts. Hypervisors often embed a Virtual Switch (see below).

Virtual Switch: A function within a Hypervisor (typically
implemented in software) that provides similar services to a
physical Ethernet switch. It switches Ethernet frames between VMs'
virtual NICs within the same physical server, or between a VM and a
physical NIC card connecting the server to a physical Ethernet
switch. It also enforces network isolation between VMs that should
not communicate with each other.

Tenant: A customer who consumes virtualized data center services
offered by a cloud service provider. A single tenant may consume one
or more Virtual Data Centers hosted by the same cloud service
provider.

Tenant End System: It defines an end system of a particular tenant,
which can be for instance a virtual machine (VM), a non-virtualized
server, or a physical appliance.

ELAN: MEF ELAN, multipoint to multipoint Ethernet service.

EVPN: Ethernet VPN as defined in [EVPN].

1.3. DC network architecture

A generic architecture for Data Centers is depicted in Figure 1:

                             ,---------.
                           ,'           `.
                          (  IP/MPLS WAN )
                           `.           ,'
                             `-+------+'
                          +--+--+   +-+---+
                          |DC GW|+-+|DC GW|
                          +-+---+   +-----+
                               |    /
                            .--. .--.
                          (    '    '.--.
                       .-.'  Intra-DC    '
                      (      network      )
                       (              .'-'
                        '--'._.'.  )\ \
                       / /    '--'  \ \
                      / /      | |   \ \
               +---+--+    +-`.+--+    +--+----+
               | ToR  |    | ToR  |    | ToR   |
               +-+--`.+    +-+-`.-+    +-+--+--+
                .'    \     .'    \     .'    `.
            __/_      _i./       i./_        _\__
          '--------'  '--------'   '--------'  '--------'
          :  End   :  :  End   :   :  End   :  :  End   :
          : Device :  : Device :   : Device :  : Device :
          '--------'  '--------'   '--------'  '--------'

          Figure 1 : A Generic Architecture for Data Centers

An example of multi-tier DC network architecture is presented in
this figure. It provides a view of physical components inside a DC.

A cloud network is composed of intra-Data Center (DC) networks and
network services, and inter-DC network and network connectivity
services. Depending upon the scale, DC distribution, operations
model, Capex and Opex aspects, DC networking elements can act as
strict L2 switches and/or provide IP routing capabilities, including
also service virtualization.

In some DC architectures, it is possible that some tier layers
providing L2 and/or L3 services are collapsed, and that Internet
connectivity, inter-DC connectivity and VPN support are handled by a
smaller number of nodes. Nevertheless, one can assume that the
functional blocks fit with the architecture above.

The following components can be present in a DC:

o End Device: a DC resource to which the networking service is
  provided. An End Device may be a compute resource (server or
  server blade), storage component or a network appliance
  (firewall, load-balancer, IPsec gateway). Alternatively, the
  End Device may include software based networking functions used
  to interconnect multiple hosts. An example of soft networking
  is the virtual switch in the server blades, used to
  interconnect multiple virtual machines (VMs). End Devices may
  be single or multi-homed to the Top of Rack switches (ToRs).

o Top of Rack (ToR): Hardware-based Ethernet switch aggregating
  all Ethernet links from the End Devices in a rack, representing
  the entry point in the physical DC network for the hosts. ToRs
  may also provide routing functionality, virtual IP network
  connectivity, or Layer2 tunneling over IP for instance. ToRs
  are usually multi-homed to switches in the Intra-DC network.
  Other deployment scenarios may use an intermediate Blade Switch
  before the ToR, or an EoR (End of Row) switch, to provide a
  similar function to a ToR.

o Intra-DC Network: High capacity network composed of core
  switches aggregating multiple ToRs. Core switches are usually
  Ethernet switches but can also support routing capabilities.

o DC GW: Gateway to the outside world providing DC Interconnect
  and connectivity to Internet and VPN customers. In the current
  DC network model, this may be simply a Router connected to the
  Internet and/or an IPVPN/L2VPN PE. Some network implementations
  may dedicate DC GWs for different connectivity types (e.g., a
  DC GW for Internet, and another for VPN).

1.4. Tenant networking view

The DC network architecture is used to provide L2 and/or L3 service
connectivity to each tenant. An example is depicted in Figure 2:

               +----- L3 Infrastructure ----+
               |                            |
            ,--+-'.                      ;--+--.
       .....  Rtr1 )......            .  Rtr2  )
       |     '-----'     |            |  '-----'
       |  Tenant1        |LAN12       Tenant1|
       |LAN11        ....|........           |LAN13
     '':'''''''':'   |           |    '':'''''''':'
     ,'.        ,'.  ,+.        ,+.   ,'.        ,'.
    (VM )....(VM )  (VM )...  (VM )  (VM )....(VM )
     `-'        `-'  `-'        `-'   `-'        `-'

     Figure 2 : Logical Service connectivity for a single tenant

In this example one or more L3 contexts and one or more LANs (e.g.,
one per application type) running on DC switches are assigned for DC
tenant 1.

For a multi-tenant DC, a virtualized version of this type of service
connectivity needs to be provided for each tenant by the Network
Virtualization solution.

2. Reference Models

2.1. Generic Reference Model

The following diagram shows a DC reference model for network
virtualization using Layer3 overlays where edge devices provide a
logical interconnect between Tenant End Systems that belong to a
specific tenant network.

      +--------+                                  +--------+
      | Tenant |                                  | Tenant |
      |  End   +--+                          +----|  End   |
      | System |  |                          |    | System |
      +--------+  |  ...................     |    +--------+
                  |  +-+--+         +--+-+   |
                  |  | NV |         | NV |   |
                  +--|Edge|         |Edge|---+
                     +-+--+         +--+-+
                     / .   L3 Overlay  . \
      +--------+    /  .    Network   .   \     +--------+
      | Tenant +---+   .              .    +----| Tenant |
      |  End   |       .              .         |  End   |
      | System |       .   +----+     .         | System |
      +--------+       ....| NV |......         +--------+
                           |Edge|
                           +----+
                             |
                             |
                         +--------+
                         | Tenant |
                         |  End   |
                         | System |
                         +--------+

  Figure 3 : Generic reference model for DC network virtualization
                   over a Layer3 infrastructure

The functional components in this picture do not necessarily map
directly with the physical components described in Figure 1.

For example, an End Device can be a server blade with VMs and a
virtual switch, i.e. the VM is the Tenant End System and the NVE
functions may be performed by the virtual switch and/or the
hypervisor.

Another example is the case where an End Device can be a traditional
physical server (no VMs, no virtual switch), i.e. the server is the
Tenant End System and the NVE functions may be performed by the ToR.
Other End Devices in this category are Physical Network Appliances
or Storage Systems.

A Tenant End System attaches to a Network Virtualization Edge (NVE)
node, either directly or via a switched network (typically
Ethernet).

The NVE implements network virtualization functions that allow for
L2 and/or L3 tenant separation and for hiding tenant addressing
information (MAC and IP addresses), tenant-related control plane
activity and service contexts from the Routed Backbone nodes.

Core nodes utilize L3 techniques to interconnect NVE nodes in
support of the overlay network. These devices perform forwarding
based on the outer L3 tunnel header, and generally do not maintain
per tenant-service state, albeit some applications (e.g., multicast)
may require control plane or forwarding plane information that
pertains to a tenant, group of tenants, tenant service or a set of
services that belong to one or more tunnels. When such tenant or
tenant-service related information is maintained in the core,
overlay virtualization provides knobs to control that information.

2.2. NVE Reference Model

The NVE is composed of a tenant service instance that Tenant End
Systems interface with and an overlay module that provides tunneling
overlay functions (e.g. encapsulation/decapsulation of tenant
traffic from/to the tenant forwarding instance, tenant
identification and mapping, etc), as described in Figure 4:

                    +------- L3 Network ------+
                    |                         |
                    |       Tunnel Overlay    |
       +------------+---------+      +---------+------------+
       | +----------+-------+ |      | +---------+--------+ |
       | |  Overlay Module  | |      | |  Overlay Module  | |
       | +---------+--------+ |      | +---------+--------+ |
       |           |VN context|      | VN context|          |
       |           |          |      |           |          |
       |  +--------+-------+  |      |  +--------+-------+  |
       |  | |VNI|  .  |VNI| | |      |  | |VNI|  .  |VNI| | |
  NVE1 |  +-+------------+-+  |      |  +-+-----------+--+  | NVE2
       |    |    VAPs    |    |      |    |   VAPs    |     |
       +----+------------+----+      +----+-----------+-----+
            |            |                |           |
       -----+------------+----------------+-----------+-----
            |            |     Tenant     |           |
            |            |   Service IF   |           |
           Tenant End Systems            Tenant End Systems

           Figure 4 : Generic reference model for NV Edge

Note that some NVE functions (e.g. data plane and control plane
functions) may reside in one device or may be implemented separately
in different devices.

For example, the NVE functionality could reside solely on the End
Devices, on the ToRs or on both the End Devices and the ToRs. In the
latter case we say that the End Device NVE component acts as the NVE
spoke, and the ToRs act as NVE hubs. Tenant End Systems will
interface with the tenant service instances maintained on the NVE
spokes, and tenant service instances maintained on the NVE spokes
will interface with the tenant service instances maintained on the
NVE hubs.

2.3. NVE Service Types

NVE components may be used to provide different types of virtualized
service connectivity. This section defines the service types and
associated attributes.

2.3.1. L2 NVE providing Ethernet LAN-like service

An L2 NVE implements Ethernet LAN emulation (ELAN), an Ethernet
based multipoint service where the Tenant End Systems appear to be
interconnected by a LAN environment over a set of L3 tunnels. It
provides a per-tenant virtual switching instance with MAC addressing
isolation and L3 tunnel encapsulation across the core.

2.3.2. L3 NVE providing IP/VRF-like service

Virtualized IP routing and forwarding is similar, from a service
definition perspective, to IETF IP VPNs (e.g., BGP/MPLS IPVPN and
IPsec VPNs). It provides a per-tenant routing instance with
addressing isolation and L3 tunnel encapsulation across the core.

3. Functional components

This section breaks down the Network Virtualization architecture
into functional components to make it easier to discuss solution
options for different modules.

This version of the document gives an overview of generic functional
components that are shared between L2 and L3 service types. Details
specific to each service type will be added in future revisions.

3.1. Generic service virtualization components

A Network Virtualization solution is built around a number of
functional components as depicted in Figure 5:

                    +------- L3 Network ------+
                    |                         |
                    |       Tunnel Overlay    |
       +------------+--------+       +--------+------------+
       | +----------+------+ |       | +------+----------+ |
       | | Overlay Module  | |       | | Overlay Module  | |
       | +--------+--------+ |       | +--------+--------+ |
       |          |VN Context|       |          |VN Context|
       |          |          |       |          |          |
       |  +-------+-------+  |       |  +-------+-------+  |
       |  ||VNI| ... |VNI||  |       |  ||VNI| ... |VNI||  |
  NVE1 |  +-+-----------+-+  |       |  +-+-----------+-+  | NVE2
       |    |   VAPs    |    |       |    |   VAPs    |    |
       +----+-----------+----+       +----+-----------+----+
            |           |                 |           |
       -----+-----------+-----------------+-----------+-----
            |           |     Tenant      |           |
            |           |   Service IF    |           |
           Tenant End Systems            Tenant End Systems

           Figure 5 : Generic reference model for NV Edge

3.1.1. Virtual Access Points (VAPs)

Tenant End Systems are connected to the VNI instance through Virtual
Access Points (VAPs). The VAPs can in practice be physical ports on
a ToR or virtual ports identified through logical interface
identifiers (e.g., VLANs, an internal VSwitch Interface ID leading
to a VM).

3.1.2. Virtual Network Instance (VNI)

The VNI represents a set of configuration attributes defining access
and tunnel policies and (L2 and/or L3) forwarding functions.

Per-tenant FIB tables and control plane protocol instances are used
to maintain separate private contexts between tenants. Hence tenants
are free to use their own addressing schemes without concerns about
address overlapping with other tenants.
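
As a non-normative illustration of this isolation property, the
following Python sketch shows per-VNI forwarding state; all types
and names are hypothetical and are not defined by this framework:

   # Non-normative sketch of per-VNI forwarding state.
   class Vni:
       """One Virtual Network Instance: its own FIB and policies."""
       def __init__(self, vn_context):
           self.vn_context = vn_context  # VN Context used on the wire
           self.fib = {}                 # tenant address -> next hop

   # Two tenants may use the same addresses because every lookup is
   # scoped to the VNI selected by the ingress VAP.
   vni_tenant_a = Vni(vn_context=1001)
   vni_tenant_b = Vni(vn_context=1002)

   vni_tenant_a.fib["10.0.0.5"] = "vap-3"   # local VAP
   vni_tenant_b.fib["10.0.0.5"] = "nve2"    # tunnel to a remote NVE

   def forward(vni, tenant_dst):
       # The same tenant address resolves differently in each VNI.
       return vni.fib.get(tenant_dst)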

3.1.3. Overlay Modules and VN Context

Mechanisms for identifying each tenant service are required to allow
the simultaneous overlay of multiple tenant services over the same
underlay L3 network topology. In the data plane, each NVE, upon
sending a tenant packet, must be able to encode the VN Context for
the destination NVE in addition to the L3 tunnel source address
identifying the source NVE and the tunnel destination L3 address
identifying the destination NVE. This allows the destination NVE to
identify the tenant service instance and therefore appropriately
process and forward the tenant packet.

The Overlay module provides tunneling overlay functions: tunnel
initiation/termination, encapsulation/decapsulation of frames from
VAPs/L3 Backbone and may provide for transit forwarding of IP
traffic (e.g., transparent tunnel forwarding).

In a multi-tenant context, the tunnel aggregates frames from/to
different VNIs. Tenant identification and traffic demultiplexing are
based on the VN Context (e.g. VNID).

The following approaches can be considered:

o One VN Context per Tenant: A globally unique (on a per-DC
  administrative domain) VNID is used to identify the related
  Tenant instances. An example of this approach is the use of
  IEEE VLAN or ISID tags to provide virtual L2 domains.

o One VN Context per VNI: A per-tenant local value is
  automatically generated by the egress NVE and usually
  distributed by a control plane protocol to all the related
  NVEs. An example of this approach is the use of per-VRF MPLS
  labels in IP VPN [RFC4364].

o One VN Context per VAP: A per-VAP local value is assigned and
  usually distributed by a control plane protocol. An example of
  this approach is the use of per CE-PE MPLS labels in IP VPN
  [RFC4364].

Note that when using one VN Context per VNI or per VAP, an
additional global identifier may be used by the control plane to
identify the Tenant context.

3.1.4. Tunnel Overlays and Encapsulation options

Once the VN context is added to the frame, an L3 Tunnel
encapsulation is used to transport the frame to the destination NVE.
The backbone devices do not usually keep any per-service state,
simply forwarding the frames based on the outer tunnel header.

Different IP tunneling options (e.g., GRE, L2TP, IPsec) and other
tunneling options (e.g., BGP VPN, PW, VPLS) are available for both
Ethernet and IP formats.
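
A minimal, non-normative sketch of this encapsulation behavior
follows; the header fields and function names are illustrative
placeholders, not a defined NVO3 format:

   # Non-normative sketch of NVE data plane encap/decap.
   def encapsulate(tenant_frame, local_nve_ip, remote_nve_ip,
                   vn_context):
       outer = {
           "outer_src": local_nve_ip,   # identifies the source NVE
           "outer_dst": remote_nve_ip,  # identifies the egress NVE
           "vn_context": vn_context,    # selects the VNI at egress
       }
       return outer, tenant_frame       # core forwards on 'outer' only

   def decapsulate(outer, tenant_frame, vni_table):
       # The egress NVE maps the VN Context to the tenant service
       # instance and processes the frame in that context.
       vni = vni_table[outer["vn_context"]]
       return vni, tenant_frame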

3.1.5. Control Plane Components

Control plane components may be used to provide the following
capabilities:

o Auto-provisioning/Service discovery

o Address advertisement and tunnel mapping

o Tunnel management

A control plane component can be an on-net control protocol or a
management control entity.

3.1.5.1. Auto-provisioning/Service discovery

NVEs must be able to select the appropriate VNI for each Tenant End
System. This is based on state information that is often provided by
external entities. For example, in a VM environment, this
information is provided by compute management systems, since these
are the only entities that have visibility on which VM belongs to
which tenant.

A mechanism for communicating this information between Tenant End
Systems and the local NVE is required. As a result, the VAPs are
created and mapped to the appropriate Tenant Instance.

Depending upon the implementation, this control interface can be
implemented using an auto-discovery protocol between Tenant End
Systems and their local NVE or through management entities.

When a protocol is used, appropriate security and authentication
mechanisms to verify that Tenant End System information is not
spoofed or altered are required. This is one critical aspect for
providing integrity and tenant isolation in the system.

Another control plane protocol can also be used to advertise the NVE
tenant service instance (tenant and service type provided to the
tenant) to other NVEs. Alternatively, management control entities
can also be used to perform these functions.
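
As a non-normative illustration, the sketch below shows an NVE
reacting to a hypothetical attachment notification from a compute
management system by creating a VAP and mapping it to the
appropriate VNI; the message fields and methods are assumptions:

   # Non-normative sketch: auto-provisioning on end-system attach.
   def on_end_system_attach(nve, notification):
       # e.g. notification = {"port": "vport-7", "tenant": "A",
       #                      "vn": "web-tier"}
       vni = nve.find_or_create_vni(notification["tenant"],
                                    notification["vn"])
       vap = nve.create_vap(notification["port"])
       nve.bind(vap, vni)   # frames on this VAP now use this VNI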

3.1.5.2. Address advertisement and tunnel mapping

As traffic reaches an ingress NVE, a lookup is performed to
determine which tunnel the packet needs to be sent on. It is then
encapsulated with a tunnel header containing the destination address
of the egress overlay node. Intermediate nodes (between the ingress
and egress NVEs) switch or route traffic based upon the outer
destination address.

One key step in this process consists of mapping a final destination
address to the proper tunnel. NVEs are responsible for maintaining
such mappings in their lookup tables. Several ways of populating
these lookup tables are possible: control plane driven, management
plane driven, or data plane driven.

When a control plane protocol is used to distribute address
advertisement and tunneling information, the auto-
provisioning/Service discovery could be accomplished by the same
protocol. In this scenario, the auto-provisioning/Service discovery
could be combined with (be inferred from) the address advertisement
and tunnel mapping. Furthermore, a control plane protocol that
carries both MAC and IP addresses eliminates the need for ARP, and
hence addresses one of the issues with explosive ARP handling.
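
A non-normative sketch of the control plane driven case follows, in
which hypothetical advertisement messages populate the per-VN-
Context mapping tables consulted by the ingress lookup:

   # Non-normative sketch: control plane driven table population.
   def on_advertisement(nve, adv):
       # adv announces addresses reachable behind a remote NVE, e.g.
       # {"vn_context": 1001, "addresses": ["00:1b:44:11:3a:b7"],
       #  "nve_ip": "192.0.2.2"}
       table = nve.mappings.setdefault(adv["vn_context"], {})
       for addr in adv["addresses"]:
           table[addr] = adv["nve_ip"]  # tunnel endpoint for address

   def lookup_tunnel(nve, vn_context, tenant_dst):
       # Ingress lookup: tenant destination -> egress NVE address.
       return nve.mappings.get(vn_context, {}).get(tenant_dst)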

3.1.5.3. Tunnel management

A control plane protocol may be required to exchange tunnel state
information. This may include setting up tunnels and/or providing
tunnel state information.

This applies to both unicast and multicast tunnels.

For instance, it may be necessary to provide active/standby status
information between NVEs, up/down status information,
pruning/grafting information for multicast tunnels, etc.

3.2. Service Overlay Topologies

A number of service topologies may be used to optimize the service
connectivity and to address NVE performance limitations.

The topology described in Figure 3 suggests the use of a tunnel mesh
between the NVEs where each tenant instance is one hop away from a
service processing perspective. Partial mesh topologies and an NVE
hierarchy may be used where certain NVEs may act as service transit
points.

4. Key aspects of overlay networks

The intent of this section is to highlight specific issues that
proposed overlay solutions need to address.

4.1. Pros & Cons

An overlay network is a layer of virtual network topology on top of
the physical network.

Overlay networks offer the following key advantages:

o Unicast tunneling state management is handled at the edge of
  the network. Intermediate transport nodes are unaware of such
  state. Note that this is not the case when multicast is enabled
  in the core network.

o Tunnels are used to aggregate traffic and hence offer the
  advantage of minimizing the amount of forwarding state required
  within the underlay network.

o Decoupling of the overlay addresses (MAC and IP) used by VMs
  from the underlay network. This offers a clear separation
  between addresses used within the overlay and the underlay
  networks and it enables the use of overlapping address spaces
  by Tenant End Systems.

o Support of a large number of virtual network identifiers.
o Tunnels are used to aggregate traffic and hence offer the Overlay networks also create several challenges:
advantage of minimizing the amount of forwarding state required
within the underlay network
o Decoupling of the overlay addresses (MAC and IP) used by VMs o Overlay networks have no controls of underlay networks and lack
from the underlay network. This offers a clear separation critical network information
between addresses used within the overlay and the underlay o Overlays typically probe the network to measure link
networks and it enables the use of overlapping addresses spaces properties, such as available bandwidth or packet loss
by Tenant End Systems rate. It is difficult to accurately evaluate network
properties. It might be preferable for the underlay
network to expose usage and performance information.
o Support of a large number of virtual network identifiers o Miscommunication between overlay and underlay networks can lead
to an inefficient usage of network resources.
Overlay networks also create several challenges: o Fairness of resource sharing and collaboration among end-nodes
in overlay networks are two critical issues
o Overlay networks have no controls of underlay networks and lack o When multiple overlays co-exist on top of a common underlay
critical network information network, the lack of coordination between overlays can lead to
performance issues.
o Overlays typically probe the network to measure link o Overlaid traffic may not traverse firewalls and NAT devices.
properties, such as available bandwidth or packet loss
rate. It is difficult to accurately evaluate network
properties. It might be preferable for the underlay
network to expose usage and performance information.
o Miscommunication between overlay and underlay networks can lead o Multicast service scalability. Multicast support may be
to an inefficient usage of network resources. required in the overlay network to address for each tenant
flood containment or efficient multicast handling.
o Fairness of resource sharing and collaboration among end-nodes o Hash-based load balancing may not be optimal as the hash
in overlay networks are two critical issues algorithm may not work well due to the limited number of
combinations of tunnel source and destination addresses
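
   As a purely illustrative sketch of this last point (not tied to any
   specific encapsulation proposal), the Python fragment below compares
   ECMP spreading when the underlay hashes only the outer tunnel
   addresses against the case where the outer UDP source port carries
   per-flow entropy derived from the inner headers, a technique used by
   several encapsulations. The numbers of NVEs, flows, and links are
   arbitrary assumptions.

      import random
      import zlib
      from collections import Counter

      NUM_LINKS = 8      # ECMP fan-out in the underlay (assumed)
      NUM_NVES  = 4      # few tunnel endpoints -> few address pairs
      NUM_FLOWS = 10000  # inner (tenant) flows

      def ecmp_bucket(key: bytes) -> int:
          # Pick an ECMP link from a hash of the given header fields.
          return zlib.crc32(key) % NUM_LINKS

      naive, entropic = Counter(), Counter()
      for _ in range(NUM_FLOWS):
          src_nve = random.randrange(NUM_NVES)
          dst_nve = random.randrange(NUM_NVES)
          inner_flow = random.getrandbits(32)  # stands in for the
                                               # inner 5-tuple

          # Hashing only outer addresses: at most NUM_NVES^2 keys.
          naive[ecmp_bucket(f"{src_nve}-{dst_nve}".encode())] += 1

          # Outer UDP source port derived from the inner flow.
          sport = 49152 + (inner_flow % 16384)
          key = f"{src_nve}-{dst_nve}-{sport}".encode()
          entropic[ecmp_bucket(key)] += 1

      print("outer addresses only :", sorted(naive.values()))
      print("with per-flow entropy:", sorted(entropic.values()))

   With only outer addresses, at most 16 distinct hash keys exist in
   this example, so some links may be left idle or heavily loaded;
   per-flow entropy spreads traffic close to evenly.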
4.2. Overlay issues to consider

4.2.1. Data plane vs Control plane driven

   In the case of an L2 NVE, it is possible to dynamically learn MAC
   addresses against VAPs. It is also possible for such addresses to
   be known and controlled via management or a control protocol, for
   both L2 NVEs and L3 NVEs.

   Dynamic data plane learning implies that flooding of unknown
   destinations be supported and hence that broadcast and/or multicast
   be supported. Multicasting in the core network for dynamic learning
   may lead to significant scalability limitations. Specific
   forwarding rules must be enforced to prevent loops from happening.
   This can be achieved using a spanning tree, a shortest path tree,
   or a split-horizon mesh.

   It should be noted that the amount of state to be distributed is
   dependent upon network topology and the number of virtual machines.
   Different forms of caching can also be utilized to minimize state
   distribution between the various elements.
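
   As a minimal sketch of the data plane driven approach (illustrative
   only; the class and port names are assumptions, not part of any
   proposed protocol), an L2 NVE can learn source MAC addresses
   against the ingress port, flood frames with unknown destinations,
   and apply split-horizon so that a frame received over a tunnel is
   never re-flooded to other tunnels:

      class L2NVE:
          """Toy L2 NVE: MAC learning with split-horizon flooding."""

          def __init__(self, vaps, tunnels):
              self.vaps = set(vaps)        # local Virtual Access Points
              self.tunnels = set(tunnels)  # tunnels to remote NVEs
              self.fib = {}                # MAC -> port (VAP or tunnel)

          def receive(self, frame, in_port):
              # Learn the source MAC against the ingress port.
              self.fib[frame["src"]] = in_port
              out = self.fib.get(frame["dst"])
              if out is not None:
                  return [out]
              # Unknown destination: flood. Split-horizon means frames
              # received from a tunnel are only flooded to local VAPs,
              # never back into the tunnel mesh, which prevents loops
              # without requiring a spanning tree.
              if in_port in self.tunnels:
                  return list(self.vaps)
              return [p for p in (self.vaps | self.tunnels)
                      if p != in_port]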
4.2.2. Coordination between data plane and control plane

   For an L2 NVE, the NVE needs to be able to determine the MAC
   addresses of the end systems present on a VAP (for instance, data
   plane learning may be relied upon for this purpose). For an L3 NVE,
   the NVE needs to be able to determine the IP addresses of the end
   systems present on a VAP.

   In both cases, coordination with the NVE control protocol is needed
   such that when the NVE determines that the set of addresses behind
   a VAP has changed, it triggers the local NVE control plane to
   distribute this information to its peers.
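
   This coordination can be sketched as a simple reconciliation step
   (the publish callback stands in for whatever NVE control protocol
   is eventually defined; all names are illustrative):

      def sync_vap(nve_state, vap, current_addresses, publish):
          # nve_state: dict mapping VAP -> set of addresses last
          # advertised. current_addresses: addresses now determined
          # (learned or configured) behind the VAP.
          previous = nve_state.get(vap, set())
          added = current_addresses - previous
          removed = previous - current_addresses
          if added:
              publish("advertise", vap, sorted(added))
          if removed:
              publish("withdraw", vap, sorted(removed))
          nve_state[vap] = set(current_addresses)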
4.2.3. Handling Broadcast, Unknown Unicast and Multicast (BUM) traffic

   There are two techniques to support the packet replication needed
   for broadcast, unknown unicast and multicast:

   o Ingress replication

   o Use of core multicast trees

   There is a bandwidth vs state trade-off between the two approaches.
   Depending upon the degree of replication required (i.e. the number
   of hosts per group) and the amount of multicast state to maintain,
   trading bandwidth for state should be considered.

   When the number of hosts per group is large, the use of core
   multicast trees may be more appropriate. When the number of hosts
   is small (e.g. 2-3), ingress replication may not be an issue.

   Depending upon the size of the data center network, and hence the
   number of (S,G) entries, but also the duration of multicast flows,
   the use of core multicast trees can be a challenge.

   When flows are well known, it is possible to pre-provision such
   multicast trees. However, it is often difficult to predict
   application flows ahead of time, and hence programming of (S,G)
   entries for short-lived flows could be impractical.

   A possible trade-off is to use shared multicast trees in the core,
   as opposed to dedicated multicast trees.
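
   The trade-off can be made concrete with a deliberately simplistic
   cost model (all parameters and names are illustrative): ingress
   replication sends one unicast copy per remote NVE and keeps no
   multicast state in the core, while a core tree sends a single copy
   at the cost of per-group (S,G) state in core nodes.

      def replication_cost(num_remote_nves, num_groups, core_nodes):
          # Ingress replication: bandwidth grows with group size,
          # but the core holds no multicast state.
          ingress = {"copies_sent": num_remote_nves, "core_state": 0}
          # Core multicast tree: one copy enters the tree, but each
          # group may install (S,G) state on many core nodes (a rough
          # upper bound is used here).
          tree = {"copies_sent": 1,
                  "core_state": num_groups * core_nodes}
          return {"ingress_replication": ingress,
                  "core_multicast_tree": tree}

      # E.g. a group with only 3 member NVEs barely justifies a tree:
      print(replication_cost(num_remote_nves=3, num_groups=1000,
                             core_nodes=20))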
4.2.4. Path MTU

   When using overlay tunneling, an outer header is added to the
   original frame. This can cause the MTU of the path to the egress
   tunnel endpoint to be exceeded.

   In this section, we will only consider the case of an IP overlay.

   It is usually not desirable to rely on IP fragmentation, for
   performance reasons. Ideally, the interface MTU as seen by a Tenant
   End System is adjusted such that no fragmentation is needed. TCP
   will adjust its maximum segment size accordingly.

   It is possible for the MTU to be configured manually or to be
   discovered dynamically. Various Path MTU discovery techniques exist
   in order to determine the proper MTU size to use:

   o Classical ICMP-based Path MTU Discovery [RFC1191] [RFC1981]

      o Tenant End Systems rely on ICMP messages to discover the MTU
        of the end-to-end path to their destination. This method is
        not always usable, such as when traversing middleboxes (e.g.
        firewalls) which disable ICMP for security reasons.

   o Extended Path MTU Discovery techniques such as defined in
     [RFC4821]

   It is also possible to rely on the overlay layer to perform
   segmentation and reassembly operations, without relying on the
   Tenant End Systems to know about the end-to-end MTU. The assumption
   is that some hardware assist is available on the NVE node to
   perform such SAR operations. However, fragmentation by the overlay
   layer can lead to performance and congestion issues due to TCP
   dynamics and might require new congestion avoidance mechanisms from
   the underlay network [FLOYD].

   Finally, the underlay network may be designed in such a way that
   the MTU can accommodate the extra tunnel overhead.
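
   The adjustment described above is simple arithmetic: the tenant-
   facing MTU can be at most the underlay path MTU minus the total
   encapsulation overhead, and the TCP MSS follows from it. The
   overhead values in this sketch (outer IPv4, outer UDP, a generic
   8-byte overlay header, inner Ethernet) are illustrative
   assumptions; actual encapsulations differ.

      OUTER_IPV4  = 20   # bytes; 40 for an IPv6 underlay
      OUTER_UDP   = 8
      OVERLAY_HDR = 8    # assumed generic overlay header size
      INNER_ETH   = 14   # inner Ethernet header carried in the tunnel

      def tenant_mtu(underlay_mtu):
          # Largest tenant IP packet that fits without fragmentation.
          return underlay_mtu - (OUTER_IPV4 + OUTER_UDP +
                                 OVERLAY_HDR + INNER_ETH)

      def tcp_mss(underlay_mtu):
          # MSS = tenant MTU minus inner IPv4 (20) and TCP (20)
          # headers.
          return tenant_mtu(underlay_mtu) - 40

      print(tenant_mtu(1500))  # 1450 with these assumptions
      print(tcp_mss(1500))     # 1410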
4.2.5. NVE location trade-offs

   In the case of DC traffic, traffic originated from a VM is native
   Ethernet traffic. This traffic can be switched by a local VM switch
   or ToR switch and then by a DC gateway. The NVE function can be
   embedded within any of these elements.

   There are several criteria to consider when deciding where the NVE
   processing boundary happens:

   o Processing and memory requirements

      o Datapath (e.g. lookups, filtering,
        encapsulation/decapsulation)

      o Control plane processing (e.g. routing, signaling, OAM)

   o FIB/RIB size

   o Multicast support

      o Routing protocols

      o Packet replication capability

   o Fragmentation support

   o QoS transparency

   o Resiliency
4.2.6. Interaction between network overlays and underlays

   When multiple overlays co-exist on top of a common underlay
   network, this can cause performance issues. These overlays have
   partially overlapping paths and nodes.

   Each overlay is selfish by nature, in that it sends traffic so as
   to optimize its own performance without considering the impact on
   other overlays, unless the underlay tunnels are traffic-engineered
   on a per-overlay basis so as to avoid sharing underlay resources.

   Better visibility between overlays and underlays can be achieved by
   providing mechanisms to exchange information about:

   o Performance metrics (throughput, delay, loss, jitter)

   o Cost metrics
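
   As a minimal illustration of such an exchange (the record layout
   and selection policy are purely assumptions), an underlay could
   expose per-path metrics that an overlay consults when selecting
   among candidate tunnels, instead of probing the network itself:

      from dataclasses import dataclass

      @dataclass
      class PathMetrics:
          throughput_mbps: float
          delay_ms: float
          loss_pct: float
          jitter_ms: float
          cost: float  # operator-assigned cost metric

      def pick_tunnel(candidates):
          # candidates: dict of tunnel-id -> PathMetrics exposed by
          # the underlay. Prefer low loss, then low delay, then cost.
          return min(candidates,
                     key=lambda t: (candidates[t].loss_pct,
                                    candidates[t].delay_ms,
                                    candidates[t].cost))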
5. Security Considerations

   The tenant to overlay mapping function can introduce significant
   security risks if appropriate protocols are not used that can
   support mutual authentication.

   No other new security issues are introduced beyond those described
   already in the related L2VPN and L3VPN RFCs.

6. IANA Considerations

   IANA does not need to take any action for this draft.
7. References

7.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

7.2. Informative References

   [NVOPS]   Narten, T. et al., "Problem Statement: Overlays for
             Network Virtualization", draft-narten-nvo3-overlay-
             problem-statement (work in progress).

   [OVCPREQ] Kreeger, L. et al., "Network Virtualization Overlay
             Control Protocol Requirements", draft-kreeger-nvo3-
             overlay-cp (work in progress).

   [FLOYD]   Floyd, S. and A. Romanow, "Dynamics of TCP Traffic over
             ATM Networks", IEEE JSAC, Vol. 13, No. 4, May 1995.

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC1191] Mogul, J., "Path MTU Discovery", RFC 1191, November 1990.

   [RFC1981] McCann, J. et al., "Path MTU Discovery for IPv6",
             RFC 1981, August 1996.

   [RFC4821] Mathis, M. et al., "Packetization Layer Path MTU
             Discovery", RFC 4821, March 2007.
8. Acknowledgments

   In addition to the authors the following people have contributed to
   this document:

   Dimitrios Stiliadis, Rotem Salomonovitch, Alcatel-Lucent

   This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

   Marc Lasserre
   Alcatel-Lucent
   Email: marc.lasserre@alcatel-lucent.com

   Florin Balus
   Alcatel-Lucent
   777 E. Middlefield Road
   Mountain View, CA, USA 94043
   Email: florin.balus@alcatel-lucent.com

   Thomas Morin
   France Telecom Orange
   Email: thomas.morin@orange.com

   Nabil Bitar
   Verizon
   40 Sylvan Road
   Waltham, MA 02145
   Email: nabil.bitar@verizon.com

   Yakov Rekhter
   Juniper
   Email: yakov@juniper.net