Network Working Group                                          L. Yong
Internet Draft                                                   Huawei
Category: Informational                                          M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                        Hewlett-Packard
                                                              L. Dunbar
                                                                 Huawei
Expires: November 2013                                      May 1, 2013

           Use Cases for DC Network Virtualization Overlays

                      draft-ietf-nvo3-use-case-01
Abstract
This document describes DC NVO3 use cases that may be deployed in various data centers and apply to different applications. An application in a DC may be a combination of several of the use cases described here.
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.
This Internet-Draft will expire in November 2013.
Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
Table of Contents

1. Introduction
   1.1. Contributors
   1.2. Terminology
2. Basic Virtual Networks in a Data Center
3. Interconnecting DC Virtual Network and External Networks
   3.1. DC Virtual Network Access via Internet
   3.2. DC VN and Enterprise Sites interconnected via SP WAN
4. DC Applications Using NVO3
   4.1. Supporting Multi Technologies and Applications in a DC
   4.2. Tenant Network with Multi-Subnets or across multi DCs
   4.3. Virtual Data Center (vDC)
5. OAM Considerations
6. Summary
7. Security Considerations
8. IANA Considerations
9. Acknowledgements
10. References
   10.1. Normative References
   10.2. Informative References
Authors' Addresses
1. Introduction
Server virtualization has changed the IT industry in terms of efficiency, cost, and the speed of providing new applications and/or services. However, the problems in today's data center networks hinder the support of elastic cloud services and dynamic virtual tenant networks [NVO3PRBM]. The goal of DC Network Virtualization Overlays (NVO3) is to decouple the communication among tenant systems from the DC physical networks and to allow one physical network infrastructure to provide: 1) traffic isolation among tenant virtual networks over the same physical network; 2) an independent address space in each virtual network, isolated from the infrastructure's; and 3) flexible VM placement and movement from one server to another without any physical network limitations. These characteristics will help address the issues that hinder true virtualization in data centers [NVO3PRBM].
Although NVO3 enables a true virtualization environment, an NVO3 solution has to address the communication between a virtual network and a physical network. This is because 1) many DCs that need to provide network virtualization are currently running over physical networks, and the migration will happen in steps; 2) many DC applications serve Internet users and run directly on physical networks; 3) some applications, such as Big Data analytics, are CPU bound and may not need the virtualization capability.
This document describes general NVO3 use cases that apply to various data centers. The three types of use cases described here are:
o  A virtual network connects many tenant systems within a Data Center and forms one L2 or L3 communication domain. A virtual network segregates its traffic from others and allows the VMs in the network to move from one server to another. This case may be used for DC internal applications that constitute the DC East-West traffic.
o  A DC provider offers a secure DC service to an enterprise customer and/or Internet users. In these cases, the enterprise customer may use a traditional VPN provided by a carrier or an IPsec tunnel over the Internet connecting to an NVO3 network within a provider DC. This mainly constitutes DC North-South traffic.
o  A DC provider may use NVO3 and other network technologies for a tenant network, construct different topologies or zones for the tenant network, and design a variety of cloud applications that require network service appliances, virtual compute, storage, and networking. In this case, NVO3 provides the networking functions for the applications.
The document uses the architecture reference model defined in [NVO3FRWK] to describe the use cases.
1.1. Contributors

Vinay Bannai
PayPal
2211 N. First St,
San Jose, CA 95131
Phone: +1-408-967-7784
Email: vbannai@paypal.com

Ram Krishnan
Brocade Communications
San Jose, CA 95134
Phone: +1-408-406-7890
Email: ramk@brocade.com
1.2. Terminology

This document uses the terminologies defined in [NVO3FRWK] and [RFC4364]. Some additional terms used in the document are listed here.
CPE: Customer Premises Equipment

DMZ: Demilitarized Zone

DNS: Domain Name Service

NAT: Network Address Translation

VIRB: Virtual Integrated Routing/Bridging
3. Basic Virtual Networks in a Data Center Note that a virtual network in this document is a network
virtualization overlay instance.
2. Basic Virtual Networks in a Data Center
A virtual network may exist within a DC. The network enables communication among Tenant Systems (TSs) that are in a Closed User Group (CUG). A TS may be a physical server or a virtual machine (VM) on a server. The Network Virtualization Edge (NVE) may co-exist with Tenant Systems, i.e. on an end device, or reside on a different device, e.g. a top-of-rack switch (ToR). A virtual network has a unique virtual network identifier (locally or globally unique) that allows an NVE to properly differentiate it from other virtual networks.
The TSs attached to the same NVE are not necessarily in the same CUG, i.e. in the same virtual network. Multiple CUGs can be constructed in such a way that policies are enforced when the TSs in one CUG communicate with the TSs in other CUGs. An NVE provides the reachability for Tenant Systems in a CUG, and may also hold the policies and provide the reachability for Tenant Systems in different CUGs (see Section 4.2). Furthermore, in a DC, operators may construct many tenant networks that have no communication at all. In this case, each tenant network may use its own address space. Note that one tenant network may contain one or more CUGs.

A Tenant System may also be configured with multiple addresses and participate in multiple virtual networks, i.e. use a different address in each virtual network. For example, a TS may be a NAT gateway, or a TS may be a firewall server for multiple CUGs.
Network Virtualization Overlay in this context means virtual networks built over the DC infrastructure network via tunnels, i.e. a tunnel between any pair of NVEs. This architecture decouples the tenant system address scheme from the infrastructure address space, which brings great flexibility for VM placement and mobility. It also keeps the transit nodes in the infrastructure unaware of the existence of the virtual networks. One tunnel may carry the traffic belonging to different virtual networks; a virtual network identifier is used for traffic segregation in a tunnel.
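As a purely illustrative sketch (not any NVO3 solution's defined encapsulation; real formats such as VXLAN or NVGRE differ, and all names below are invented), the following Python fragment shows the principle of a per-packet virtual network identifier keeping the traffic of several virtual networks segregated inside one NVE-to-NVE tunnel.

   # Illustrative sketch only: real NVO3 encapsulations (e.g. VXLAN, NVGRE)
   # define their own header formats; the classes and names here are invented.
   from dataclasses import dataclass

   @dataclass
   class OverlayPacket:
       vnid: int           # virtual network identifier in the overlay header
       inner_frame: bytes  # original tenant frame, untouched by the underlay

   def encapsulate(vnid: int, inner_frame: bytes) -> OverlayPacket:
       """Ingress NVE: add the overlay header before sending into the tunnel."""
       return OverlayPacket(vnid, inner_frame)

   def deliver(pkt: OverlayPacket, vn_instances: dict) -> None:
       """Egress NVE: use the VN identifier to hand the frame to the right VNI."""
       vni = vn_instances.get(pkt.vnid)
       if vni is not None:          # unknown virtual network: silently drop
           vni.append(pkt.inner_frame)

   # One tunnel between two NVEs carries two virtual networks; the identifier
   # keeps their traffic segregated.
   vn_instances = {100: [], 200: []}
   for pkt in [encapsulate(100, b"frame-a"), encapsulate(200, b"frame-b")]:
       deliver(pkt, vn_instances)
   print(vn_instances)              # {100: [b'frame-a'], 200: [b'frame-b']}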
A virtual network may be an L2 or L3 domain. An NVE may be a member of several virtual networks, each of which is L2 or L3. A virtual network may carry unicast traffic and/or broadcast/multicast/unknown traffic from/to tenant systems. An NVE may use p2p tunnels or a p2mp tunnel to transport broadcast or multicast traffic, or may use other mechanisms [NVO3MCAST].
It is worth mentioning two distinct cases here. The first is that the TS and the NVE are co-located on the same end device, which means that the NVE can be made aware of the TS state at any time via an internal API. The second is that the TS and the NVE are remotely connected, i.e. connected via a switched network or a point-to-point link. In this case, a protocol is necessary for the NVE to learn the TS state.
One virtual network may have many NVE members, each of which many TSs may attach to. Dynamic TS placement and mobility result in frequent changes of the TS-to-NVE bindings. The TS reachability update mechanism MUST be fast enough not to cause any service interruption. The capability of supporting a large number of TSs in a tenant network and a large number of tenant networks is critical for an NVO3 solution.
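A rough sketch of the binding state this implies is shown below; the update function stands in for whatever control-plane, management-plane, or data-plane mechanism a given solution uses, which this document does not specify, and the keys and values are assumptions for illustration.

   # Hypothetical sketch of TS-to-NVE binding state; no specific NVO3 control
   # plane is implied.
   bindings = {}   # (vnid, ts_address) -> nve_id

   def update_binding(vnid: int, ts_address: str, nve_id: str) -> None:
       """Invoked when a TS is placed on, or moves to, a server behind nve_id."""
       bindings[(vnid, ts_address)] = nve_id

   def egress_nve(vnid: int, ts_address: str):
       """Ingress NVE lookup: which NVE should tunnel traffic toward this TS?"""
       return bindings.get((vnid, ts_address))

   update_binding(100, "10.1.1.5", "NVE1")     # initial placement
   update_binding(100, "10.1.1.5", "NVE2")     # VM moved behind NVE2
   print(egress_nve(100, "10.1.1.5"))          # NVE2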
If a virtual network spans multiple DC sites, one design is to allow the corresponding NVO3 instance to seamlessly span those sites without termination at DC gateway routers. In this case, the tunnel between a pair of NVEs may in turn be tunneled over other intermediate tunnels across the Internet or other WANs, or the intra-DC and inter-DC tunnels may be stitched together to form an end-to-end virtual network across DCs. The latter is described in Section 3.2; Section 4.2 describes other options.
3. Interconnecting DC Virtual Network and External Networks
Customers (an enterprise or individuals) who want to utilize a DC provider's compute and storage resources to run their applications need to access their systems hosted in the DC through the Internet or Service Providers' WANs. A DC provider may construct an NVO3 network to which all the resources designated for a customer connect, and allow the customer to access the systems via that network. This, in turn, becomes the case of interconnecting a DC NVO3 network and external networks via the Internet or WANs. Two cases are described here.
3.1. DC Virtual Network Access via Internet
A user or an enterprise customer connects securely to a DC virtual network via the Internet. Figure 1 illustrates this case. A virtual network is configured on NVE1 and NVE2, and the two NVEs are connected via an L3 tunnel in the Data Center. A set of tenant systems is attached to NVE1 on a server. NVE2 resides on a DC Gateway device; it terminates the tunnel and uses the VNID on the packet to pass the packet to the corresponding VN GW entity on the DC GW. A user or customer can access their systems, i.e. TS1 or TSn, in the DC via the Internet by using an IPsec tunnel [RFC4301]. The IPsec tunnel is established between the VN GW and the user or the CPE at the enterprise edge location. The VN GW provides IPsec functionality such as the authentication scheme and encryption, as well as the mapping to the right virtual network entity on the DC GW. Note that 1) some VN GW functions such as firewall and load balancer may also be performed by locally attached network appliance devices; 2) if the virtual network in the DC uses a different address space than external users, the VN GW also serves the NAT function.
[Figure 1, not reproduced here, shows tenant systems TS1..TSn attached to NVE1 on a server, an L3 tunnel across the DC provider site to NVE2 on the DC GW, the VN GW entity (VNGW1) on the DC GW, and an IPsec tunnel over the Internet between the VN GW and an external user's PC.]

              Figure 1 DC Virtual Network Access via Internet
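The demultiplexing step on the DC GW can be pictured with the short sketch below; this is an assumption-laden illustration (the table contents, customer names, and addresses are invented), and a real VN GW would encrypt and forward over the corresponding IPsec security association rather than simply return a value.

   # Hypothetical illustration of the DC GW demultiplexing step: the VNID on a
   # decapsulated packet selects the VN GW entity and its customer IPsec peer.
   vn_gw_table = {
       100: {"customer": "enterprise-A", "ipsec_peer": "203.0.113.10"},
       200: {"customer": "enterprise-B", "ipsec_peer": "198.51.100.20"},
   }

   def northbound(vnid: int, inner_packet: bytes):
       entry = vn_gw_table.get(vnid)
       if entry is None:
           raise ValueError("no VN GW entity configured for this virtual network")
       # A real VN GW would encrypt inner_packet and send it over the IPsec SA;
       # here we only show the mapping from virtual network to customer tunnel.
       return entry["ipsec_peer"], inner_packet

   print(northbound(100, b"reply-to-remote-user"))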
3.2. DC VN and Enterprise Sites interconnected via SP WAN
An enterprise company may lease some DC provider compute resources to run some applications. For example, the company may run its web applications at DC provider sites but run backend applications in its own DCs. The web applications and backend applications need to communicate privately. The DC provider may construct an NVO3 network to connect all VMs running the enterprise web applications. The enterprise company may buy a p2p private tunnel such as a VPWS from an SP to interconnect its site and the NVO3 network in the provider DC site. A protocol is necessary for exchanging reachability between the two peering points, and the traffic is carried over the tunnel. If an enterprise has multiple sites, it may buy multiple p2p tunnels to form a mesh interconnection among the sites and the DC provider site. This requires each site to peer with all other sites for route distribution.
Another way to achieve multi-site interconnection is to use Service Provider (SP) VPN services, in which each site only peers with the SP PE site. A DC provider and a VPN SP may build an NVO3 network (VN) and a VPN independently. The VN provides the networking for all the related TSs within the provider DC. The VPN interconnects several enterprise sites, i.e. VPN sites. The DC provider and the VPN SP further connect the VN and the VPN at the DC GW/ASBR and the SP PE/ASBR. Several options for the interconnection of the VN and the VPN are described in RFC 4364 [RFC4364]. In Option A with VRF-LITE [VRF-LITE], both the DC GW and the SP PE maintain the routing/forwarding tables and perform the table lookup in forwarding. In Option B, the DC GW and the SP PE do not maintain the forwarding tables; they only maintain the VN and VPN identifier mapping and exchange the identifier on the packet in the forwarding process. In Option C, the DC GW and the SP PE use the same identifier for the VN and the VPN and just perform tunnel stitching, i.e. change the tunnel end points. Each option has pros and cons (see RFC 4364) and has been deployed in SP networks depending on the applications. BGP may be used in these options for route distribution. Note that if the provider DC is the SP's Data Center, the DC GW and the PE in this case may be on one device.
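As a rough illustration of the Option-B-style handoff described above (the identifier values below are invented, and the mapping would in practice be learned or negotiated, e.g. via BGP), the DC GW/ASBR keeps only an identifier mapping and swaps the DC-side VN identifier for the WAN-side VPN label instead of doing a route lookup.

   # Sketch of an Option-B-style handoff; the mapping would in practice be
   # learned/negotiated (e.g. via BGP), and the values here are invented.
   vn_to_vpn_label = {100: 3001, 200: 3002}
   vpn_to_vn = {label: vnid for vnid, label in vn_to_vpn_label.items()}

   def to_wan(vnid: int, payload: bytes):
       """DC GW/ASBR: swap the VN identifier for the WAN VPN label, no route lookup."""
       return vn_to_vpn_label[vnid], payload

   def to_dc(vpn_label: int, payload: bytes):
       """Reverse direction: swap the VPN label back to the DC VN identifier."""
       return vpn_to_vn[vpn_label], payload

   print(to_wan(100, b"pkt"))   # (3001, b'pkt')
   print(to_dc(3001, b"pkt"))   # (100, b'pkt')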
This configuration allows the enterprise networks to communicate with the tenant systems attached to the VN in a provider DC without interfering with the DC provider's underlying physical networks and other virtual networks in the DC. The enterprise may use its own address space on the tenant systems attached to the VN. The DC provider can manage the VMs and storage attached to the VN for the enterprise customer. The enterprise customer can determine and run their applications on the VMs. See Section 4 for more.
The interesting feature in this use case is that the VN and the compute resources are managed by the DC provider. The DC operator can place them at any location without notifying the enterprise and the WAN SP because the DC physical network is completely isolated from the carrier and enterprise networks. Furthermore, the DC operator may move the VMs assigned to the enterprise from one server to another in the DC without the enterprise customer's awareness, i.e. with no impact on the enterprise's 'live' applications running on these resources. Such advanced features bring DC providers great benefits in serving these kinds of applications but also add some requirements for NVO3 [NVO3PRBM].
4. DC Applications Using NVO3
NVO3 brings DC operators the flexibility of designing and deploying different applications in an end-to-end virtualization environment, where the operators need not worry about the constraints of the physical network configuration in the Data Center. A DC provider may use NVO3 in various ways, and also in conjunction with physical networks in the DC, for many reasons. This section highlights some use cases but is not limited to them.
4.1. Supporting Multi Technologies and Applications in a DC
Servers deployed in a large data center are most likely rolled in at different times and may have different capacities/features. Some servers may be virtualized, some may not; some may be equipped with virtual switches, some may not. For the ones equipped with hypervisor-based virtual switches, some may support VxLAN [VXLAN] encapsulation, some may support NVGRE encapsulation [NVGRE], and some may not support any type of encapsulation. To construct a tenant virtual network among these servers and the ToR switches, the operator may construct one virtual network with an overlay and one virtual network without an overlay, or two overlay virtual networks with different implementations. For example, one overlay virtual network uses VxLAN encapsulation while another virtual network without an overlay uses a traditional VLAN, or another overlay virtual network uses NVGRE.
A gateway device or a virtual gateway on a device may be used to connect them. The gateway participates in both virtual networks. It performs the packet encapsulation/decapsulation and may also perform address mapping or translation, etc.
A data center may also be constructed with multi-tier zones. Each zone has different access permissions and runs different applications. For example, a three-tier zone design has a front zone (Web tier) with Web applications, a mid zone (application tier) with service applications such as payment and booking, and a back zone (database tier) with data. External users are only able to communicate with the Web applications in the front zone. In this case, the communication between the zones MUST pass through a security GW/firewall. Network virtualization may be used in each zone. If individual zones use different implementations, the GW needs to support these implementations as well.
4.2. Tenant Network with Multi-Subnets or across multi DCs
A tenant network may contain multiple subnets, and DC operators may construct multiple tenant networks. An access policy for inter-subnet traffic is often necessary. To simplify policy management, the policies may be placed at some designated gateway devices only. Such a design requires that inter-subnet traffic MUST be sent to one of the gateways first for policy checking. However, this may cause traffic hairpinning on a gateway in a DC. It is desirable that an NVE can hold some policies and be able to forward inter-subnet traffic directly. To reduce the NVE burden, a hybrid design may be deployed, i.e. an NVE performs forwarding for selected inter-subnet traffic and the designated GW handles the rest. For example, each NVE performs inter-subnet forwarding within a tenant, and the designated GW is used for inter-subnet traffic from/to different tenant networks.
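A toy decision function for this hybrid design is sketched below; the chosen policy granularity (same tenant routed locally, different tenants via the designated GW) and all names are assumptions made only to illustrate the idea.

   # Toy sketch of the hybrid inter-subnet forwarding decision on an NVE.
   # The policy granularity chosen here is an assumption for illustration.
   DESIGNATED_GW = "GW-NVE"

   def next_hop(src_tenant: str, dst_tenant: str, dst_nve: str) -> str:
       if src_tenant == dst_tenant:
           return dst_nve          # inter-subnet, same tenant: forward directly
       return DESIGNATED_GW        # inter-tenant: policy enforced at the GW

   print(next_hop("tenant-A", "tenant-A", "NVE3"))   # NVE3 (no hairpin)
   print(next_hop("tenant-A", "tenant-B", "NVE7"))   # GW-NVE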
A tenant network may span multiple Data Centers over distance. DC operators may want an L2VN within each DC and an L3VN between DCs for a tenant network. L2 bridging has simplicity and endpoint awareness, while L3 routing has advantages in policy-based routing, aggregation, and scalability. For this configuration, the virtual L2/L3 gateway can be implemented on the DC GW device. Figure 2 illustrates this configuration.
Figure 2 depicts two DC sites. Site A constructs an L2VN with NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers where the tenant systems are created; NVE5 resides on the DC GW device. Site Z has a similar configuration, with NVE3 and NVE4 on the servers and NVE6 on the DC GW. An L3VN is configured between NVE5 at site A and NVE6 at site Z. An internal Virtual Integrated Routing and Bridging (VIRB) function is used between the L2VNIs and the L3VNI on NVE5 and NVE6. The L2VNI is the MAC/NVE mapping table and the L3VNI is the IP prefix/NVE mapping table. A packet arriving at NVE5 from the L2VN will be decapsulated, converted into an IP packet, and then encapsulated and sent to site Z.

Note that both the L2VNs and the L3VN in Figure 2 are encapsulated and carried within the DCs and across the WAN networks, respectively.
[Figure 2, not reproduced here, shows DC Site A with NVE1/S and NVE2/S on servers and NVE5/DCGW on the DC gateway, and DC Site Z with NVE3/S and NVE4/S on servers and NVE6/DCGW on the DC gateway. An L2VN within each site connects the server NVEs and the gateway NVE; a VIRB sits between the L2VNIs and the L3VNI on each gateway; and an L3VN connects NVE5 and NVE6 across the WAN.]

           Figure 2 Tenant Virtual Network with Bridging/Routing
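The lookup chain implied by the VIRB description above can be sketched as follows; the table contents are invented, and a real implementation would hold richer state (encapsulation details, ARP/ND handling, etc.) than shown here.

   # Invented illustration of the VIRB lookup chain on a DC GW NVE:
   # L2VNI = MAC -> NVE mapping, L3VNI = IP prefix -> NVE mapping.
   import ipaddress

   l2vni = {"00:00:5e:00:53:01": "NVE1", "00:00:5e:00:53:02": "NVE2"}
   l3vni = {ipaddress.ip_network("192.0.2.0/24"): "NVE6"}   # site Z prefix

   def forward(dst_mac: str, dst_ip: str, local_gw_mac: str) -> str:
       if dst_mac != local_gw_mac:                 # intra-site L2 forwarding
           return l2vni[dst_mac]
       ip = ipaddress.ip_address(dst_ip)           # addressed to the gateway:
       for prefix, nve in l3vni.items():           # route on the L3VNI instead
           if ip in prefix:
               return nve
       raise LookupError("no route")

   GW_MAC = "00:00:5e:00:53:ff"
   print(forward("00:00:5e:00:53:02", "198.51.100.9", GW_MAC))  # NVE2 (bridged)
   print(forward(GW_MAC, "192.0.2.10", GW_MAC))                 # NVE6 (routed)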
4.3. Virtual Data Center (vDC)
Enterprise DCs today often use several routers, switches, and network appliance devices to construct their internal networks, DMZ, and external network access. A DC provider may offer a virtual DC service to an enterprise customer and run enterprise applications such as websites/emails as well. Instead of using many hardware devices to do this, with the network virtualization overlay technology, DC operators may build such vDCs on top of a common network infrastructure for many such customers and run network service applications on a per-vDC basis. The network service applications such as firewall, DNS, and load balancer can be designed per vDC. The network virtualization overlay further enables the potential for vDC mobility when a customer moves to a different location, because the tenant systems and network appliance configuration can be completely decoupled from the infrastructure network.
Figure 3 below illustrates one scenario. For simple illustration, it only shows the L3VN or L2VNs as virtual overlay routers or switches. In this case, DC operators construct several L2VNs (L2VNx, L2VNy, L2VNz in Figure 3) to group the end tenant systems together on a per-application basis, and create an L3VNa for the internal routing. A network device (which may be a VM or a server) runs firewall/gateway applications and connects to the L3VNa and the Internet. A load balancer (LB) is used in L2VNx. A VPWS p2p tunnel is also built between the gateway and the enterprise router. The design runs enterprise Web/Mail/Voice applications at the provider DC site; lets the users at the enterprise site access the applications via the VPN tunnel and via the Internet through a gateway at the enterprise site; and lets Internet users access the applications via the gateway in the provider DC.
The enterprise customer decides which applications are accessed by the intranet only and which by both the intranet and the extranet; DC operators then design and configure the proper security policy and gateway function. Furthermore, DC operators may use multiple zones in a vDC for security and/or set different QoS levels for the different applications based on the customer's applications.
This use case requires the NVO3 solution to provide the DC operator with an easy way to create a VN and NVEs for any design, to quickly assign TSs to a VNI on the NVE they attach to, to easily set up the virtual topology and place or configure policies on an NVE or on VMs that run network services, and to support VM mobility. Furthermore, the DC operator needs to view the tenant network topology, know the tenant node capabilities, and be able to configure a network service on a tenant node. The DC provider may further let a tenant manage the vDC itself.
[Figure 3, not reproduced here, shows the provider DC site with L2VNx (Web application VMs plus a load balancer), L2VNy (Mail application VMs), and L2VNz (VoIP application VMs) interconnected by L3VNa, and a firewall/gateway attached to L3VNa that connects to the Internet and, via a VPWS/MPLS tunnel, to a router at the enterprise site; the enterprise site has its own LANs and Internet gateway.]

Note: the firewall/gateway and the load balancer (LB) may run on a server or on VMs.
              Figure 3 Virtual Data Center by Using NVO3
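To make the provisioning input concrete, the snippet below shows a purely hypothetical per-vDC description that a DC operator's provisioning system might hold for the vDC of Figure 3; no NVO3 document defines such a format, and every key name here is invented.

   # Purely hypothetical per-vDC "intent" record; the schema is invented and
   # only shows what would have to be instantiated: VNs, memberships, service
   # nodes, and external attachments.
   vdc = {
       "tenant": "enterprise-A",
       "virtual_networks": {
           "L2VNx": {"type": "l2", "members": ["web-vm-1", "web-vm-2", "lb-1"]},
           "L2VNy": {"type": "l2", "members": ["mail-vm-1"]},
           "L2VNz": {"type": "l2", "members": ["voip-vm-1"]},
           "L3VNa": {"type": "l3", "connects": ["L2VNx", "L2VNy", "L2VNz"]},
       },
       "services": {
           "fw-gw": {"attach": ["L3VNa"], "external": ["internet", "vpws-to-site"]},
           "lb-1":  {"attach": ["L2VNx"]},
       },
   }
   # A provisioning system would walk this record to create VNIs on the
   # relevant NVEs, bind the listed VMs, and configure the firewall/gateway
   # and the load balancer.
   print(sorted(vdc["virtual_networks"]))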
5. OAM Considerations
NVO3 brings the ability for a DC provider to segregate tenant traffic. A DC provider needs to manage and maintain NVO3 instances. Similarly, the tenant needs to be informed about underlying network failures impacting tenant applications, or the tenant network needs to be able to detect both overlay and underlay network failures and build some resiliency mechanisms.
Various OAM and SOAM tools and procedures are defined in [IEEE 802.1ag], [ITU-T Y.1731], [RFC4378], [RFC5880], and [ITU-T Y.1564] for L2 and L3 networks and for users, including continuity check, loopback, link trace, testing, alarms such as AIS/RDI, and on-demand and periodic measurements. These procedures may apply to tenant overlay networks and tenants not only for proactive maintenance, but also to ensure support of Service Level Agreements (SLAs).
As the tunnel traverses different networks, OAM messages need to be translated at the edge of each network to ensure end-to-end OAM.
6. Summary
This document describes some general potential use cases of NVO3 in DCs. The combination of these cases should give operators the flexibility and capability to design more sophisticated cases for various purposes.
DC services may vary from infrastructure as a service (IaaS) and platform as a service (PaaS) to software as a service (SaaS), in which the network virtualization overlay is just a portion of an application service. NVO3 decouples the service construction/configuration from the DC network infrastructure configuration and helps the deployment of higher-level services over the applications.
NVO3's underlying network provides the tunneling between NVEs so that two NVEs appear as one hop to each other. Many tunneling technologies can serve this function. The tunneling may in turn be tunneled over other intermediate tunnels over the Internet or other WANs. It is also possible that intra-DC and inter-DC tunnels are stitched together to form an end-to-end tunnel between two NVEs.
A DC virtual network may be accessed via an external network in a secure way. Many existing technologies can help achieve this.
NVO3 implementations may vary. Some DC operators prefer to use a centralized controller to manage tenant system reachability in a tenant network, while others prefer to use distributed protocols to advertise the tenant system locations, i.e. the attached NVEs. For migration and special requirements, different solutions may apply to one tenant network in a DC. When a tenant network spans multiple DCs and WANs, each network administration domain may use different methods to distribute the tenant system locations. Both control plane and data plane interworking are necessary.
7. Security Considerations
Security is a concern. DC operators need to provide a tenant with a secured virtual network, which means one tenant's traffic is isolated from other tenants' traffic and from non-tenant traffic; they also need to protect the DC underlying network from any tenant application attacking it through the tenant virtual network, and from one tenant application attacking another tenant application via the DC networks. For example, a tenant application may attempt to generate a large volume of traffic to overload the DC underlying network. The NVO3 solution has to address these issues.
8. IANA Considerations
This document does not request any action from IANA.
9. Acknowledgements
The authors would like to thank Sue Hares, Young Lee, David Black, Pedro Marques, Mike McBride, David McDysan, Randy Bush, and Uma Chunduri for their review, comments, and suggestions.
10. References
10.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks (VPNs)", RFC 4364, February 2006.

[IEEE 802.1ag] "Virtual Bridged Local Area Networks - Amendment 5: Connectivity Fault Management", December 2007.
[ITU-T Y.1731] "OAM functions and mechanisms for Ethernet based networks".
[ITU-T Y.1564] "Ethernet service activation test methodology", 2011.

[RFC4378] Allan, D. and Nadeau, T., "A Framework for Multi-Protocol Label Switching (MPLS) Operations and Management (OAM)", RFC 4378, February 2006.

[RFC4301] Kent, S., "Security Architecture for the Internet Protocol", RFC 4301, December 2005.
[RFC5880] Katz, D. and Ward, D., "Bidirectional Forwarding Detection (BFD)", RFC 5880, June 2010.
10.2. Informative References
[NVGRE] Sridharan, M., et al., "NVGRE: Network Virtualization using Generic Routing Encapsulation", draft-sridharan-virtualization-nvgre-02, work in progress.

[NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for Network Virtualization", draft-ietf-nvo3-overlay-problem-statement-02, work in progress.

[NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC Network Virtualization", draft-ietf-nvo3-framework-02, work in progress.

[NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3", draft-ghanwani-nvo3-mcast-issues-00, work in progress.

[VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

[VXLAN] Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks", draft-mahalingam-dutt-dcops-vxlan-03.txt, work in progress.
Authors' Addresses
Lucy Yong
Huawei Technologies,
5340 Legacy Dr.
Plano, TX 75025
Phone: +1-469-277-5837
Email: lucy.yong@huawei.com
Mehmet Toy
Comcast
1800 Bishops Gate Blvd.,
Mount Laurel, NJ 08054
Phone: +1-856-792-2801
E-mail: mehmet_toy@cable.comcast.com
Aldrin Isaac
Bloomberg
E-mail: aldrin.isaac@gmail.com
Vishwas Manral
Hewlett-Packard Corp.
3000 Hanover Street, Building 20C
Palo Alto, CA 95014
Phone: 650-857-5501
Email: vishwas.manral@hp.com
Linda Dunbar
Huawei Technologies,
5340 Legacy Dr.
Plano, TX 75025 US
Phone: +1-469-277-5840
Email: linda.dunbar@huawei.com