Network Working Group                                          L. Yong
Internet Draft                                               L. Dunbar
Category: Informational                                         Huawei
                                                                M. Toy
                                                              A. Isaac
                                                      Juniper Networks
                                                             V. Manral
                                                        Ionos Networks
Expires: March 2017                                  September 1, 2016

      Use Cases for Data Center Network Virtualization Overlays
                     draft-ietf-nvo3-use-case-09
Abstract

This document describes Data Center (DC) Network Virtualization over
Layer 3 (NVO3) use cases that can be deployed in various data
centers and serve different applications.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with
skipping to change at page 1, line 45
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on March 3, 2017.
Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
skipping to change at page 2, line 29
Table of Contents

1. Introduction...................................................3
   1.1. Terminology...............................................4
2. Basic Virtual Networks in a Data Center........................4
3. DC Virtual Network and External Network Interconnection........6
   3.1. DC Virtual Network Access via the Internet................6
   3.2. DC VN and SP WAN VPN Interconnection......................7
4. DC Applications Using NVO3.....................................8
   4.1. Supporting Multiple Technologies..........................9
   4.2. DC Application with Multiple Virtual Networks.............9
   4.3. Virtualized Data Center (vDC)............................10
5. Summary.......................................................11
6. Security Considerations.......................................12
7. IANA Considerations...........................................12
8. References....................................................12
   8.1. Normative References.....................................12
   8.2. Informative References...................................12
Contributors.....................................................13
Acknowledgements.................................................14
Authors' Addresses...............................................14
1. Introduction

Server Virtualization has changed the Information Technology (IT)
industry in terms of the efficiency, cost, and speed of providing
new applications and/or services such as cloud applications. However,
traditional Data Center (DC) networks have some limits in supporting
cloud applications and multi-tenant networks [RFC7364]. The goal of
Network Virtualization Overlays in the DC is to decouple the
communication among tenant systems from the DC physical infrastructure
skipping to change at page 4, line 15
o Basic NVO3 virtual networks in a DC (Section 2). All Tenant
  Systems (TS) in the virtual network are located within the same
  DC. The individual virtual networks can be either Layer 2 (L2) or
  Layer 3 (L3). The number of NVO3 virtual networks in a DC is much
  higher than what traditional VLAN-based virtual networks
  [IEEE 802.1Q] can support. This case is often referred to as DC
  East-West traffic.
o Virtual networks that span across multiple Data Centers and/or to
  customer premises, i.e., an NVO3 virtual network where some
  tenant systems in a DC interconnect with another virtual or
  physical network outside the data center. An enterprise customer
  may use a traditional carrier VPN or an IPsec tunnel over the
  Internet to communicate with its systems in the DC. This is
  described in Section 3.
o DC applications or services that require an advanced network
  containing several NVO3 virtual networks interconnected by
  gateways. Three scenarios are described in Section 4: 1)
  using NVO3 and other network technologies to build a tenant
  network; 2) constructing several virtual networks as a tenant
skipping to change at page 9, line 5
NVO3 technology provides DC operators with flexibility in
designing and deploying different applications in an end-to-end
virtualization overlay environment. Operators no longer need to
worry about the constraints of the DC physical network configuration
when creating VMs and configuring a virtual network. A DC provider
may use NVO3 in various ways, in conjunction with other physical
networks and/or virtual networks in the DC, for different purposes.
This section highlights some use cases for this goal.
4.1. Supporting Multiple Technologies
Servers deployed in a large data center are often installed at
different times, and may have different capabilities/features. Some
servers may be virtualized, while others may not; some may be
equipped with virtual switches, while others may not. For the
servers equipped with Hypervisor-based virtual switches, some may
support VxLAN [RFC7348] encapsulation, some may support NVGRE
encapsulation [RFC7637], and some may not support any encapsulation.
To construct a tenant network among these servers and the ToR
switches, operators can construct one traditional VLAN network and
two virtual networks, where one uses VxLAN encapsulation and the
other uses NVGRE, and interconnect these three networks via a
gateway or virtual GW. The GW performs packet
encapsulation/decapsulation translation between the networks.
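The header translation such a GW performs can be sketched as follows. This is only an illustration of the VxLAN and NVGRE header formats defined in RFC 7348 and RFC 7637: it ignores the outer Ethernet/IP/UDP and GRE delivery headers that a real GW must also rewrite, and the VNI-to-VSID mapping table is an assumed example, not part of either specification.

```python
import struct

def parse_vxlan_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from an 8-byte VXLAN header (RFC 7348)."""
    flags, tail = struct.unpack("!B3xI", header[:8])
    if not flags & 0x08:
        raise ValueError("VXLAN I flag not set; VNI is not valid")
    return tail >> 8          # VNI occupies the upper 24 bits of the word

def build_vxlan_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN header with the I flag set."""
    return struct.pack("!B3xI", 0x08, vni << 8)

def build_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build an 8-byte NVGRE (GRE) header (RFC 7637): Key bit set,
    protocol type 0x6558 (Transparent Ethernet Bridging), and the key
    field carrying the 24-bit VSID plus an 8-bit FlowID."""
    return struct.pack("!HHI", 0x2000, 0x6558, (vsid << 8) | flow_id)

def vxlan_to_nvgre(vxlan_payload: bytes, vni_to_vsid: dict) -> bytes:
    """Re-encapsulate the inner Ethernet frame of a VXLAN packet
    (outer headers already stripped) as NVGRE."""
    vni = parse_vxlan_vni(vxlan_payload[:8])
    return build_nvgre_header(vni_to_vsid[vni]) + vxlan_payload[8:]
```

The inner frame is carried through unchanged; only the overlay header differs, which is why a stateless GW can bridge the two encapsulations as long as it holds the VNI/VSID mapping.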
Another case is that some of a tenant's software consumes a large
amount of CPU and memory, so it only makes sense to run it on
bare-metal servers, while other software of the tenant may run well
on VMs. However, the provider DC infrastructure may be configured to
use NVO3 to connect VMs and VLAN [IEEE802.1Q] to connect bare-metal
servers. Such a tenant network requires interworking between NVO3
and traditional VLAN.
4.2. DC Application with Multiple Virtual Networks

A DC application may be constructed with multi-tier zones, where
each zone has different access permissions and runs different
applications. For example, a three-tier zone design has a front zone
(Web tier) with Web applications, a mid zone (application tier)
where service applications such as credit payment or ticket booking
run, and a back zone (database tier) with data. External users are
only able to communicate with the Web applications in the front
zone; the back zone can only receive traffic from the application
zone. In this case, communications between the zones must pass
through a GW/firewall. Each zone can be implemented as one virtual
network, and a GW/firewall can be used between two virtual networks,
i.e., two zones. A tunnel carrying virtual network traffic has to be
terminated at the GW/firewall, where the overlay traffic is
processed.
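The inter-zone reachability rules of the three-tier design can be sketched as a small policy table consulted at the GW/firewall after it decapsulates the overlay traffic. The zone names and rule set below are illustrative assumptions, not part of any NVO3 specification, and a production firewall would additionally be stateful (permitting reply traffic of established sessions).

```python
# Allowed (source zone -> destination zone) pairs for the three-tier
# design: external users reach only the Web tier, the Web tier talks
# to the application tier, and only the application tier reaches the
# database tier.
ALLOWED = {
    ("external", "web"),
    ("web", "app"),
    ("app", "db"),
}

def gw_permits(src_zone: str, dst_zone: str) -> bool:
    """Decide at the GW/firewall whether inter-zone traffic may pass.
    Intra-zone traffic stays inside one virtual network and never
    reaches the GW, so it is trivially allowed here."""
    if src_zone == dst_zone:
        return True
    return (src_zone, dst_zone) in ALLOWED
```

For example, `gw_permits("external", "db")` is denied, matching the requirement that the back zone only receives traffic from the application zone.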
4.3. Virtualized Data Center (vDC)
An Enterprise Data Center today may deploy routers, switches, and
network appliance devices to construct its internal network, DMZ,
and external network access; it may have many servers and storage
running various applications. With NVO3 technology, a DC Provider
can construct a virtualized DC over its physical DC infrastructure
and offer a virtual DC service to enterprise customers. A vDC at the
DC Provider site provides the same capability as the physical DC at
the customer site. A customer manages their own applications running
in their vDC. A DC Provider can further offer different network
service functions to the customer. The network service functions may
include firewall, DNS, load balancer, gateway, etc.
Figure 2 below illustrates one such scenario. For simplicity, it
only shows the L3 VN or L2 VN in abstraction. In this example, the
DC Provider operators create several L2 VNs (L2VNx, L2VNy, L2VNz) to
group the tenant systems together on a per-application basis, and
one L3 VN (L3VNa) for the internal routing. A network firewall and
skipping to change at page 11, line 23
         ...+....          |..|
 +-------: L3 VNa :---------+        LANs
 +-+-+    ........ |
 |LB |             |            Enterprise Site
 +-+-+             |
  ...+...       ...+...       ...+...
  : L2VNx :     : L2VNy :     : L2VNz :
   .......       .......       .......
    |..|          |..|          |..|
    |  |          |  |          |  |
  Web App.     Mail App.     VoIP App.

           Provider DC Site

       Figure 2 - Virtual Data Center (vDC)
5. Summary

This document describes some general and potential NVO3 use cases in
DCs. The combination of these cases will give operators the
flexibility and capability to design more sophisticated cases for
various cloud applications.

DC services may vary, from infrastructure as a service (IaaS), to
platform as a service (PaaS), to software as a service (SaaS).
skipping to change at page 12, line 18
protocols to advertise the tenant system location, i.e., NVE
location. When a tenant network spans across multiple DCs and WANs,
each network administration domain may use different methods to
distribute the tenant system locations. Both control plane and data
plane interworking are necessary.
6. Security Considerations

Security is a concern. DC operators need to provide a tenant with a
secured virtual network, which means that one tenant's traffic is
isolated from other tenants' traffic as well as from the underlay
networks. DC operators also need to prevent a tenant application
from attacking their underlay DC network; further, they need to
protect against a tenant application attacking another tenant
application via the DC infrastructure network. For example, a tenant
application may attempt to generate a large volume of traffic to
overload the DC's underlying network. An NVO3 solution has to
address these issues.
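One common mitigation for the traffic-overload example, not mandated by any NVO3 document, is per-tenant policing at the ingress NVE. The sketch below is a token-bucket illustration with made-up rates; real NVEs would enforce this in the forwarding hardware.

```python
import time

class TokenBucket:
    """Per-tenant policer: admit a packet only while tokens remain.
    Rate and burst values are illustrative assumptions."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst, in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, pkt_len: int) -> bool:
        # Refill tokens for the time elapsed since the last packet,
        # capped at the configured burst size.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False                    # over rate: drop or remark
```

With a 3000-byte burst and a negligible refill rate, two 1500-byte packets are admitted and the third is policed, bounding what any one tenant can push into the underlay.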
7. IANA Considerations

This document does not request any action from IANA.
8. References

8.1. Normative References

[RFC7364] Narten, T., et al., "Problem Statement: Overlays for Network
skipping to change at page 14, line 16
Kieran Milne
Juniper Networks
1133 Innovation Way
Sunnyvale, CA 94089
Phone: +1-408-745-2000
Email: kmilne@juniper.net
Acknowledgements

The authors would like to thank Sue Hares, Young Lee, David Black,
Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma Chunduri,
Eric Gray, David Allan, and Joe Touch for the review, comments, and
suggestions.
Authors' Addresses

Lucy Yong
Huawei Technologies
Phone: +1-918-808-1918
Email: lucy.yong@huawei.com
Linda Dunbar
Huawei Technologies,
5340 Legacy Dr.
Plano, TX 75025 US
Phone: +1-469-277-5840
Email: linda.dunbar@huawei.com
Mehmet Toy
Phone: +1-856-792-2801
E-mail: mtoy054@yahoo.com
Aldrin Isaac
Juniper Networks
E-mail: aldrin.isaac@gmail.com
Vishwas Manral
Ionos Networks
Email: vishwas@ionosnetworks.com