Network Working Group                                           L. Yong
Internet Draft                                                   Huawei
Category: Informational                                          M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                        Hewlett-Packard
                                                              L. Dunbar
                                                                 Huawei

Expires: August 2015                                   February 5, 2015

         Use Cases for Data Center Network Virtualization Overlays

                       draft-ietf-nvo3-use-case-05

Abstract

   This document describes Data Center (DC) Network Virtualization
   over Layer 3 (NVO3) use cases that can be deployed in various data
   centers and serve different applications.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 5, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
      1.1. Terminology...............................................4
   2. Basic Virtual Networks in a Data Center........................4
   3. DC Virtual Network and External Network Interconnection........6
      3.1. DC Virtual Network Access via Internet....................6
      3.2. DC VN and SP WAN VPN Interconnection......................7
   4. DC Applications Using NVO3.....................................8
      4.1. Supporting Multiple Technologies and Applications.........8
      4.2. Tenant Network with Multiple Subnets......................9
      4.3. Virtualized Data Center (vDC)............................11
   5. Summary.......................................................12
   6. Security Considerations.......................................13
   7. IANA Considerations...........................................13
   8. References....................................................13
      8.1. Normative References.....................................13
      8.2. Informative References...................................13
   Contributors.....................................................14
   Acknowledgements.................................................15
   Authors' Addresses...............................................15

1. Introduction

   Server Virtualization has changed the Information Technology (IT)
   industry in terms of the efficiency, cost, and speed of providing
   new applications and/or services. However traditional Data Center
   (DC) networks have some limits in supporting cloud applications and
   multi tenant networks [RFC7364]. The goal of Network Virtualization
   Overlays in DC is to decouple the communication among tenant systems
   from DC physical infrastructure networks and to allow one physical
   network infrastructure to provide:

   o  Multi-tenant virtual networks and traffic isolation among the
      virtual networks over the same physical network.

   o  Independent address spaces in individual virtual networks such as
      MAC, IP, TCP/UDP etc.

   o  Flexible Virtual Machine (VM) and/or workload placement,
      including the ability to move them from server to server without
      requiring VM address and configuration changes, and the ability
      to perform a hot move with no disruption to the live applications
      running on VMs.

   These characteristics of NVO3 help address the issues that cloud
   applications face in Data Centers [RFC7364].

   An NVO3 network can interconnect with another physical network,
   i.e., not the physical network that the NVO3 network is over. For
   example: 1) DCs that migrate toward an NVO3 solution will do so in
   steps; 2) many DC applications serve Internet users who are on
   physical networks; 3) some applications are CPU bound, such as Big
   Data analytics, and may not run on virtualized resources.

   This document describes general NVO3 use cases that apply to various
   data centers. The three types of use cases described here are:

   o  Basic NVO3 virtual networks in a DC (Section 2). All Tenant
      Systems (TS) in the virtual networks are located within one DC.
      The individual virtual networks can be either Layer 2 (L2) or
      Layer 3 (L3). The number of virtual networks supported by NVO3 in
      a DC is much higher than what traditional VLAN based virtual
      networks [IEEE 802.1Q] can support. This case is often referred
      to as the DC East-West traffic.

   o  Virtual networks that span across multiple Data Centers and/or to
      customer premises, i.e., a virtual network in a DC that
      interconnects another virtual or physical network outside the
      data center. An enterprise customer may use a traditional carrier
      VPN or an IPsec tunnel over the Internet to communicate with its
      systems in the DC. This is described in Section 3.

   o  DC applications or services that may use NVO3 (Section 4). Three
      scenarios are described: 1) use NVO3 and other network
      technologies to build a tenant network; 2) construct several
      virtual networks as a tenant network; 3) apply NVO3 to a
      virtualized DC (vDC).

   The document uses the architecture reference model defined in
   [RFC7365] to describe the use cases.

1.1. Terminology

   This document uses the terminology defined in [RFC7365] and
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CPE: Customer Premise Equipment

   DMZ: Demilitarized Zone. A computer or small sub-network that sits
   between a trusted internal network, such as a corporate private LAN,
   and an un-trusted external network, such as the public Internet.

   DNS: Domain Name Service

   NAT: Network Address Translation

   VIRB: Virtual Integrated Routing/Bridging

   Note that a virtual network in this document is a virtual network in
   a DC that is implemented with NVO3 technology.

2. Basic Virtual Networks in a Data Center

   A virtual network in a DC enables communication among Tenant Systems
   (TS). A TS can be a physical server/device or a virtual machine (VM)
   on a server, i.e., an end-device [RFC7365]. A Network Virtualization
   Edge (NVE) can be co-located with a TS, i.e., on the same end-
   device, or reside on a different device, e.g., a top of rack switch
   (ToR). A virtual network has a virtual network identifier (which can
   be globally unique or locally significant at NVEs).

   Tenant Systems attached to the same NVE may belong to the same or
   different virtual networks. An NVE provides tenant traffic
   forwarding/encapsulation and obtains tenant system reachability
   information from a Network Virtualization Authority (NVA)
   [NVO3ARCH].

   DC operators can construct many tenant virtual networks that have no
   communication between them at all. In this case, each tenant virtual
   network can have its own address spaces such as MAC and IP. DC
   operators can also construct multiple virtual networks in a way so
   that the policies are enforced when the TSs in one virtual network
   communicate with the TSs in other virtual networks. This is referred
   to as a Distributed Gateway [NVO3ARCH].

   A Tenant System can be configured with one or multiple addresses and
   participate in multiple virtual networks, i.e., use the same or a
   different address in different virtual networks. For example, a
   Tenant System can be a NAT GW or a firewall and connect to more than
   one virtual network.

   Network Virtualization Overlay in this context means that a virtual
   network is implemented with an overlay technology, i.e., tenant
   traffic is encapsulated at its local NVE and carried by a tunnel
   over the DC IP network to another NVE, where the packet is
   decapsulated prior to being sent to the target tenant system. This
   architecture decouples the tenant system address space and
   configuration from the infrastructure's, which brings great
   flexibility for VM placement and mobility. It also leaves the
   transit nodes in the infrastructure unaware of the existence of the
   virtual networks. One tunnel can carry traffic belonging to
   different virtual networks; a virtual network identifier is used for
   traffic demultiplexing.
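
   As an informal illustration of the encapsulation and demultiplexing
   just described (not part of any NVO3 specification), the Python
   sketch below packs a virtual network identifier into an 8-byte
   header loosely modeled on VXLAN [RFC7348]; the function names and
   the vn_table structure are hypothetical.

      import struct

      VNI_FLAG = 0x08  # "VNI present" flag, as in the VXLAN header

      def encapsulate(vni, inner_frame):
          # 8-byte overlay header: flags(1) + reserved(3) + VNI(3) +
          # reserved(1); the result is then sent over an IP/UDP tunnel.
          header = struct.pack("!B3s3sB", VNI_FLAG, b"\x00" * 3,
                               vni.to_bytes(3, "big"), 0)
          return header + inner_frame

      def decapsulate(packet, vn_table):
          # vn_table maps a VNID to a per-virtual-network context; the
          # VNID demultiplexes traffic sharing one tunnel between NVEs.
          _, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
          return vn_table[int.from_bytes(vni_bytes, "big")], packet[8:]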

   A virtual network may be an L2 or L3 domain. The TSs attached to an
   NVE can belong to different virtual networks that are either L2 or
   L3. A virtual network can carry unicast traffic and/or
   broadcast/multicast/unknown (BUM) traffic from/to tenant systems.
   There are several ways to transport virtual network BUM traffic
   [NVO3MCAST].

   It is worth mentioning two distinct cases regarding NVE location.
   The first is when TSs and an NVE are co-located on the same end
   device, which means that the NVE can be made aware of the TS state
   at any time via an internal API. The second is when TSs and an NVE
   reside on different devices that connect via a wire; in this case, a
   protocol is necessary for the NVE to learn the TS state
   [NVO3HYVR2NVE].

   One virtual network can provide connectivity to many TSs that attach
   to many different NVEs in a DC. TS dynamic placement and mobility
   results in frequent changes of the binding between a TS and an NVE.
   The TS reachability update mechanisms need to be fast enough so that
   the updates do not cause any service interruption. The capability of
   supporting many TSs in a virtual network, and many more virtual
   networks in a DC, is critical for an NVO3 solution.

   If a virtual network spans across multiple DC sites, one design is
   to allow the network to seamlessly span across the sites without
   termination at DC gateway routers. In this case, the tunnel between
   a pair of NVEs can be carried within other intermediate tunnels over
   the Internet or other WANs, or the intra DC and inter DC tunnels can
   be stitched together to form a tunnel between the pair of NVEs that
   are in different DC sites. Both cases will form one virtual network
   across multiple DC sites.

3. DC Virtual Network and External Network Interconnection

   Customers (an enterprise or individuals) who utilize the DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in a DC through the Internet or
   Service Providers' Wide Area Networks (WAN). A DC provider can
   construct a virtual network that provides the connectivity to all
   the resources designated for a customer and allows the customer to
   access the resources via a virtual gateway (vGW). This, in turn,
   becomes a case of interconnecting a DC virtual network and the
   network at customer site(s) via the Internet or WANs. Two use cases
   are described here.

3.1. DC Virtual Network Access via Internet

   A customer can connect to a DC virtual network via the Internet in a
   secure way. Figure 1 illustrates this case. A virtual network is
   configured on NVE1 and NVE2; the two NVEs are connected via an L3
   tunnel in the Data Center. A set of tenant systems are attached to
   NVE1 on a server. NVE2 resides on a DC Gateway device. NVE2
   terminates the tunnel and uses the VNID on the packet to pass the
   packet to the corresponding vGW entity on the DC GW. A customer can
   access their systems, i.e., TS1 or TSn, in the DC via the Internet
   by using an IPsec tunnel [RFC4301]. The IPsec tunnel is configured
   between the vGW and the customer gateway at the customer site.
   Either a static route or iBGP may be used for route updates. The vGW
   provides IPsec functionality such as authentication scheme and
   encryption; iBGP protocol traffic is carried within the IPsec
   tunnel. Some vGW features are listed below:

   o  Some vGW functions such as firewall and load balancer can also be
      performed by locally attached network appliance devices.

   o  If the virtual network in the DC uses a different address space
      than external users, the vGW needs to provide the NAT function.

   o  More than one IPsec tunnel can be configured for redundancy.

   o  The vGW can be implemented on a server or VM. In this case, IP
      tunnels or IPsec tunnels can be used over the DC infrastructure.

   o  DC operators need to construct a vGW for each customer.

   Server+---------------+
         |   TS1 TSn     |
         |    |...|      |
         |  +-+---+-+    |             Customer Site
         |  |  NVE1 |    |               +-----+
         |  +---+---+    |               | CGW |
         +------+--------+               +--+--+
                |                           *
            L3 Tunnel                       *
                |                           *
   DC GW +------+---------+            .--.  .--.
         |  +---+---+     |           (    '*   '.--.
         |  |  NVE2 |     |        .-.'   *          )
         |  +---+---+     |       (    *  Internet    )
         |  +---+---+.    |        ( *               /
         |  |  vGW  | * * * * * * * * '-'          '-'
         |  +-------+ |   | IPsec       \../ \.--/'
         |   +--------+   | Tunnel
         +----------------+

           DC Provider Site

            Figure 1 - DC Virtual Network Access via Internet
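
   As a hedged sketch (not an implementation of any standard interface)
   of how the DC GW in Figure 1 might dispatch decapsulated tenant
   packets to per-customer vGW entities by VNID, consider the following
   Python fragment; the class, addresses, and print placeholder are
   illustrative only.

      class VGW:
          """Hypothetical per-customer vGW context on the DC GW."""
          def __init__(self, ipsec_peer):
              self.ipsec_peer = ipsec_peer  # customer gateway address

          def to_customer(self, inner_packet):
              # Placeholder for IPsec processing [RFC4301]: a real vGW
              # would encrypt and tunnel the packet to the customer GW.
              print("IPsec-protect and send to", self.ipsec_peer,
                    inner_packet)

      vgw_by_vnid = {100: VGW(ipsec_peer="203.0.113.10")}

      def dc_gw_receive(vnid, inner_packet):
          # NVE2 terminates the overlay tunnel, then uses the VNID on
          # the packet to select the vGW serving that customer's VN.
          vgw_by_vnid[vnid].to_customer(inner_packet)

      dc_gw_receive(100, b"payload")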

3.2. DC VN and SP WAN VPN Interconnection

   In this case, an Enterprise customer wants to use a Service Provider
   (SP) WAN VPN [RFC4364] [EVPN] to interconnect its sites with a
   virtual network in a DC site. The Service Provider constructs a VPN
   for the enterprise customer. Each enterprise site peers with an SP
   PE. The DC Provider and the VPN Service Provider can build a DC
   virtual network (VN) and a VPN independently, and then interconnect
   the VN and VPN via a local link or a tunnel between the DC GW and
   WAN PE devices. The control plane interconnection options between
   the VN and VPN are described in RFC4364 [RFC4364]. In Option A with
   VRF-LITE [VRF-LITE], both ASBRs, i.e., the DC GW and SP PE, maintain
   a routing/forwarding table and perform a table lookup in forwarding.
   In Option B, the DC GW and SP PE do not maintain the forwarding
   table; they only maintain the VN and VPN identifier mapping and swap
   the identifiers on the packet in the forwarding process. Both
   Options A and B require tunnel termination. In Option C, the VN and
   VPN use the same identifier, and both ASBRs perform tunnel
   stitching, i.e., change the tunnel end points. Each option has
   pros/cons (see RFC4364) and has been deployed in SP networks
   depending on the applications. The BGP protocols can be used in
   these options for route distribution. Note that if the DC is the
   SP's Data Center, the DC GW and SP PE can be merged into one device
   that performs the interworking of the VN and VPN.
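
   As a rough, non-normative illustration of the Option B behavior
   described above, the ASBR keeps only a mapping between the DC-side
   VN identifier and the WAN-side VPN identifier and rewrites it while
   forwarding; the Python fragment below uses made-up identifier
   values.

      # Hypothetical Option-B style identifier mapping at the ASBR
      # (DC GW or SP PE): DC-side VN ID <-> WAN-side VPN label.
      vn_to_vpn = {5001: 30}
      vpn_to_vn = {label: vni for vni, label in vn_to_vpn.items()}

      def forward_dc_to_wan(vni, payload):
          # Swap the VN identifier for the VPN label; no per-route
          # forwarding table is consulted in Option B.
          return (vn_to_vpn[vni], payload)

      def forward_wan_to_dc(label, payload):
          return (vpn_to_vn[label], payload)

      print(forward_dc_to_wan(5001, b"tenant packet"))  # -> (30, ...)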

   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the VN in a DC provider site
   without interfering with the DC provider's underlying physical
   networks and other virtual networks in the DC. The enterprise can
   use its own address space on the tenant systems in the VN. The DC
   provider can manage which VMs and storage attach to the VN. The
   enterprise customer manages which applications run on the VMs in the
   VN without knowledge of the VMs' location in the DC. (See Section 4
   for more.)

   Furthermore, in this use case, the DC operator can move the VMs
   assigned to the enterprise from one server to another in the DC
   without the enterprise customer's awareness, i.e., with no impact on
   the enterprise's 'live' applications running on these resources.
   Such technologies bring DC providers great benefits in offering
   cloud applications but also add some requirements for NVO3 [RFC7364]
   as well.

4. DC Applications Using NVO3

   NVO3 technology brings DC operators the flexibility to design and
   deploy different applications in an end-to-end virtualization
   overlay environment, where the operators no longer need to worry
   about the constraints of the DC physical network configuration when
   creating VMs and configuring a virtual network. A DC provider can
   use NVO3 in various ways and also use it in conjunction with other
   physical networks in a DC for various reasons. This section
   highlights some use cases.

4.1. Supporting Multiple Technologies and Applications

   Most likely, servers deployed in a large data center are rolled in
   at different times and may have different capacities/features. Some
   servers may be virtualized, some may not; some may be equipped with
   virtual switches, some may not. For the ones equipped with
   hypervisor based virtual switches, some may support VxLAN [RFC7348]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any type of encapsulation. To construct a
   tenant network among these servers and the ToR switches, operators
   can construct one NVO3 virtual network and one traditional VLAN
   network, or two virtual networks where one uses VxLAN encapsulation
   and another uses NVGRE.

   In these cases, a gateway device or virtual GW is used to
   participate in two virtual networks. It performs the packet
   encapsulation/decapsulation translation and may also perform address
   mapping or translation, etc.
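
   The following Python fragment gives an informal sketch of the data
   plane of such a translation gateway between VXLAN [RFC7348] and
   NVGRE [NVGRE] virtual networks; the VNI-to-VSID table is made up,
   and the header handling is simplified rather than a complete
   implementation of either encapsulation.

      import struct

      NVGRE_PROTO = 0x6558          # Transparent Ethernet Bridging
      vni_to_vsid = {5001: 9001}    # hypothetical mapping table

      def vxlan_decap(packet):
          # VXLAN header: 4 bytes flags/reserved, then VNI(24)+rsvd(8)
          _, vni_word = struct.unpack("!II", packet[:8])
          return (vni_word >> 8), packet[8:]

      def nvgre_encap(vsid, inner_frame):
          # GRE header with the Key bit (0x2000) set; the VSID occupies
          # the upper 24 bits of the key field, FlowID = 0 [NVGRE].
          header = struct.pack("!HHI", 0x2000, NVGRE_PROTO, vsid << 8)
          return header + inner_frame

      def translate(packet):
          vni, inner = vxlan_decap(packet)
          return nvgre_encap(vni_to_vsid[vni], inner)

      frame = struct.pack("!II", 0x08 << 24, 5001 << 8) + b"inner"
      print(translate(frame))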

   A data center may also be constructed with multi-tier zones. Each
   zone has different access permissions and runs different
   applications. For example, a three-tier zone design has a front zone
   (Web tier) with Web applications, a mid zone (application tier) with
   service applications such as payment and booking, and a back zone
   (database tier) with data. External users are only able to
   communicate with the Web applications in the front zone. In this
   case, the communication between the zones must pass through the
   security GW/firewall. One virtual network can be configured in each
   zone, and a GW is used to interconnect two virtual networks, i.e.,
   two zones. If individual zones use different implementations, the GW
   needs to support these implementations as well.

4.2. Tenant Network with Multiple Subnets

   A tenant network may contain multiple subnets. The DC physical
   network needs to support the connectivity for many tenant networks.
   The inter-subnet policies may be placed at some designated gateway
   devices only. Such a design requires the inter-subnet traffic to be
   sent to one of the gateways first for the policy checking, which may
   cause traffic hairpin at the gateway in a DC. It is desirable that
   an NVE can hold some policies and be able to forward inter-subnet
   traffic directly. To reduce the NVE burden, a hybrid design may be
   deployed, i.e., an NVE can perform forwarding for the selected
   inter-subnets and the designated GW performs it for the rest. For
   example, each NVE performs inter-subnet forwarding for a tenant, and
   the designated GW is used for inter-subnet traffic from/to the
   different tenant networks.
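
   A hedged sketch of the hybrid decision logic at an NVE follows; the
   policy table, subnet values, and names are invented purely for
   illustration.

      import ipaddress

      # Subnet pairs this NVE may route between directly (same
      # tenant); all other inter-subnet traffic goes to the GW.
      local_route_pairs = [
          (ipaddress.ip_network("10.1.1.0/24"),
           ipaddress.ip_network("10.1.2.0/24")),
      ]
      DESIGNATED_GW = "gw-1"

      def next_hop(src_ip, dst_ip):
          src = ipaddress.ip_address(src_ip)
          dst = ipaddress.ip_address(dst_ip)
          for a, b in local_route_pairs:
              if (src in a and dst in b) or (src in b and dst in a):
                  return "forward-directly"   # policy held at the NVE
          return DESIGNATED_GW                # hairpin via gateway

      print(next_hop("10.1.1.5", "10.1.2.9"))   # forward-directly
      print(next_hop("10.1.1.5", "10.9.9.9"))   # gw-1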

   A tenant network may span across multiple Data Centers that are in
   different locations. DC operators may configure an L2 VN within
   each DC and an L3 VN between DCs for a tenant network. For this
   configuration, the virtual L2/L3 gateway can be implemented on DC GW
   device. Figure 2 illustrates this configuration.

   Figure 2 depicts two DC sites. Site A constructs one L2 VN, say
   L2VNa, on NVE1, NVE2, and NVE5. NVE1 and NVE2 reside on the servers
   which host multiple tenant systems. NVE5 resides on the DC GW
   device. Site Z has a similar configuration, with L2VNz on NVE3,
   NVE4, and NVE6. One L3 VN, say L3VNx, is configured on NVE5 at site
   A and NVE6 at site Z. An internal Virtual Integrated
   Routing/Bridging (VIRB) interface is used between the L2VNI and
   L3VNI on NVE5 and NVE6, respectively. The L2VNI is the MAC/NVE
   mapping table and the L3VNI is the IP prefix/NVE mapping table. A
   packet arriving at NVE5 from L2VNa will be decapsulated and
   converted into an IP packet, and then encapsulated and sent to site
   Z. The policies can be checked at the VIRB.

   Note that L2VNa, L2VNz, and L3VNx in Figure 2 are NVO3 virtual
   networks.

   NVE5/DCGW+------------+                  +-----------+ NVE6/DCGW
            | +-----+    | '''''''''''''''' |   +-----+ |
            | |L3VNI+----+'    L3VNx       '+---+L3VNI| |
            | +--+--+    | '''''''''''''''' |   +--+--+ |
            |    |VIRB   |                  |  VIRB|    |
            | +--+---+   |                  |  +---+--+ |
            | |L2VNIs|   |                  |  |L2VNIs| |
            | +--+---+   |                  |  +---+--+ |
            +----+-------+                  +------+----+
             ''''|''''''''''                 ''''''|'''''''
            '     L2VNa     '               '     L2VNz    '
      NVE1/S ''/'''''''''\'' NVE2/S    NVE3/S'''/'''''''\'' NVE4/S
        +-----+---+  +----+----+        +------+--+ +----+----+
        | +--+--+ |  | +--+--+ |        | +---+-+ | | +--+--+ |
        | |L2VNI| |  | |L2VNI| |        | |L2VNI| | | |L2VNI| |
        | ++---++ |  | ++---++ |        | ++---++ | | ++---++ |
        +--+---+--+  +--+---+--+        +--+---+--+ +--+---+--+
           |...|        |...|              |...|       |...|

             Tenant Systems                  Tenant Systems

                DC Site A                    DC Site Z

         Figure 2 - Tenant Virtual Network with Bridging/Routing
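
   A hedged sketch of the VIRB lookup path on NVE5 in Figure 2 follows;
   the table entries, addresses, and function names are illustrative
   only.

      import ipaddress

      # L2VNI: MAC-to-NVE mapping for L2VNa (illustrative entries)
      l2vni_a = {"00:11:22:33:44:55": "NVE1",
                 "00:11:22:33:44:66": "NVE2"}

      # L3VNI: IP-prefix-to-NVE mapping for L3VNx
      l3vni_x = {ipaddress.ip_network("192.0.2.0/24"): "NVE6"}

      VIRB_MAC = "00:00:5e:00:01:01"   # hypothetical VIRB MAC address

      def virb_forward(dst_mac, dst_ip):
          # Frames addressed to the VIRB are routed: the L2 header is
          # stripped and the L3VNI table is consulted (policies can be
          # checked here); other frames are bridged via the L2VNI.
          if dst_mac == VIRB_MAC:
              dst = ipaddress.ip_address(dst_ip)
              for prefix, nve in l3vni_x.items():
                  if dst in prefix:
                      return ("route-to", nve)  # re-encapsulated
              return ("drop", None)
          return ("bridge-to", l2vni_a.get(dst_mac))

      print(virb_forward(VIRB_MAC, "192.0.2.10"))   # route to NVE6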

4.3. Virtualized Data Center (vDC)

   An Enterprise Data Center today may deploy routers, switches, and
   network appliance devices to construct its internal network, DMZ,
   and external network access; it may have many servers and storage
   running various applications. With NVO3 technology, a DC Provider
   can construct a virtualized DC over its DC infrastructure and offer
   a virtual DC service to enterprise customers. A vDC at the DC
   Provider site provides the same capability as the physical DC at the
   customer site. A customer manages what and how applications run in
   its vDC. The DC Provider can further offer different network service
   functions to a vDC. The network service functions may include
   firewall, DNS, load balancer, gateway, etc. The network
   virtualization overlay further enables the potential for vDC
   mobility when a customer moves to different locations, because the
   vDC configuration is decoupled from the infrastructure network.

   Figure 3 below illustrates one scenario. For simple illustration, it
   only shows the L3 VN or L2 VNs in abstraction. In this example, DC
   operators create several L2 VNs (L2VNx, L2VNy, L2VNz) to group the
   tenant systems together on a per-application basis, and create one
   L3 VN, e.g., L3VNa, for the internal routing. A network function,
   firewall and gateway, runs on a VM or server that connects to L3VNa
   and is used for inbound and outbound traffic processing. A load
   balancer (LB) is used in L2VNx. A VPN is also built between the
   gateway and the enterprise router. The Enterprise customer runs
   Web/Mail/Voice applications on VMs at the provider DC site that can
   spread out on many servers; the users at the Enterprise site access
   the applications running in the provider DC site via the VPN;
   Internet users access these applications via the gateway/firewall at
   the provider DC.

   The Enterprise customer decides which applications are accessed by
   intranet only and which by both intranet and extranet, and
   configures the proper security policy and gateway function at the
   firewall/gateway. Furthermore, an enterprise customer may want
   multiple zones in a vDC (see Section 4.1) for security and/or to set
   different QoS levels for the different applications.

   The vDC use case requires the NVO3 solution to provide the DC
   operator an easy and quick way to create a VN and NVEs for any vDC
   design, to allocate TSs and assign TSs to the corresponding VN, and
   to support VM mobility. Furthermore, a DC operator and/or customer
   should be able to manage/configure individual elements in the vDC
   via the vDC topology.

                           Internet                    ^ Internet
                                                       |
                              ^                     +--+---+
                              |                     |  GW  |
                              |                     +--+---+
                              |                        |
                      +-------+--------+            +--+---+
                      |Firewall/Gateway+--- VPN-----+router|
                      +-------+--------+            +-+--+-+
                              |                       |  |
                           ...+....                   |..|
                  +-------: L3 VNa :---------+        LANs
                +-+-+      ........          |
                |LB |          |             |     Enterprise Site
                +-+-+          |             |
               ...+...      ...+...       ...+...
              : L2VNx :    : L2VNy :     : L2VNx :
               .......      .......       .......
                 |..|         |..|          |..|
                 |  |         |  |          |  |
               Web Apps     Mail Apps      VoIP Apps

                        Provider DC Site

    firewall/gateway and Load Balancer (LB) may run on a server or VMs

                    Figure 3 - Virtual Data Center (vDC)
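
   A hedged sketch of programmatic provisioning for the Figure 3 layout
   follows; the API and element names are invented for illustration and
   do not correspond to any standard NVO3 interface.

      # Hypothetical provisioning calls for the vDC in Figure 3
      vdc = {"l2vns": {}, "l3vns": {}, "services": []}

      def create_l2vn(name, members):
          vdc["l2vns"][name] = list(members)

      def create_l3vn(name, attached_l2vns):
          vdc["l3vns"][name] = list(attached_l2vns)

      def add_service(kind, attach_to):
          vdc["services"].append((kind, attach_to))

      create_l2vn("L2VNx", ["web-vm1", "web-vm2"])
      create_l2vn("L2VNy", ["mail-vm1"])
      create_l2vn("L2VNz", ["voip-vm1"])
      create_l3vn("L3VNa", ["L2VNx", "L2VNy", "L2VNz"])
      add_service("load-balancer", "L2VNx")
      add_service("firewall/gateway", "L3VNa")  # VPN + Internet access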

5. Summary

   This document describes some general potential use cases of NVO3 in
   DCs. The combination of these cases will give operators the
   flexibility and capability to design more sophisticated cases for
   various cloud applications.

   DC services may vary from infrastructure as a service (IaaS) and
   platform as a service (PaaS) to software as a service (SaaS), in
   which NVO3 virtual networks are just a portion of such services.

   NVO3 decouples the service construction/configuration from the DC
   network infrastructure configuration and helps the deployment of
   higher level services.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other in a virtual network.
   Many tunneling technologies can serve this function. The tunneling
   may in turn be tunneled over other intermediate tunnels over the
   Internet or other WANs. It is also possible that intra DC and inter
   DC tunnels are stitched together to form an end-to-end tunnel
   between two NVEs.

   A DC virtual network may be accessed by external users in a secure
   way. Many existing technologies can help achieve this.

   NVO3 implementations may vary. Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   virtual network, while others prefer to use distributed protocols to
   advertise the tenant system location, i.e., NVE location. When a
   tenant network spans across multiple DCs and WANs, each network
   administration domain may use different methods to distribute the
   tenant system locations. Both control plane and data plane
   interworking are necessary.

6. Security Considerations

   Security is a concern. DC operators need to provide a tenant with a
   secured virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic as well as from non-tenant
   traffic; they also need to prevent any tenant application from
   attacking the DC underlying network through the tenant virtual
   network, and one tenant application from attacking another tenant
   application via the DC infrastructure network. For example, a tenant
   application may attempt to generate a large volume of traffic to
   overload the DC underlying network. An NVO3 solution has to address
   these issues.

7. IANA Considerations

   This document does not request any action from IANA.

8. References

8.1. Normative References

   [RFC7364] Narten, T., et al, "Problem Statement: Overlays for
             Network Virtualization", RFC 7364, October 2014.

   [RFC7365] Lasserre, M., Morin, T., et al, "Framework for Data Center
             (DC) Network Virtualization", RFC 7365, October 2014.

8.2. Informative References

   [EVPN]    Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A., and
             J. Uttaro, "BGP MPLS Based Ethernet VPN",
             draft-ietf-l2vpn-evpn-11, work in progress.

   [IEEE 802.1Q] IEEE, "IEEE Standard for Local and metropolitan area
             networks -- Media Access Control (MAC) Bridges and Virtual
             Bridged Local Area Networks", IEEE Std 802.1Q, 2011.

   [NVO3HYVR2NVE] Li, Y., et al, "Hypervisor to NVE Control Plane
             Requirements", draft-ietf-nvo3-hpvr2nve-cp-req-01, work in
             progress.

   [NVGRE]   Sridharan, M., et al, "NVGRE: Network Virtualization using
             Generic Routing Encapsulation",
             draft-sridharan-virtualization-nvgre-07, work in progress.

   [NVO3ARCH] Black, D., et al, "An Architecture for Overlay Networks
             (NVO3)", draft-ietf-nvo3-arch-02, work in progress.

   [NVO3MCAST] Ghanwani, A., "Framework of Supporting Applications
             Specific Multicast in Networks Using NVO3",
             draft-ghanwani-nvo3-app-mcast-framework-02, work in
             progress.

   [RFC4301] Kent, S. and K. Seo, "Security Architecture for the
             Internet Protocol", RFC 4301, December 2005.

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC7348] Mahalingam, M., Dutt, D., et al, "Virtual eXtensible Local
             Area Network (VXLAN): A Framework for Overlaying
             Virtualized Layer 2 Networks over Layer 3 Networks",
             RFC 7348, August 2014.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

Contributors

      Vinay Bannai
      PayPal
      2211 N. First St,
      San Jose, CA 95131
      Phone: +1-408-967-7784
      Email: vbannai@paypal.com

      Ram Krishnan
      Brocade Communications
      San Jose, CA 95134
      Phone: +1-408-406-7890
      Email: ramk@brocade.com

Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, Uma
   Chunduri, and Eric Gray for their review, comments, and suggestions.

Authors' Addresses

   Lucy Yong

   Phone: +1-918-808-1918
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.,
   Mount Laurel, NJ 08054

   Phone : +1-856-792-2801
   E-mail : mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   3000 Hanover Street, Building 20C
   Palo Alto, CA  95014

   Phone: 650-857-5501
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025 US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com