Network Working Group                                            L. Yong
Internet Draft                                                   Huawei
Category: Informational                                          M. Toy
                                                                Comcast
                                                               A. Isaac
                                                              Bloomberg
                                                              V. Manral
                                                        Hewlett-Packard
                                                              L. Dunbar
                                                                 Huawei

Expires: November 1, 2013                                    May 1, 2013

             Use Cases for DC Network Virtualization Overlays

                       draft-ietf-nvo3-use-case-01

Abstract

   This document describes general DC NVO3 use cases that may be
   potentially deployed in various data centers and that apply to
   different applications. An application in a DC may be a combination
   of some of the use cases described here.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on November 1, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC-2119 [RFC2119].

Table of Contents

   1. Introduction
      1.1. Contributors
      1.2. Terminology
   2. Basic Virtual Networks in a Data Center
   3. Interconnecting DC Virtual Network and External Networks
      3.1. DC Virtual Network Access via Internet
      3.2. DC VN and Enterprise Sites interconnected via SP WAN
   4. DC Applications Using NVO3
      4.1. Supporting Multi Technologies and Applications in a DC
      4.2. Tenant Network with Multi-Subnets or across multi DCs
      4.3. Virtual Data Center (vDC)
   5. OAM Considerations
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses

1. Introduction

   Server virtualization has changed the IT industry in terms of
   efficiency, cost, and the speed of providing new applications and/or
   services. However, the problems in today's data center networks
   hinder the support of elastic cloud services and dynamic virtual
   tenant networks [NVO3PRBM]. The goal of DC Network Virtualization
   Overlays, i.e. NVO3, is to decouple the communication among tenant
   systems from DC physical networks and to allow one physical network
   infrastructure to provide: 1) traffic isolation among tenant virtual
   networks over the same physical network; 2) independent address
   space in each virtual network and address isolation from the
   infrastructure's; 3) flexible VM placement and movement from one
   server to another without any physical network limitations. These
   characteristics will help address the issues that hinder true
   virtualization in today's data centers [NVO3PRBM].

   Although NVO3 enables a true virtualization environment, an NVO3
   solution has to address the communication between a virtual network
   and a physical network. This is because 1) many DCs that need to
   provide network virtualization are currently running over physical
   networks, and the migration will happen in steps; 2) many DC
   applications are served to Internet and/or corporate users that run
   directly on physical networks; 3) some applications, such as Big
   Data analytics, are CPU bound and may not need the virtualization
   capability.

   This document describes general NVO3 use cases that apply to various
   data centers. Three types of the use cases described here are:

   o  A virtual network connects many tenant systems within a Data
      Center and forms one L2 or L3 communication domain. A virtual
      network segregates its traffic from others and allows the VMs in
      the network to move from one server to another. This case may be
      used for DC internal applications that constitute the DC East-
      West traffic.

   o  A DC provider offers a secure DC service to an enterprise
      customer and/or Internet users. In these cases, the enterprise
      customer may use a traditional VPN provided by a carrier or an
      IPsec tunnel over the Internet to connect to an NVO3 network
      within a provider DC. This constitutes DC North-South traffic.

   o  A DC provider may use NVO3 and other network technologies to
      construct different topologies or zones for a tenant network. A
      variety of cloud applications may require the network service
      appliances, virtual compute, storage, and networking. In this
      case, NVO3 provides the virtual networking functions for the
      applications.

   The document uses the architecture reference model defined in
   [NVO3FRWK] to describe the use cases.

1.1. Contributors

      Vinay Bannai
      PayPal
      2211 N. First St,
      San Jose, CA 95131
      Phone: +1-408-967-7784
      Email: vbannai@paypal.com

      Ram Krishnan
      Brocade Communications
      San Jose, CA 95134
      Phone: +1-408-406-7890
      Email: ramk@brocade.com

1.2.  Terminology

   This document uses the terminologies defined in [NVO3FRWK],
   [RFC4364]. Some additional terms used in the document are listed
   here.

   CUG: Closed User Group

   L2 VNI: L2 Virtual Network Instance

   L3 VNI: L3 Virtual Network Instance

   ARP: Address Resolution Protocol

   CPE: Customer Premise Equipment

   DMZ: Demilitarized Zone

   DNS: Domain Name Service

   NAT: Network Address Translation

   VIRB: Virtual Integrated Routing/Bridging

   Note that a virtual network in this document is a network
   virtualization overlay instance.

2. Basic Virtual Networks in a Data Center

   A virtual network may exist within a DC. The network enables
   communication among Tenant Systems (TSs) that are in a Closed User
   Group (CUG). A TS may be a physical server or a virtual machine (VM)
   on a server. The network virtual edge (NVE) may co-exist with the
   Tenant Systems, i.e. on an end device, or exist on a different
   device, e.g. a top of rack switch (ToR). A virtual network has a
   unique virtual network identifier (which may be locally or globally
   unique) for an NVE to properly differentiate it from other virtual
   networks.

   TSs attached to the same NVE are not necessarily in the same CUG,
   i.e. in the same virtual network. Multiple CUGs can be constructed
   so that the proper policies are enforced when the TSs in one CUG
   communicate with the TSs in other CUGs. An NVE provides the
   reachability for the Tenant Systems in a CUG, and may also hold the
   policies and provide reachability for Tenant Systems in different
   CUGs (see Section 4.2). Furthermore, DC operators may construct many
   tenant networks that have no communication with each other at all.
   In this case, each tenant network may use its own address space.
   Note that one tenant network may contain one or more CUGs.

   A Tenant System may be configured with multiple addresses and
   participate in multiple virtual networks, i.e. use a different
   address in each virtual network. For example, a TS may be a NAT
   gateway or a firewall server for multiple CUGs.

   Network Virtualization Overlay in this context means that the
   virtual networks are built over the DC infrastructure network via
   tunnels, i.e. a tunnel between any pair of NVEs. This architecture
   decouples the tenant system address scheme from the infrastructure
   address space, which brings great flexibility for VM placement and
   mobility. It also makes the transit nodes in the infrastructure
   unaware of the existence of the virtual networks. One tunnel may
   carry the traffic belonging to different virtual networks; a virtual
   network identifier is used for traffic segregation within a tunnel.
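
   The following Python sketch is a non-normative illustration of this
   forwarding model; all class and field names are hypothetical and are
   not taken from any NVO3 solution. It shows an NVE holding one
   forwarding table per virtual network instance and tagging each
   packet with the virtual network identifier before tunneling it to
   the remote NVE, so that one tunnel can carry several virtual
   networks whose address spaces may overlap.

    from dataclasses import dataclass, field

    @dataclass
    class VNI:
        """Forwarding table of one virtual network instance."""
        vn_id: int                  # virtual network identifier
        # tenant system address -> IP of the (remote) NVE it is on
        ts_to_nve: dict = field(default_factory=dict)

    @dataclass
    class Packet:
        vn_id: int                  # identifier in the overlay header
        dst_ts: str                 # tenant system destination address
        payload: bytes

    class NVE:
        def __init__(self, underlay_ip):
            self.underlay_ip = underlay_ip
            self.vnis = {}          # vn_id -> VNI

        def add_vni(self, vni):
            self.vnis[vni.vn_id] = vni

        def send(self, vn_id, dst_ts, payload):
            # look up only the table of this virtual network
            vni = self.vnis[vn_id]
            remote_nve = vni.ts_to_nve[dst_ts]
            # one tunnel (self -> remote_nve) may carry packets of
            # many VNs; vn_id in the header keeps them segregated
            return remote_nve, Packet(vn_id, dst_ts, payload)

    nve1 = NVE("10.0.0.1")
    nve1.add_vni(VNI(1001, {"ts-b": "10.0.0.2"}))
    nve1.add_vni(VNI(2002, {"ts-b": "10.0.0.3"}))  # address reuse
    print(nve1.send(1001, "ts-b", b"hello"))
    print(nve1.send(2002, "ts-b", b"hello"))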

   A virtual network may be an L2 or L3 domain. An NVE may be a member
   of several virtual networks, each of which is L2 or L3. A virtual
   network may carry unicast traffic and/or broadcast/multicast/unknown
   traffic from/to tenant systems. An NVE may use p2p tunnels or a p2mp
   tunnel to transport broadcast or multicast traffic, or may use other
   mechanisms [NVO3MCAST].

   It is worth mentioning two distinct cases here. The first is that
   the TS and the NVE are co-located on the same end device, which
   means that the NVE can be made aware of the TS state at any time via
   an internal API. The second is that the TS and the NVE are remotely
   connected, i.e. connected via a switched network or a point-to-point
   link. In this case, a protocol is necessary for the NVE to know the
   TS state.

   One virtual network may have many NVE members, each of which many
   TSs may attach to. TS dynamic placement and mobility result in
   frequent changes of the TS-to-NVE bindings. The TS reachability
   update mechanism MUST be fast enough not to cause any service
   interruption. The capability of supporting many TSs in a tenant
   network and many tenant networks is critical for an NVO3 solution.

   If a virtual network spans across multiple DC sites, one design is
   to allow the network to seamlessly span across the sites without DC
   gateway routers' termination. In this case, the tunnel between a
   pair of NVEs may in turn be tunneled over other intermediate tunnels
   over the Internet or other WANs, or the intra-DC and inter-DC
   tunnels may be stitched together to form an end-to-end tunnel
   between the two NVEs in different DCs. The latter is described in
   Section 3.2. Section 4.2 describes other options.

3. Interconnecting DC Virtual Network and External Networks

   Customers (an enterprise or individuals) who want to utilize a DC
   provider's compute and storage resources to run their applications
   need to access their systems hosted in the DC through the Internet
   or Service Providers' WANs. A DC provider may construct an NVO3
   virtual network to which all the resources designated for such a
   customer connect, and allow the customer to access the systems via
   that network. This, in turn, becomes the case of interconnecting a
   DC NVO3 network and external networks via the Internet or WANs. Two
   cases are described here.

3.1. DC Virtual Network Access via Internet

   A user or an enterprise customer connects securely to a DC virtual
   network via the Internet. Figure 1 illustrates this case. A virtual
   network is configured on NVE1 and NVE2, and the two NVEs are
   connected via an L3 tunnel in the Data Center. A set of tenant
   systems are attached to NVE1 on a server. NVE2 resides on a DC
   Gateway device. NVE2 terminates the tunnel and uses the VNID on the
   packet to pass the packet to the corresponding VN GW entity on the
   DC GW. A user or customer can access their systems, i.e. TS1 or TSn,
   in the DC via the Internet by using an IPsec tunnel [RFC4301]. The
   IPsec tunnel is established between the VN GW and the user machine
   or the CPE at the enterprise location. The VN GW provides IPsec
   functionality such as authentication and encryption, as well as the
   mapping to the right virtual network entity on the DC GW. Note that
   1) some VN GW functions such as firewall and load balancer may also
   be performed by locally attached network appliance devices; 2) the
   virtual network in the DC may use a different address space than the
   external users, in which case the VN GW serves the NAT function.

          +--------------------+
          |  TS1   ...    TSn  |
          |   |            |   |
          |  ++------------++  |
          |  |    L3 VNI    |  |
     NVE1 |  +------+-------+  |
          +---------+----------+ Server
                    |
                    | L3 Tunnel
                    |
          +---------+----------+ DC GW
          |  +------+-------+  |
     NVE2 |  |    L3 VNI    |  |
          |  +------+-------+  |
          |  +------+-------+  |            External User
          |  |    VN GW     |  |              +------+
          |  +------+-------+  |              |  PC  |
          +---------+----------+              +---+--+
                    |                             |
                    |  IPsec                   .--+--.
                    |  Tunnel                 (       )
                    + * * * * * * * * * * * *( Internet )
                                              (       )
                                               '-----'

                DC Provider Site

          Figure 1 DC Virtual Network Access via Internet

3.2. DC VN and PE2 in its IP/MPLS network. Enterprise Sites interconnected via SP WAN

   An enterprise company may lease some compute resources from a DC
   provider to run some of its applications. For example, the company
   may run its Web applications at the DC provider site but run the
   backend applications in its own DCs. The Web applications and the
   backend applications need to communicate privately. The DC provider
   may construct an NVO3 network to connect all the VMs running the
   enterprise Web applications. The enterprise company may buy a p2p
   private tunnel such as a VPWS from an SP to interconnect its site
   and the NVO3 network in the provider DC site. A protocol is
   necessary for exchanging reachability between the two peering
   points, and the traffic is carried over the tunnel. If an enterprise
   has multiple sites, it may buy multiple p2p tunnels to form a mesh
   interconnection among the sites and the DC provider site. This
   requires each site to peer with all other sites for route
   distribution.

   Another way to achieve multi-site interconnection is to use Service
   Provider (SP) VPN services, in which each site only peers with the
   SP PE site. A DC Provider and a VPN SP may build an NVO3 network
   (VN) and a VPN independently. The VN provides the networking for all
   the related TSes within the provider DC. The VPN interconnects
   several enterprise sites, i.e. VPN sites. The DC provider and the
   VPN SP further connect the VN and the VPN at the DC GW/ASBR and the
   SP PE/ASBR. Several options for the interconnection of a VN and a
   VPN are described in RFC4364 [RFC4364]. In Option A with VRF-LITE
   [VRF-LITE], both the DC GW and the SP PE maintain a routing/
   forwarding table and perform the table lookup in forwarding. In
   Option B, the DC GW and the SP PE do not maintain the forwarding
   table; they only maintain the VN and VPN identifier mapping, and
   swap the identifier on the packet in the forwarding process. In
   Option C, the DC GW and the SP PE use the same identifier for the VN
   and the VPN, and just perform tunnel stitching, i.e. change the
   tunnel end points. Each option has its pros/cons (see RFC4364) and
   has been deployed in SP networks depending on the applications. The
   BGP protocols may be used in these options for route distribution.
   Note that if the provider DC is the SP's Data Center, the DC GW and
   the PE in this case may be on one edge device.
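
   The following Python sketch contrasts, in a purely illustrative way,
   the state kept at the DC GW/ASBR in the three interconnection
   options; it is not a description of the [RFC4364] procedures, and
   all data structures and values are hypothetical.

    def option_a(vn_routes, vn_vlan, vn_id, dst_prefix):
        # Option A (back-to-back VRF, e.g. VRF-LITE): the GW keeps a
        # per-VN routing table and forwards over a per-VN VLAN.
        next_hop = vn_routes[vn_id][dst_prefix]
        return ("vlan", vn_vlan[vn_id], next_hop)

    def option_b(vn_to_vpn_label, vn_id, payload):
        # Option B: no per-VN routing table at the interconnect, only
        # a VN identifier <-> VPN label mapping; the label is swapped
        # and the payload is not inspected.
        return ("labeled", vn_to_vpn_label[vn_id], payload)

    def option_c(remote_pe, vn_id, payload):
        # Option C: VN and VPN share the same end-to-end identifier;
        # the GW only stitches the tunnel, i.e. changes its endpoint.
        return ("tunnel", remote_pe, vn_id, payload)

    # hypothetical state for one virtual network (vn_id 1001)
    print(option_a({1001: {"10.1.1.0/24": "pe-vrf-blue"}},
                   {1001: 300}, 1001, "10.1.1.0/24"))
    print(option_b({1001: 16001}, 1001, b"ip-packet"))
    print(option_c("pe1.example.net", 1001, b"ip-packet"))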

   This configuration allows the enterprise networks to communicate
   with the tenant systems attached to the VN in a provider DC without
   interfering with the DC provider's underlying physical networks and
   the other virtual networks in the DC. The enterprise may use its own
   address space on the tenant systems attached to the VN. The DC
   provider can manage the VMs and storage attached to the VN for the
   enterprise customer. The enterprise customer can determine and run
   its applications on the VMs. See Section 4 for more details.

   The interesting feature in this use case is that the VN and the
   compute resources are managed by the DC provider. The DC operator
   can place them at any location without notifying the enterprise and
   the WAN SP, because the DC physical network is completely isolated
   from the carrier and enterprise networks. Furthermore, the DC
   operator may move the VMs assigned to the enterprise from one server
   to another in the DC without the enterprise customer's awareness,
   i.e. with no impact on the enterprise's 'live' applications running
   on these resources. Such advanced features bring DC providers great
   benefits in serving these kinds of applications, but also add some
   requirements for NVO3 [NVO3PRBM].

4. DC Applications Using NVO3

   NVO3 gives DC operators flexibility in designing and deploying
   different applications in an end-to-end virtualization environment,
   where the operators need not worry about the constraints of the
   physical network configuration in the Data Center. A DC provider may
   use NVO3 in various ways, and may also use it in conjunction with
   physical networks in the DC, for many reasons. This section
   highlights some use cases but is not limited to them.

4.1. Supporting Multi Technologies and Applications in a DC

   Most likely the servers deployed in a large data center are rolled
   in at different times and may have different capacities/features.
   Some servers may be virtualized, some may not; some may be equipped
   with virtual switches, some may not. For the ones equipped with
   hypervisor-based virtual switches, some may support VxLAN [VXLAN]
   encapsulation, some may support NVGRE encapsulation [NVGRE], and
   some may not support any type of encapsulation. To construct a
   tenant virtual network among these servers and the ToR switches, the
   operator may construct one virtual network with an overlay and one
   without an overlay, or two overlay virtual networks with different
   implementations. For example, one overlay virtual network may use
   VxLAN encapsulation and another virtual network may use traditional
   VLAN, or another overlay virtual network may use NVGRE.

   The gateway device, or a virtual gateway on a device, may be used.
   The gateway participates in both virtual networks. It performs the
   packet encapsulation/decapsulation and may also perform address
   mapping or translation, etc.
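
   As an illustration of such a translation, the Python sketch below
   rewrites a frame between a VxLAN-encapsulated virtual network and a
   traditional VLAN using a static VNI-to-VLAN mapping. The header
   layouts follow [VXLAN] and IEEE 802.1Q, but the sketch is a toy
   model, not an implementation of either; the mapping values are
   hypothetical.

    import struct

    VNI_TO_VLAN = {5001: 100}

    def strip_vxlan(vxlan_payload):
        """Remove the 8-byte VXLAN header, return (vni, inner frame)."""
        flags, vni_rsvd = struct.unpack("!II", vxlan_payload[:8])
        vni = vni_rsvd >> 8           # VNI is in the top 24 bits
        return vni, vxlan_payload[8:]

    def add_dot1q(frame, vlan_id):
        """Insert an 802.1Q tag after the two MAC addresses."""
        tag = struct.pack("!HH", 0x8100, vlan_id & 0x0FFF)
        return frame[:12] + tag + frame[12:]

    def vxlan_to_vlan(vxlan_payload):
        vni, inner = strip_vxlan(vxlan_payload)
        return add_dot1q(inner, VNI_TO_VLAN[vni])

    # build a dummy inner Ethernet frame and a VXLAN payload around
    # it, then translate it to a VLAN-tagged frame
    inner = bytes(12) + struct.pack("!H", 0x0800) + b"ip-payload"
    vxlan = struct.pack("!II", 0x08 << 24, 5001 << 8) + inner
    print(vxlan_to_vlan(vxlan).hex())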

   A data center may also be constructed with multi-tier zones. Each
   zone has different access permissions and runs different
   applications. For example, a three-tier zone design has a front zone
   (Web tier) with Web applications, a mid zone (application tier) with
   service applications such as payment and booking, and a back zone
   (database tier) with data. External users are only able to
   communicate with the Web applications in the front zone. In this
   case, the communication between the zones MUST pass through a
   security GW/firewall. Network virtualization may be used in each
   zone. If individual zones use different implementations, the GW
   needs to support these implementations as well.

4.2. Tenant Network with Multi-Subnets or across multi DCs

   A tenant network may contain multiple subnets, and DC operators may
   construct multiple tenant networks. An access policy for inter-
   subnet traffic is often necessary. To ease the policy management,
   the policies may be placed at some designated gateway devices only.
   Such a design requires that the inter-subnet traffic MUST be sent to
   one of the gateways first for policy checking. However, this may
   cause traffic hairpinning at the gateway in a DC. It is desirable
   that an NVE can hold some policies and be able to forward inter-
   subnet traffic directly. To reduce the NVE burden, a hybrid design
   may be deployed, i.e. an NVE performs the forwarding for selected
   inter-subnet traffic and the designated GW performs it for the rest;
   for example, each NVE performs the inter-subnet forwarding within a
   tenant, and the designated GW is used for the inter-subnet traffic
   from/to a different tenant network.
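
   The hybrid forwarding decision can be illustrated with the following
   Python sketch, in which an NVE forwards the selected inter-subnet
   traffic directly and sends everything else to the designated
   gateway; the subnet pairs and device names are hypothetical.

    from ipaddress import ip_address, ip_network

    class HybridNVE:
        def __init__(self, designated_gw):
            self.designated_gw = designated_gw
            self.direct_pairs = set()  # (src, dst) subnets routed here
            self.routes = {}           # dst subnet -> remote NVE

        def allow_direct(self, src_subnet, dst_subnet):
            self.direct_pairs.add(
                (ip_network(src_subnet), ip_network(dst_subnet)))

        def add_route(self, dst_subnet, remote_nve):
            self.routes[ip_network(dst_subnet)] = remote_nve

        def forward(self, src_ip, dst_ip):
            src, dst = ip_address(src_ip), ip_address(dst_ip)
            for s, d in self.direct_pairs:
                if src in s and dst in d:
                    # selected inter-subnet traffic: no hairpin via GW
                    for subnet, nve in self.routes.items():
                        if dst in subnet:
                            return ("direct", nve)
            # everything else is sent to the designated gateway
            return ("via-gw", self.designated_gw)

    nve = HybridNVE("gw-1")
    nve.allow_direct("192.0.2.0/25", "192.0.2.128/25")
    nve.add_route("192.0.2.128/25", "nve-7")
    print(nve.forward("192.0.2.10", "192.0.2.200"))    # direct
    print(nve.forward("192.0.2.10", "198.51.100.9"))   # via the GW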

   A tenant network may span across multiple Data Centers over
   distance. DC operators may want to use an L2VN within each DC and an
   L3VN between DCs for a tenant network. This is very similar to
   today's DC physical network configuration. L2 bridging has
   simplicity and endpoint awareness, while L3 routing has advantages
   in policy-based routing, aggregation, and scalability. For this
   configuration, the virtual L2/L3 gateway can be implemented on the
   DC GW device. Figure 2 illustrates this configuration.
   illustrates this configuration.

   Figure 2 depicts two DC sites. Site A constructs an L2VN with NVE1,
   NVE2, and NVE5. NVE1 and NVE2 reside on the servers where the tenant
   systems are created, and NVE5 resides on the DC GW device. Site Z
   has a similar configuration, with NVE3 and NVE4 on the servers and
   NVE6 on the DC GW. An L3VN is configured between NVE5 at site A and
   NVE6 at site Z. An internal Virtual Integrated Routing and Bridging
   (VIRB) function is used between the L2VNI and the L3VNI on NVE5 and
   NVE6. The L2VNI is the MAC/NVE mapping table and the L3VNI is the IP
   prefix/NVE mapping table. Note that a VNI also has the mapping of TS
   and VAP at the local NVE. A packet arriving at NVE5 from the L2VN is
   decapsulated, converted into an IP packet, and then encapsulated and
   sent to site Z. The gateway uses the ARP protocol to obtain the
   MAC/IP address mapping.

   Note that the L2VNs and the L3VN in Figure 2 are encapsulated and
   carried within the DC and across the WAN, respectively.

   NVE5/DCGW+------------+                  +-----------+NVE6/DCGW
             | +-----+    | '''''''''''''''' |   +-----+ |
            | |L3VNI+----+'    L3VN        '+---+L3VNI| |
            | +--+--+    | '''''''''''''''' |   +--+--+ |
             |    |VIRB   |                  |  VIRB|    |
             | +--+---+   |                  |  +---+--+ |
             | |L2VNIs|   |                  |  |L2VNIs| |
             | +--+---+   |                  |  +---+--+ |
            +----+-------+                  +------+----+
             ''''|''''''''''                 ''''''|'''''''
            '     L2VN      '               '     L2VN     '
      NVE1/S ''/'''''''''\'' NVE2/S    NVE3/S '''/'''''''\'' NVE4/S
        +-----+---+  +----+----+        +------+--+ +----+----+
        | +--+--+ |  | +--+--+ |        | +---+-+ | | +--+--+ |
        | |L2VNI| |  | |L2VNI| |        | |L2VNI| | | |L2VNI| |
        | ++---++ |  | ++---++ |        | ++---++ | | ++---++ |
        +--+---+--+  +--+---+--+        +--+---+--+ +--+---+--+
           |...|        |...|              |...|       |...|

             Tenant Systems                  Tenant Systems

                DC Site A                    DC Site Z

           Figure 2 Tenant Virtual Network with Bridging/Routing
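
   The L2VNI/L3VNI lookup performed by the gateway NVEs in Figure 2 can
   be illustrated with the following Python sketch; the MAC/NVE and IP
   prefix/NVE tables, the VIRB decision, and all names used are
   hypothetical simplifications of the behavior described above.

    from ipaddress import ip_address, ip_network

    class L2VNI:
        def __init__(self):
            self.mac_to_nve = {}      # local L2VN reachability

    class L3VNI:
        def __init__(self):
            self.prefix_to_nve = {}   # inter-site reachability

    class DCGatewayNVE:
        def __init__(self, l2vni, l3vni):
            self.l2vni, self.l3vni = l2vni, l3vni

        def virb_forward(self, dst_mac, dst_ip):
            if dst_mac in self.l2vni.mac_to_nve:
                # destination is local: stay in the L2VN
                return ("l2vn", self.l2vni.mac_to_nve[dst_mac])
            # otherwise route via the L3VNI (longest-prefix match)
            # and re-encapsulate toward the remote site's DC GW
            dst = ip_address(dst_ip)
            matches = [p for p in self.l3vni.prefix_to_nve if dst in p]
            best = max(matches, key=lambda p: p.prefixlen)
            return ("l3vn", self.l3vni.prefix_to_nve[best])

    l2, l3 = L2VNI(), L3VNI()
    l2.mac_to_nve["00:00:5e:00:53:01"] = "nve1"
    l3.prefix_to_nve[ip_network("203.0.113.0/24")] = "nve6"  # site Z
    gw = DCGatewayNVE(l2, l3)
    print(gw.virb_forward("00:00:5e:00:53:01", "192.0.2.1"))
    print(gw.virb_forward("00:00:5e:00:53:99", "203.0.113.9"))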

4.3. Virtual Data Center (vDC)

   Enterprise DCs today may often use several routers, switches, and
   network appliance devices to construct their internal network, DMZ,
   and external network access. A DC Provider may offer a virtual DC
   service to an enterprise customer and run enterprise applications
   such as website/emails as well. Instead of using many hardware
   devices to do this, with the NVO3 overlay technology, DC operators
   may build such vDCs on top of a common network infrastructure for
   many customers and run network service applications on a per-vDC
   basis. The network service applications such as firewall, gateway,
   DNS, and load balancer can be designed per vDC. The network
   virtualization overlay further enables potential vDC mobility when a
   customer moves to a different location, because the tenant systems'
   and network appliances' configuration can be completely decoupled
   from the infrastructure network.

   Figure 3 below illustrates one such scenario. For simple
   illustration, it only shows the L3VN or L2VNs as virtual routers or
   switches. In this case, the DC operator constructs several L2VNs
   (L2VNx, L2VNy, and L2VNz in Figure 3) to group the end tenant
   systems together on a per-application basis, and creates an L3VNa
   for the internal routing. A network device (which may be a VM or a
   server) runs firewall/gateway applications and connects to the L3VNa
   and the Internet. A Load Balancer (LB) is used in L2VNx. A VPWS p2p
   tunnel is also built between the gateway and the enterprise router.
   The design runs the enterprise Web/Mail/Voice applications at the
   provider DC site; it lets the users at the enterprise site access
   the applications via the VPN tunnel, and access the Internet via a
   gateway at the enterprise site; and it lets Internet users access
   the applications via the gateway in the provider DC.

   The enterprise operators can also use the VPN tunnel or IPsec over
   the Internet to access the vDC for management purposes. The
   firewall/gateway provides application-level and packet-level gateway
   functions and/or the NAT function.

   The enterprise customer decides which applications are accessed by
   the intranet only and which by both the intranet and the extranet;
   the DC operators then design and configure the proper security
   policies and gateway functions. DC operators may further use
   multiple zones in a vDC for security, and/or set different QoS
   levels for the different applications based on customer
   applications.

   This use case requires the NVO3 solution to provide the DC operator
   with an easy way to create a VN and NVEs for any design, to quickly
   assign TSs to the VNI on the NVE they attach to, to set up the
   virtual topology, to place or configure policies on an NVE or on the
   VMs that run network services, and to support VM mobility.
   Furthermore, the DC operator needs to view the tenant network
   topology, to know the tenant node capabilities, and to be able to
   configure a network service on a tenant node. The DC provider may
   further let a tenant manage the vDC itself.

                         Internet                      ^ Internet
                                                       |
                            ^                        +-+----+
                            |                        |  GW  |
                            |                        +--+---+
                            |                           |
                    +-------+--------+                +-+----+
                    |FireWall/Gateway+--- VPWS/MPLS---+Router|
                    +-------+--------+                +-+--+-+
                            |                           |  |
                         ...+...                        |..|
                  +-----: L3VNa :--------+              LANs
                   |      .......         |
                 +-+-+        |           |
                |LB |        |           |         Enterprise Site
                +-+-+        |           |
               ...+...    ...+...     ...+...
              : L2VNx :  : L2VNy :   : L2VNz :
               .......    .......     .......
                 |..|       |..|        |..|
                 |  |       |  |        |  |
               Web Apps   Mail Apps    VoIP Apps

                        Provider DC Site

      * firewall/gateway and Load Balancer (LB) may run on a server
        or VMs

                 Figure 3 Virtual Data Center by Using NVO3

5. OAM Considerations

   NVO3 brings the ability for a DC provider to segregate tenant
   traffic. A DC provider needs to manage and maintain NVO3 instances.
   Similarly, the tenant needs to be informed about underlying network
   failures impacting tenant applications, or the tenant network must
   be able to detect both overlay and underlay network failures and
   build some resiliency mechanisms.

   Various OAM and SOAM tools and procedures are defined in [IEEE
   802.1ag], [ITU-T Y.1731], [RFC4378], [RFC5880], and [ITU-T Y.1564]
   for L2 and L3 networks, including continuity check, loopback, link
   trace, testing, alarms such as AIS/RDI, and on-demand and periodic
   measurements. These procedures may be applied to tenant overlay
   networks and used by tenants, not only for proactive maintenance but
   also to ensure support of Service Level Agreements (SLAs).

   As the tunnel traverses different networks, OAM messages need to be
   translated at the edge of each network to ensure end-to-end OAM.

   It is important that failures at lower layers which do not affect an
   NVO3 instance be suppressed.

6. Summary

   This document describes some general potential use cases of NVO3 in
   DCs. The combination of these cases should give operators the
   flexibility and capability to design more sophisticated cases for
   various purposes.

   The key requirements for NVO3 are 1) traffic segregation; 2)
   supporting a large number of virtual networks in a common
   infrastructure; 3) supporting highly distributed virtual networks
   with sparse memberships; 4) VM mobility; 5) automatic or easy
   construction of an NVE and its associated TSs; 6) security; and 7)
   NVO3 management [NVO3PRBM].

   The difference between other overlay network technologies and NVO3
   is that the client edges of an NVO3 network are individual and
   virtualized hosts, not network sites or LANs. NVO3 enables these
   virtual hosts to communicate in a true virtual environment without
   constraints in the physical networks.

   NVO3 allows individual tenant virtual networks to use their own
   address space and isolates the space from the network infrastructure.
   The approach not only segregates the traffic of multiple tenants on
   a common infrastructure but also makes VM placement and movement
   easier.

   DC services may vary from infrastructure as a service (IaaS) and
   platform as a service (PaaS) to software as a service (SaaS), in
   which the network virtualization overlay is just a portion of an
   application service. NVO3 decouples the service
   construction/configuration from the DC network infrastructure
   configuration, and helps the deployment of higher-level services.

   NVO3's underlying network provides the tunneling between NVEs so
   that two NVEs appear as one hop to each other. Many tunneling
   technologies can serve this function. The tunneling may in turn be
   tunneled over other intermediate tunnels over the Internet or other
   WANs. It is also possible that intra DC and inter DC tunnels are
   stitched together to form an end-to-end tunnel between two NVEs.

   A DC virtual network may be accessed via an external network in a
   secure way. Many existing technologies can help achieve this.

   NVO3 implementations may vary. Some DC operators prefer to use a
   centralized controller to manage tenant system reachability in a
   tenant network, while others prefer to use distributed protocols to
   advertise the tenant system locations, i.e. the attached NVEs. For
   migration and other special requirements, different solutions may
   apply to one tenant network in a DC. When a tenant network spans
   across multiple DCs and WANs, each network administration domain may
   use different methods to distribute the tenant system locations.
   Both control plane and data plane interworking are then necessary.

7. Security Considerations

   Security is a concern. DC operators need to provide a tenant with a
   secured virtual network, which means that one tenant's traffic is
   isolated from other tenants' traffic as well as from non-tenant
   traffic; they also need to prevent any tenant application from
   attacking the DC underlying network through the tenant virtual
   network, and to prevent one tenant application from attacking
   another tenant application via the DC networks. For example, a
   tenant application may attempt to generate a large volume of traffic
   to overload the DC underlying network. An NVO3 solution has to
   address these issues.

8. IANA Considerations

   This document does not request any action from IANA.

9. Acknowledgements

   The authors would like to thank Sue Hares, Young Lee, David Black,
   Pedro Marques, Mike McBride, David McDysan, Randy Bush, and Uma
   Chunduri for their review, comments, and suggestions.

10. References

10.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [IEEE 802.1ag]  "Virtual Bridged Local Area Networks - Amendment 5:
             Connectivity Fault Management", December 2007.

   [ITU-T G.8013/Y.1731] OAM Functions and Mechanisms for Ethernet
             based Networks, 2011.

   [ITU-T Y.1564] "Ethernet service activation test methodology", 2011.

   [RFC4378] Allan, D., Nadeau, T., "A Framework for Multi-Protocol
             Label Switching (MPLS) Operations and Management (OAM)",
             RFC4378, February 2006

   [RFC4301] Kent, S., "Security Architecture for the Internet
             Protocol", rfc4301, December 2005

   [RFC4664] Andersson, L., "Framework for Layer 2 Virtual Private
             Networks (L2VPNs)", rfc4664, September 2006

   [RFC4797] Rekhter, Y., et al, "Use of Provider Edge to Provider Edge
             (PE-PE) Generic Routing Encapsulation (GRE) or IP in
             BGP/MPLS IP Virtual Private Networks", RFC4797, January
             2007

   [RFC5641] McGill, N., "Layer 2 Tunneling Protocol Version 3 (L2TPv3)
              Extended Circuit Status Values", RFC 5641, April 2009.

   [RFC5880] Katz, D. and Ward, D., "Bidirectional Forwarding Detection
             (BFD)", rfc5880, June 2010.

10.2. Informative References

   [NVGRE]  Sridharan, M., et al., "NVGRE: Network Virtualization using
              Generic Routing Encapsulation", draft-sridharan-
              virtualization-nvgre-02, work in progress.

   [NVO3PRBM] Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", draft-ietf-nvo3-overlay-problem-
              statement-02, work in progress.

   [NVO3FRWK] Lasserre, M., Morin, T., et al., "Framework for DC
              Network Virtualization", draft-ietf-nvo3-framework-02,
              work in progress.

   [NVO3MCAST] Ghanwani, A., "Multicast Issues in Networks Using NVO3",
             draft-ghanwani-nvo3-mcast-issues-00, work in progress.

   [VRF-LITE] Cisco, "Configuring VRF-lite", http://www.cisco.com

   [VXLAN]  Mahalingam, M., Dutt, D., et al., "VXLAN: A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", draft-mahalingam-dutt-dcops-vxlan-03, work in
              progress.

 Authors' Addresses

   Lucy Yong
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025, US

   Phone: +1-469-277-5837
   Email: lucy.yong@huawei.com

   Mehmet Toy
   Comcast
   1800 Bishops Gate Blvd.,
   Mount Laurel, NJ 08054

   Phone : +1-856-792-2801
   E-mail : mehmet_toy@cable.comcast.com

   Aldrin Isaac
   Bloomberg
   E-mail: aldrin.isaac@gmail.com

   Vishwas Manral
   Hewlett-Packard Corp.
   3000 Hanover Street, Building 20C
   Palo Alto, CA  95014

   Phone: 650-857-5501
   Email: vishwas.manral@hp.com

   Linda Dunbar
   Huawei Technologies,
   5340 Legacy Dr.
   Plano, TX 75025, US

   Phone: +1-469-277-5840
   Email: linda.dunbar@huawei.com