--- 1/draft-ietf-nvo3-framework-00.txt 2012-10-19 16:14:46.937425614 +0200 +++ 2/draft-ietf-nvo3-framework-01.txt 2012-10-19 16:14:46.985425725 +0200 @@ -5,41 +5,41 @@ Expires: March 2013 Thomas Morin France Telecom Orange Nabil Bitar Verizon Yakov Rekhter Juniper - September 11, 2012 + October 19, 2012 Framework for DC Network Virtualization - draft-ietf-nvo3-framework-00.txt + draft-ietf-nvo3-framework-01.txt Status of this Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on March 11, 2013. + This Internet-Draft will expire on April 19, 2013. Copyright Notice Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents @@ -54,58 +54,60 @@ Several IETF drafts relate to the use of overlay networks to support large scale virtual data centers. This draft provides a framework for Network Virtualization over L3 (NVO3) and is intended to help plan a set of work items in order to provide a complete solution set. It defines a logical view of the main components with the intention of streamlining the terminology and focusing the solution set. Table of Contents - 1. Introduction.................................................3 + 1. Introduction................................................3 1.1. Conventions used in this document.......................4 1.2. General terminology.....................................4 1.3. DC network architecture.................................6 1.4. Tenant networking view..................................7 - 2. Reference Models.............................................8 + 2. Reference Models............................................8 2.1. Generic Reference Model.................................8 2.2. NVE Reference Model....................................10 - 2.3. NVE Service Types......................................11 - 2.3.1. L2 NVE providing Ethernet LAN-like service........11 - 2.3.2. L3 NVE providing IP/VRF-like service..............11 - 3. Functional components.......................................11 + 2.3. NVE Service Types......................................12 + 2.3.1. L2 NVE providing Ethernet LAN-like service.........12 + 2.3.2. L3 NVE providing IP/VRF-like service..............12 + 3. Functional components.......................................12 3.1. Generic service virtualization components..............12 - 3.1.1. Virtual Access Points (VAPs)......................12 - 3.1.2. Virtual Network Instance (VNI)....................12 + 3.1.1. Virtual Access Points (VAPs)......................13 + 3.1.2. Virtual Network Instance (VNI)....................13 3.1.3. Overlay Modules and VN Context....................13 - 3.1.4. Tunnel Overlays and Encapsulation options.........14 + 3.1.4. 
Tunnel Overlays and Encapsulation options..........14 3.1.5. Control Plane Components..........................14 - 3.1.5.1. Auto-provisioning/Service discovery.............14 - 3.1.5.2. Address advertisement and tunnel mapping........15 - 3.1.5.3. Tunnel management...............................15 - 3.2. Service Overlay Topologies.............................16 - 4. Key aspects of overlay networks.............................16 - 4.1. Pros & Cons............................................16 - 4.2. Overlay issues to consider.............................17 - 4.2.1. Data plane vs Control plane driven................17 - 4.2.2. Coordination between data plane and control plane..18 + 3.1.5.1. Distributed vs Centralized Control Plane.........15 + 3.1.5.2. Auto-provisioning/Service discovery.............15 + 3.1.5.3. Address advertisement and tunnel mapping.........16 + 3.1.5.4. Tunnel management...............................17 + 3.2. Multi-homing..........................................17 + 3.3. Service Overlay Topologies.............................18 + 4. Key aspects of overlay networks.............................18 + 4.1. Pros & Cons...........................................18 + 4.2. Overlay issues to consider.............................19 + 4.2.1. Data plane vs Control plane driven................19 + 4.2.2. Coordination between data plane and control plane..20 4.2.3. Handling Broadcast, Unknown Unicast and Multicast (BUM) - traffic..................................................18 - 4.2.4. Path MTU..........................................19 - 4.2.5. NVE location trade-offs...........................19 - 4.2.6. Interaction between network overlays and underlays.20 - 5. Security Considerations.....................................21 - 6. IANA Considerations.........................................21 - 7. References..................................................21 - 7.1. Normative References...................................21 - 7.2. Informative References.................................21 - 8. Acknowledgments.............................................22 + traffic.................................................20 + 4.2.4. Path MTU.........................................21 + 4.2.5. NVE location trade-offs...........................21 + 4.2.6. Interaction between network overlays and underlays.22 + 5. Security Considerations.....................................23 + 6. IANA Considerations........................................23 + 7. References.................................................23 + 7.1. Normative References...................................23 + 7.2. Informative References.................................23 + 8. Acknowledgments............................................24 1. Introduction This document provides a framework for Data Center Network Virtualization over L3 tunnels. This framework is intended to aid in standardizing protocols and mechanisms to support large scale network virtualization for data centers. Several IETF drafts relate to the use of overlay networks for data centers. @@ -136,24 +138,24 @@ 1.2. General terminology This document uses the following terminology: NVE: Network Virtualization Edge. It is a network entity that sits on the edge of the NVO3 network. It implements network virtualization functions that allow for L2 and/or L3 tenant separation and for hiding tenant addressing information (MAC and IP addresses). 
An NVE could be implemented as part of a virtual switch within a hypervisor, a physical switch or router, a Network Service - Appliance or even be embedded within an End Station. + Appliance.
VN: Virtual Network. This is a virtual L2 or L3 domain that belongs - a tenant. + to a tenant.
VNI: Virtual Network Instance. This is one instance of a virtual overlay network. Two Virtual Networks are isolated from one another and may use overlapping addresses.
Virtual Network Context or VN Context: Field that is part of the overlay encapsulation header which allows the encapsulated frame to be delivered to the appropriate virtual network endpoint by the egress NVE. The egress NVE uses this field to determine the appropriate virtual network context in which to process the packet.
@@ -177,50 +179,60 @@
Data Center (DC): A physical complex housing physical servers, network switches and routers, Network Service Appliances and networked storage. The purpose of a Data Center is to provide application and/or compute and/or storage services. One such service is virtualized data center services, also known as Infrastructure as a Service.
Virtual Data Center or Virtual DC: A container for virtualized compute, storage and network services. Managed by a single tenant, a - Virtual DC can contain multiple VNs and multiple Tenant End Systems - that are connected to one or more of these VNs. + Virtual DC can contain multiple VNs and multiple Tenant Systems that + are connected to one or more of these VNs.
VM: Virtual Machine. Several Virtual Machines can share the resources of a single physical computer server using the services of a Hypervisor (see below definition).
Hypervisor: Server virtualization software running on a physical compute server that hosts Virtual Machines. The hypervisor provides shared compute/memory/storage and network connectivity to the VMs that it hosts. Hypervisors often embed a Virtual Switch (see below).
Virtual Switch: A function within a Hypervisor (typically implemented in software) that provides similar services to a physical Ethernet switch. It switches Ethernet frames between VMs' virtual NICs within the same physical server, or between a VM and a physical NIC card connecting the server to a physical Ethernet switch. It also enforces network isolation between VMs that should not communicate with each other.
- Tenant: A customer who consumes virtualized data center services - offered by a cloud service provider. A single tenant may consume one - or more Virtual Data Centers hosted by the same cloud service - provider. + Tenant: In a DC, a tenant refers to a customer that could be an + organization within an enterprise, or an enterprise with a set of DC + compute, storage and network resources associated with it.
- Tenant End System: It defines an end system of a particular tenant, - which can be for instance a virtual machine (VM), a non-virtualized - server, or a physical appliance. + Tenant System: A physical or virtual system that can play the role + of a host, or a forwarding element such as a router, switch, + firewall, etc. It belongs to a single tenant and connects to one or + more VNs of that tenant.
+ + End device: A physical system to which networking service is + provided. Examples include hosts (e.g. server or server blade), + storage systems (e.g. file servers, iSCSI storage systems) and + network devices (e.g. firewall, load-balancer, IPSec gateway). An + end device may include internal networking functionality that + interconnects the device's components (e.g. 
virtual switches that + interconnects VMs running on the same server). NVE functionality may + be implemented as part of that internal networking. ELAN: MEF ELAN, multipoint to multipoint Ethernet service + EVPN: Ethernet VPN as defined in [EVPN] 1.3. DC network architecture A generic architecture for Data Centers is depicted in Figure 1: ,---------. ,' `. ( IP/MPLS WAN ) `. ,' @@ -233,22 +245,22 @@ ( ' '.--. .-.' Intra-DC ' ( network ) ( .'-' '--'._.'. )\ \ / / '--' \ \ / / | | \ \ +---+--+ +-`.+--+ +--+----+ | ToR | | ToR | | ToR | +-+--`.+ +-+-`.-+ +-+--+--+ - .' \ .' \ .' `. - __/_ _i./ i./_ _\__ + / \ / \ / \ + __/_ \ / \ /_ _\__ '--------' '--------' '--------' '--------' : End : : End : : End : : End : : Device : : Device : : Device : : Device : '--------' '--------' '--------' '--------' Figure 1 : A Generic Architecture for Data Centers An example of multi-tier DC network architecture is presented in this figure. It provides a view of physical components inside a DC. @@ -260,30 +272,20 @@ also service virtualization. In some DC architectures, it is possible that some tier layers provide L2 and/or L3 services, are collapsed, and that Internet connectivity, inter-DC connectivity and VPN support are handled by a smaller number of nodes. Nevertheless, one can assume that the functional blocks fit with the architecture above. The following components can be present in a DC: - o End Device: a DC resource to which the networking service is - provided. End Device may be a compute resource (server or - server blade), storage component or a network appliance - (firewall, load-balancer, IPsec gateway). Alternatively, the - End Device may include software based networking functions used - to interconnect multiple hosts. An example of soft networking - is the virtual switch in the server blades, used to - interconnect multiple virtual machines (VMs). End Device may be - single or multi-homed to the Top of Rack switches (ToRs). - o Top of Rack (ToR): Hardware-based Ethernet switch aggregating all Ethernet links from the End Devices in a rack representing the entry point in the physical DC network for the hosts. ToRs may also provide routing functionality, virtual IP network connectivity, or Layer2 tunneling over IP for instance. ToRs are usually multi-homed to switches in the Intra-DC network. Other deployment scenarios may use an intermediate Blade Switch before the ToR or an EoR (End of Row) switch to provide similar function as a ToR. @@ -291,170 +293,182 @@ switches aggregating multiple ToRs. Core switches are usually Ethernet switches but can also support routing capabilities. o DC GW: Gateway to the outside world providing DC Interconnect and connectivity to Internet and VPN customers. In the current DC network model, this may be simply a Router connected to the Internet and/or an IPVPN/L2VPN PE. Some network implementations may dedicate DC GWs for different connectivity types (e.g., a DC GW for Internet, and another for VPN). + Note that End Devices may be single or multi-homed to ToRs. + 1.4. Tenant networking view The DC network architecture is used to provide L2 and/or L3 service connectivity to each tenant. An example is depicted in Figure 2: +----- L3 Infrastructure ----+ | | - ,--+-'. ;--+--. - ..... Rtr1 )...... . Rtr2 ) - | '-----' | '-----' + ,--+--. ,--+--. + .....( Rtr1 )...... ( Rtr2 ) + | `-----' | `-----' | Tenant1 |LAN12 Tenant1| |LAN11 ....|........ |LAN13 - '':'''''''':' | | '':'''''''':' - ,'. ,'. ,+. ,+. ,'. ,'. + .............. | | .............. + | | | | | | + ,-. 
,-. ,-. ,-. ,-. ,-. (VM )....(VM ) (VM )... (VM ) (VM )....(VM ) `-' `-' `-' `-' `-' `-' Figure 2 : Logical Service connectivity for a single tenant In this example one or more L3 contexts and one or more LANs (e.g., one per application type) running on DC switches are assigned for DC tenant 1. For a multi-tenant DC, a virtualized version of this type of service connectivity needs to be provided for each tenant by the Network Virtualization solution. 2. Reference Models 2.1. Generic Reference Model The following diagram shows a DC reference model for network - virtualization using Layer3 overlays where edge devices provide a - logical interconnect between Tenant End Systems that belong to - specific tenant network. + virtualization using Layer3 overlays where NVEs provide a logical + interconnect between Tenant Systems that belong to specific tenant + network. +--------+ +--------+ - | Tenant | | Tenant | - | End +--+ +---| End | - | System | | | | System | - +--------+ | ................... | +--------+ - | +-+--+ +--+-+ | + | Tenant +--+ +----| Tenant | + | System | | (') | System | + +--------+ | ................... ( ) +--------+ + | +-+--+ +--+-+ (_) | | NV | | NV | | - +--|Edge| |Edge|--+ + +--|Edge| |Edge|---+ +-+--+ +--+-+ - / . L3 Overlay . \ - +--------+ / . Network . \ +--------+ - | Tenant +--+ . . +----| Tenant | - | End | . . | End | - | System | . +----+ . | System | - +--------+ .....| NV |........ +--------+ + / . . + / . L3 Overlay +--+-++--------+ + +--------+ / . Network | NV || Tenant | + | Tenant +--+ . |Edge|| System | + | System | . +----+ +--+-++--------+ + +--------+ .....| NV |........ |Edge| +----+ | | - +--------+ - | Tenant | - | End | - | System | - +--------+ + ===================== + | | + +--------+ +--------+ + | Tenant | | Tenant | + | System | | System | + +--------+ +--------+ Figure 3 : Generic reference model for DC network virtualization over a Layer3 infrastructure + A Tenant System can be attached to a Network Virtualization Edge + (NVE) node in several ways: + + - locally, by being co-located i.e. resident in the same device + + - remotely, via a point-to-point connection or a switched network + (e.g. Ethernet) + + When an NVE is local, the state of Tenant Systems can be provided + without protocol assistance. For instance, the operational status of + a VM can be communicated via a local API. When an NVE is remote, the + state of Tenant Systems needs to be exchanged via a data or control + plane protocol, or via a management entity. + The functional components in this picture do not necessarily map directly with the physical components described in Figure 1. For example, an End Device can be a server blade with VMs and - virtual switch, i.e. the VM is the Tenant End System and the NVE + virtual switch, i.e. the VM is the Tenant System and the NVE functions may be performed by the virtual switch and/or the - hypervisor. + hypervisor. In this case, the Tenant System and NVE function are co- + located. Another example is the case where an End Device can be a traditional physical server (no VMs, no virtual switch), i.e. the server is the - Tenant End System and the NVE functions may be performed by the ToR. + Tenant System and the NVE function may be performed by the ToR. Other End Devices in this category are Physical Network Appliances or Storage Systems. - A Tenant End System attaches to a Network Virtualization Edge (NVE) - node, either directly or via a switched network (typically - Ethernet). 
- The NVE implements network virtualization functions that allow for L2 and/or L3 tenant separation and for hiding tenant addressing information (MAC and IP addresses), tenant-related control plane activity and service contexts from the Routed Backbone nodes. Core nodes utilize L3 techniques to interconnect NVE nodes in support of the overlay network. These devices perform forwarding based on outer L3 tunnel header, and generally do not maintain per tenant-service state albeit some applications (e.g., multicast) may require control plane or forwarding plane information that pertain to a tenant, group of tenants, tenant service or a set of services that belong to one or more tunnels. When such tenant or tenant- service related information is maintained in the core, overlay virtualization provides knobs to control that information. 2.2. NVE Reference Model - The NVE is composed of a tenant service instance that Tenant End + The NVE is composed of a Virtual Network instance that Tenant Systems interface with and an overlay module that provides tunneling overlay functions (e.g. encapsulation/decapsulation of tenant traffic from/to the tenant forwarding instance, tenant identification and mapping, etc), as described in figure 4: +------- L3 Network ------+ | | | Tunnel Overlay | +------------+---------+ +---------+------------+ | +----------+-------+ | | +---------+--------+ | | | Overlay Module | | | | Overlay Module | | | +---------+--------+ | | +---------+--------+ | | |VN context| | VN context| | | | | | | | | +--------+-------+ | | +--------+-------+ | | | |VNI| . |VNI| | | | |VNI| . |VNI| | NVE1 | +-+------------+-+ | | +-+-----------+--+ | NVE2 | | VAPs | | | | VAPs | | - +----+------------+----+ +----+------------+----+ + +----+------------+----+ +----+-----------+-----+ | | | | - -------+------------+-----------------+------------+------- + -------+------------+-----------------+-----------+------- | | Tenant | | | | Service IF | | - Tenant End Systems Tenant End Systems + Tenant Systems Tenant Systems Figure 4 : Generic reference model for NV Edge Note that some NVE functions (e.g. data plane and control plane functions) may reside in one device or may be implemented separately in different devices. For example, the NVE functionality could reside solely on the End Devices, on the ToRs or on both the End Devices and the ToRs. In the - latter case we say that the the End Device NVE component acts as the - NVE Spoke, and ToRs act as NVE hubs. Tenant End Systems will - interface with the tenant service instances maintained on the NVE - spokes, and tenant service instances maintained on the NVE spokes - will interface with the tenant service instances maintained on the - NVE hubs. + latter case we say that the End Device NVE component acts as the NVE + Spoke, and ToRs act as NVE hubs. Tenant Systems will interface with + VNIs maintained on the NVE spokes, and VNIs maintained on the NVE + spokes will interface with VNIs maintained on the NVE hubs. 2.3. NVE Service Types NVE components may be used to provide different types of virtualized service connectivity. This section defines the service types and associated attributes 2.3.1. L2 NVE providing Ethernet LAN-like service L2 NVE implements Ethernet LAN emulation (ELAN), an Ethernet based - multipoint service where the Tenant End Systems appear to be + multipoint service where the Tenant Systems appear to be interconnected by a LAN environment over a set of L3 tunnels. 
It provides a per tenant virtual switching instance with MAC addressing isolation and L3 tunnel encapsulation across the core.
2.3.2. L3 NVE providing IP/VRF-like service
Virtualized IP routing and forwarding is similar from a service definition perspective to IETF IP VPN (e.g., BGP/MPLS IPVPN and IPsec VPNs). It provides a per tenant routing instance with addressing isolation and L3 tunnel encapsulation across the core.
@@ -485,30 +499,32 @@ | | | | | | | +-------+-------+ | | +-------+-------+ | | ||VNI| ... |VNI|| | | ||VNI| ... |VNI|| | NVE1 | +-+-----------+-+ | | +-+-----------+-+ | NVE2 | | VAPs | | | | VAPs | | +----+-----------+----+ +----+-----------+----+ | | | | -----+-----------+-----------------+-----------+----- | | Tenant | | | | Service IF | | - Tenant End Systems Tenant End Systems + Tenant Systems Tenant Systems
Figure 5 : Generic reference model for NV Edge
3.1.1. Virtual Access Points (VAPs)
- Tenant End Systems are connected to the VNI Instance through Virtual - Access Points (VAPs). The VAPs can be in reality physical ports on a - ToR or virtual ports identified through logical interface - identifiers (VLANs, internal VSwitch Interface ID leading to a VM). + Tenant Systems are connected to the VNI Instance through Virtual + Access Points (VAPs). + + The VAPs can be physical ports or virtual ports identified through + logical interface identifiers (VLANs, internal VSwitch Interface ID + leading to a VM).
3.1.2. Virtual Network Instance (VNI)
The VNI represents a set of configuration attributes defining access and tunnel policies and (L2 and/or L3) forwarding functions. Per tenant FIB tables and control plane protocol instances are used to maintain separate private contexts between tenants. Hence tenants are free to use their own addressing schemes without concerns about address overlapping with other tenants.
@@ -574,48 +590,78 @@ . Auto-provisioning/Service discovery . Address advertisement and tunnel mapping . Tunnel management
A control plane component can be an on-net control protocol or a management control entity.
- 3.1.5.1. Auto-provisioning/Service discovery + 3.1.5.1. Distributed vs Centralized Control Plane
- NVEs must be able to select the appropriate VNI for each Tenant End + A control/management plane entity can be centralized or distributed. + Both approaches have been used extensively in the past. The routing + model of the Internet is a good example of a distributed approach. + Transport networks have usually used a centralized approach to + manage transport paths.
+ + It is also possible to combine the two approaches, i.e. using a + hybrid model. A global view of network state can have many benefits + but it does not preclude the use of distributed protocols within the + network. Centralized controllers provide a facility to maintain + global state and distribute that state to the network, which in + combination with distributed protocols can aid in achieving greater + network efficiencies and improving reliability and robustness. + Domain and/or deployment specific constraints define the balance + between centralized and distributed approaches.
+ + On one hand, a control plane module can reside in every NVE. This is + how routing control plane modules are implemented in routers. At the + same time, an external controller can manage a group of NVEs via an + agent sitting in each NVE. This is how an SDN controller could + communicate with the nodes it controls, via OpenFlow for instance. 
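   As a purely illustrative sketch (not part of this framework), the
   following Python fragment shows how a per-VNI mapping table on an
   NVE could be fed either by a distributed protocol exchange with peer
   NVEs or by an agent receiving state pushed from a centralized
   controller, as in the hybrid model described above. The class and
   method names (VniTable, DistributedProtocolHandler, ControllerAgent)
   are hypothetical and do not correspond to any defined protocol.

      # Illustrative sketch only; names are hypothetical, not normative.
      # A per-VNI table mapping tenant addresses to remote NVE tunnel
      # endpoints, which can be populated from more than one source.

      class VniTable:
          def __init__(self, vn_context):
              self.vn_context = vn_context   # VN Context used in the overlay header
              self.mappings = {}             # tenant address -> (remote NVE, source)

          def update(self, tenant_addr, remote_nve, source):
              self.mappings[tenant_addr] = (remote_nve, source)

          def lookup(self, tenant_addr):
              entry = self.mappings.get(tenant_addr)
              return entry[0] if entry else None   # None: unknown destination

      class DistributedProtocolHandler:
          # Distributed model: advertisements received from peer NVEs.
          def __init__(self, table):
              self.table = table

          def on_advertisement(self, tenant_addr, remote_nve):
              self.table.update(tenant_addr, remote_nve, source="peer-nve")

      class ControllerAgent:
          # Centralized model: mappings pushed by an external controller.
          def __init__(self, table):
              self.table = table

          def on_push(self, entries):
              for tenant_addr, remote_nve in entries:
                  self.table.update(tenant_addr, remote_nve, source="controller")

      # Hybrid model: both sources feed the same table.
      table = VniTable(vn_context=0x1234)
      DistributedProtocolHandler(table).on_advertisement("00:1b:21:aa:bb:cc", "192.0.2.1")
      ControllerAgent(table).on_push([("00:1b:21:dd:ee:ff", "192.0.2.2")])
      assert table.lookup("00:1b:21:dd:ee:ff") == "192.0.2.2"

   Either source populates the same table used by the data plane, which
   is what allows the two models to be combined in a single deployment.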
+ + In the case where a centralized control plane is preferred, the + controller will need to be distributed to more than one node for + redundancy. Depending upon the size of the DC domain, hence the + number of NVEs to manage, it should be possible to use several + external controllers. Inter-controller communication will thus be + necessary for scalability and redundancy. + + 3.1.5.2. Auto-provisioning/Service discovery + + NVEs must be able to select the appropriate VNI for each Tenant System. This is based on state information that is often provided by external entities. For example, in a VM environment, this information is provided by compute management systems, since these are the only entities that have visibility on which VM belongs to which tenant. - A mechanism for communicating this information between Tenant End + A mechanism for communicating this information between Tenant Systems and the local NVE is required. As a result the VAPs are - created and mapped to the appropriate Tenant Instance. + created and mapped to the appropriate VNI. Depending upon the implementation, this control interface can be - implemented using an auto-discovery protocol between Tenant End - Systems and their local NVE or through management entities. + implemented using an auto-discovery protocol between Tenant Systems + and their local NVE or through management entities. When a protocol is used, appropriate security and authentication - mechanisms to verify that Tenant End System information is not - spoofed or altered are required. This is one critical aspect for - providing integrity and tenant isolation in the system. + mechanisms to verify that Tenant System information is not spoofed + or altered are required. This is one critical aspect for providing + integrity and tenant isolation in the system. - Another control plane protocol can also be used to advertize NVE - tenant service instance (tenant and service type provided to the - tenant) to other NVEs. Alternatively, management control entities - can also be used to perform these functions. + Another control plane protocol can also be used to advertize + supported VNs to other NVEs. Alternatively, management control + entities can also be used to perform these functions. - 3.1.5.2. Address advertisement and tunnel mapping + 3.1.5.3. Address advertisement and tunnel mapping As traffic reaches an ingress NVE, a lookup is performed to determine which tunnel the packet needs to be sent to. It is then encapsulated with a tunnel header containing the destination address of the egress overlay node. Intermediate nodes (between the ingress and egress NVEs) switch or route traffic based upon the outer destination address. One key step in this process consists of mapping a final destination address to the proper tunnel. NVEs are responsible for maintaining @@ -625,33 +671,66 @@ When a control plane protocol is used to distribute address advertisement and tunneling information, the auto- provisioning/Service discovery could be accomplished by the same protocol. In this scenario, the auto-provisioning/Service discovery could be combined with (be inferred from) the address advertisement and tunnel mapping. Furthermore, a control plane protocol that carries both MAC and IP addresses eliminates the need for ARP, and hence addresses one of the issues with explosive ARP handling. - 3.1.5.3. Tunnel management + 3.1.5.4. Tunnel management A control plane protocol may be required to exchange tunnel state information. 
This may include setting up tunnels and/or providing tunnel state information. This applies to both unicast and multicast tunnels. For instance, it may be necessary to provide active/standby status information between NVEs, up/down status information, pruning/grafting information for multicast tunnels, etc.
- 3.2. Service Overlay Topologies + 3.2. Multi-homing
+ + Multi-homing techniques can be used to increase the reliability of + an NVO3 network. It is also important to ensure that physical + diversity in an NVO3 network is taken into account to avoid single + points of failure.
+ + Multi-homing can be enabled in various nodes, from tenant systems + into ToRs, ToRs into core switches/routers, and core nodes into DC + GWs.
+ + The NVO3 underlay nodes (i.e. from NVEs to DC GWs) rely on IP + routing and/or ECMP techniques as the means to re-route traffic upon + failures.
+ + Tenant systems can be either L2 or L3 nodes. In the former case + (L2), techniques such as LAG or STP for instance can be used. In the + latter case (L3), it is possible that no dynamic routing protocol is + enabled. Tenant systems can be multi-homed to a remote NVE using + several interfaces (physical NICs or vNICs) with an IP address per + interface, either to the same NVO3 network or to different NVO3 + networks. When one of the links fails, the corresponding IP address + is not reachable but the other interfaces can still be used. When a + tenant system is co-located with an NVE, IP routing can be relied + upon to handle routing over diverse links to ToRs.
+ + External connectivity is handled by two or more NVO3 gateways. Each + gateway is connected to a different domain (e.g. ISP) and runs BGP + multi-homing. They serve as an access point to external networks + such as VPNs or the Internet. When a connection to an upstream + router is lost, the alternative connection is used and the failed + route is withdrawn.
+ + 3.3. Service Overlay Topologies
A number of service topologies may be used to optimize the service connectivity and to address NVE performance limitations. The topology described in Figure 3 suggests the use of a tunnel mesh between the NVEs where each tenant instance is one hop away from a service processing perspective. Partial mesh topologies and an NVE hierarchy may be used where certain NVEs may act as service transit points.
@@ -673,21 +752,21 @@ in the core network. o Tunnels are used to aggregate traffic and hence offer the advantage of minimizing the amount of forwarding state required within the underlay network o Decoupling of the overlay addresses (MAC and IP) used by VMs from the underlay network. This offers a clear separation between addresses used within the overlay and the underlay networks and it enables the use of overlapping address spaces - by Tenant End Systems + by Tenant Systems o Support of a large number of virtual network identifiers
Overlay networks also create several challenges: o Overlay networks have no control of underlay networks and lack critical network information o Overlays typically probe the network to measure link properties, such as available bandwidth or packet loss rate. It is difficult to accurately evaluate network
@@ -726,31 +805,32 @@ Dynamic data plane learning implies that flooding of unknown destinations be supported and hence implies that broadcast and/or multicast be supported. Multicasting in the core network for dynamic learning may lead to significant scalability limitations. Specific forwarding rules must be enforced to prevent loops from happening. 
This can be achieved using a spanning tree, a shortest path tree, or a split-horizon mesh. It should be noted that the amount of state to be distributed is dependent upon network topology and the number of virtual machines. - Different forms of caching can also be utilized to minimize state - distribution between the various elements. + distribution between the various elements. The control plane should + not require an NVE to maintain the locations of all the tenant + systems whose VNs are not present on the NVE. 4.2.2. Coordination between data plane and control plane For an L2 NVE, the NVE needs to be able to determine MAC addresses - of the end systems present on a VAP (for instance, dataplane - learning may be relied upon for this purpose). For an L3 NVE, the - NVE needs to be able to determine IP addresses of the end systems - present on a VAP. + of the end systems present on a VAP. This can be achieved via + dataplane learning or a control plane. For an L3 NVE, the NVE needs + to be able to determine IP addresses of the end systems present on a + VAP. In both cases, coordination with the NVE control protocol is needed such that when the NVE determines that the set of addresses behind a VAP has changed, it triggers the local NVE control plane to distribute this information to its peers. 4.2.3. Handling Broadcast, Unknown Unicast and Multicast (BUM) traffic There are two techniques to support packet replication needed for broadcast, unknown unicast and multicast: @@ -783,42 +863,41 @@ 4.2.4. Path MTU When using overlay tunneling, an outer header is added to the original frame. This can cause the MTU of the path to the egress tunnel endpoint to be exceeded. In this section, we will only consider the case of an IP overlay. It is usually not desirable to rely on IP fragmentation for performance reasons. Ideally, the interface MTU as seen by a Tenant - End System is adjusted such that no fragmentation is needed. TCP - will adjust its maximum segment size accordingly. + System is adjusted such that no fragmentation is needed. TCP will + adjust its maximum segment size accordingly. It is possible for the MTU to be configured manually or to be discovered dynamically. Various Path MTU discovery techniques exist in order to determine the proper MTU size to use: o Classical ICMP-based MTU Path Discovery [RFC1191] [RFC1981] o - Tenant End Systems rely on ICMP messages to discover the - MTU of the end-to-end path to its destination. This method - is not always possible, such as when traversing middle - boxes (e.g. firewalls) which disable ICMP for security - reasons + Tenant Systems rely on ICMP messages to discover the MTU of + the end-to-end path to its destination. This method is not + always possible, such as when traversing middle boxes + (e.g. firewalls) which disable ICMP for security reasons o Extended MTU Path Discovery techniques such as defined in [RFC4821] It is also possible to rely on the overlay layer to perform segmentation and reassembly operations without relying on the Tenant - End Systems to know about the end-to-end MTU. The assumption is that + Systems to know about the end-to-end MTU. The assumption is that some hardware assist is available on the NVE node to perform such SAR operations. However, fragmentation by the overlay layer can lead to performance and congestion issues due to TCP dynamics and might require new congestion avoidance mechanisms from then underlay network [FLOYD]. 
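   As a purely illustrative example of the MTU adjustment discussed in
   this section, the short Python fragment below computes the largest
   tenant frame that fits without fragmentation once an IPv4/UDP-based
   overlay encapsulation is added. The 8-byte overlay header size is an
   assumption made only for this example, since this framework does not
   mandate any particular encapsulation.

      # Illustrative sketch only; header sizes below are assumptions,
      # not normative values from this framework.
      OUTER_IPV4_HEADER = 20   # bytes, outer IPv4 header without options
      OUTER_UDP_HEADER  = 8    # bytes, outer UDP header
      OVERLAY_HEADER    = 8    # bytes, assumed header carrying the VN Context

      def tenant_mtu(underlay_mtu):
          # Largest tenant frame that fits in one underlay packet.
          return underlay_mtu - (OUTER_IPV4_HEADER + OUTER_UDP_HEADER + OVERLAY_HEADER)

      print(tenant_mtu(1500))   # 1464: tenant interface MTU must be lowered
      print(tenant_mtu(1600))   # 1564: a 1500-byte tenant MTU fits unchanged

   The second case corresponds to an underlay provisioned with a larger
   MTU, as discussed in the following paragraph.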
Finally, the underlay network may be designed in such a way that the MTU can accommodate the extra tunnel overhead.
4.2.5. NVE location trade-offs
@@ -865,26 +944,35 @@ Better visibility between overlays and underlays can be achieved by providing mechanisms to exchange information about: o Performance metrics (throughput, delay, loss, jitter) o Cost metrics
5. Security Considerations
- The tenant to overlay mapping function can introduce significant - security risks if appropriate protocols are not used that can - support mutual authentication. + As a framework document, no protocols are being defined and hence no + specific security considerations are raised.
- No other new security issues are introduced beyond those described - already in the related L2VPN and L3VPN RFCs. + The following security aspects shall be discussed in respective + solutions documents:
+ + Traffic isolation between NVO3 domains is guaranteed by the use of + per tenant FIB tables (VNIs).
+ + The creation of overlay networks and the tenant to overlay mapping + function can introduce significant security risks. When dynamic + protocols are used, authentication should be supported. When a + centralized controller is used, access to that controller should be + restricted to authorized personnel. This can be achieved via login + authentication.
6. IANA Considerations
IANA does not need to take any action for this draft.
7. References
7.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
@@ -914,20 +1002,22 @@ [RFC4821] Mathis, M. et al, "Packetization Layer Path MTU Discovery", RFC4821, March 2007
8. Acknowledgments
In addition to the authors, the following people have contributed to this document:
Dimitrios Stiliadis, Rotem Salomonovitch, Alcatel-Lucent
+ Lucy Yong, Huawei +
This document was prepared using 2-Word-v2.0.template.dot.
Authors' Addresses
Marc Lasserre Alcatel-Lucent Email: marc.lasserre@alcatel-lucent.com
Florin Balus Alcatel-Lucent