Network Working Group                                          L. Dunbar
Internet Draft                                                 Futurewei
Intended status: Informational                                  A. Malis
Expires: November 1, 2020                                    Independent
                                                            C. Jacquenet
                                                                  Orange
                                                             May 1, 2020

         Networks Connecting to Hybrid Cloud DCs: Gap Analysis
              draft-ietf-rtgwg-net2cloud-gap-analysis-06
Abstract

This document analyzes the technical gaps that may affect the
dynamic connection to workloads and applications hosted in hybrid
Cloud Data Centers from enterprise premises.
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on November 1, 2020.
Copyright Notice

Copyright (c) 2020 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Gap Analysis for Accessing Cloud Resources
      3.1. Multiple PEs connecting to virtual CPEs in Cloud DCs
      3.2. Access Control for workloads in the Cloud DCs
      3.3. NAT Traversal
      3.4. BGP between PEs and remote CPEs via Internet
      3.5. Multicast traffic from/to the remote edges
   4. Gap Analysis of Overlay Edge Node's WAN Port Management
   5. Aggregating VPN paths and Internet paths
      5.1. Control Plane for Overlay over Heterogeneous Networks
      5.2. Using BGP UPDATE Messages
         5.2.1. Lacking SD-WAN Segments Identifier
         5.2.2. Missing attributes in Tunnel-Encap
      5.3. SECURE-L3VPN/EVPN
      5.4. Preventing attacks from Internet-facing ports
   6. Gap Summary
   7. Manageability Considerations
   8. Security Considerations
   9. IANA Considerations
   10. References
      10.1. Normative References
      10.2. Informative References
   11. Acknowledgments
1. Introduction

[Net2Cloud-Problem] describes the problems enterprises face today
when interconnecting their branch offices with dynamic workloads
hosted in third party data centers (a.k.a. Cloud DCs). In
particular, this document analyzes the routing protocols to identify
whether there are any gaps that may impede such interconnection,
which may, for example, justify additional specification effort to
define proper protocol extensions.
underlay networks to get better WAN bandwidth
management, visibility & control. When the underlay is a
private network, traffic may be forwarded without any
additional encryption; when the underlay networks are
public, such as the Internet, some traffic needs to be
encrypted when passing through (depending on user-
provided policies).
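
As a purely illustrative sketch of this per-policy decision (the
policy fields and values below are assumptions, not part of any
specification), an overlay edge node could decide whether to encrypt
a flow as follows:

   # Illustrative only: decide whether to encrypt traffic for a given
   # underlay, based on a hypothetical user-provided policy structure.

   def needs_encryption(underlay_type: str, user_policy: dict) -> bool:
       if underlay_type == "private":
           # Private underlay (e.g., MPLS VPN): forward natively unless the
           # user's policy explicitly asks for encryption everywhere.
           return user_policy.get("always-encrypt", False)
       # Public underlay such as the Internet: encrypt unless the policy
       # explicitly marks the flow as not needing confidentiality.
       return user_policy.get("confidentiality", True)

   print(needs_encryption("private", {}))                           # False
   print(needs_encryption("internet", {"confidentiality": True}))   # True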
3. Gap Analysis for Accessing Cloud Resources
Because of the ephemeral property of the selected Cloud DCs for
specific workloads/Apps, an enterprise or its network service
provider may not have direct physical connections to the Cloud DCs
that are optimal for hosting the enterprise's specific
workloads/Apps. Under those circumstances, an overlay network design
can be an option to interconnect the enterprise's on-premises data
centers & branch offices to its desired Cloud DCs.
However, overlay paths established over the public Internet can have
unpredictable performance, especially over long distances and across
operators' domains. Therefore, it is highly desirable to minimize
the distance or the number of segments that traffic has to traverse
over the public Internet.
The Metro Ethernet Forum's Cloud Service Architecture [MEF-Cloud]
also describes a use case of network operators using Overlay paths
over an LTE network or the public Internet for the last mile access
where the VPN service providers cannot always provide the required
physical infrastructure.
In these scenarios, some overlay edge nodes may not be directly
attached to the PEs that participate in the delivery and the
operation of the enterprise's VPN.
When using an overlay network to connect the enterprise's sites to
the workloads hosted in Cloud DCs, the existing C-PEs at the
enterprise's sites have to be upgraded to connect to this overlay
network. If the workloads hosted in Cloud DCs need to be connected
to many sites, the upgrade process can be very expensive.
[Net2Cloud-Problem] describes a hybrid network approach that extends
the existing MPLS-based VPNs to the Cloud DC workloads over the
access paths that are not under the VPN provider's control. To make
it work properly, a small number of the PEs of the BGP/MPLS VPN can
be designated to connect to the remote workloads via secure IPsec
tunnels. Those designated PEs are shown as fPE (floating PE or
smart PE) in Figure 1. Once the secure IPsec tunnels are
established, the workloads hosted in Cloud DCs can be reached by the
enterprise's VPN without upgrading all of the enterprise's CPEs. The
only CPE that needs to connect to the overlay network would be a
virtualized CPE instantiated within the Cloud DC.
+--------+ +--------+
| Host-a +--+ +----| Host-b |
| | | (') | |
+--------+ | +-----------+ ( ) +--------+
| +-+--+ ++-+ ++-+ +--+-+ (_)
| | CPE|--|PE| |PE+--+ CPE| |
+--| | | | | | | |---+
+-+--+ ++-+ ++-+ +----+
/ | |
/ | MPLS +-+---+ +--+-++--------+
+------+-+ | Network |fPE-1| |CPE || Host |
| Host | | | |- --| || d |
| c | +-----+ +-+---+ +--+-++--------+
+--------+ |fPE-2|-----+
+---+-+ (|)
(|) (|) Overlay
(|) (|) over any access
+=\======+=========+
// \ | Cloud DC \\
// \ ++-----+ \\
+ |
| vCPE |
+-+----+
----+-------+-------+-----
| |
+---+----+ +---+----+
| Remote | | Remote |
| App-1 | | App-2 |
+--------+ +--------+
Figure 1: VPN Extension to Cloud DC
In Figure 1, the optimal Cloud DC to host the workloads (as a
function of the proximity, capacity, pricing, or any other criteria
chosen by the enterprise) does not have a direct connection to the
PEs of the BGP/MPLS VPN that interconnects the enterprise's sites.
3.1. Multiple PEs connecting to virtual CPEs in Cloud DCs
To extend BGP/MPLS VPNs to virtual CPEs in Cloud DCs, it is
necessary to establish secure tunnels (such as IPsec tunnels)
between the PEs and the vCPEs.
Even though a set of PEs can be manually selected for a specific
cloud data center, there are no standard protocols for those PEs to
interact with the vCPEs instantiated in the third party cloud data
centers over insecure networks. Such interactions include exchanging
performance information, route information, etc.
When there is more than one PE available for use (as there should be
for resiliency purposes, or because multiple geographically
scattered Cloud DCs need to be supported), it is not straightforward
to designate an egress PE towards the remote vCPEs on a per-
application basis. It might not be possible for the PEs to recognize
all applications because of the amount of traffic traversing the
PEs.
When there are multiple floating PEs that have established IPsec
tunnels with a remote CPE, the remote CPE can forward outbound
traffic to the optimal PE, which in turn forwards the traffic to
egress PEs to reach the final destinations. However, it is not
straightforward for the ingress PE to select which egress PE to
send traffic to; a possible per-application selection is sketched
after the examples below. For example, in Figure 1:
- fPE-1 is the optimal PE for communication between App-1 <->
Host-a due to latency, pricing or other criteria.
- fPE-2 is the optimal PE for communication between App-1 <->
Host-b.
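
The following Python sketch illustrates one way an ingress fPE (or
its controller) could pick an egress fPE per application; the metric
names, weights, and values are illustrative assumptions and do not
correspond to any specified protocol element.

   # Illustrative only: per-application egress fPE selection based on
   # hypothetical per-path metrics (latency, cost).

   from dataclasses import dataclass

   @dataclass
   class FpeMetrics:
       name: str
       latency_ms: float   # measured/advertised latency towards the destination
       cost: float         # relative pricing of the path

   def select_egress_fpe(candidates, latency_weight=0.7, cost_weight=0.3):
       """Return the candidate fPE with the lowest weighted score."""
       def score(m: FpeMetrics) -> float:
           return latency_weight * m.latency_ms + cost_weight * m.cost
       return min(candidates, key=score)

   # Corresponding to Figure 1: depending on the measured metrics,
   # App-1 <-> Host-a prefers fPE-1 while App-1 <-> Host-b prefers fPE-2.
   to_host_a = [FpeMetrics("fPE-1", 12.0, 5.0), FpeMetrics("fPE-2", 30.0, 4.0)]
   to_host_b = [FpeMetrics("fPE-1", 40.0, 5.0), FpeMetrics("fPE-2", 15.0, 4.0)]
   print(select_egress_fpe(to_host_a).name)   # fPE-1
   print(select_egress_fpe(to_host_b).name)   # fPE-2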
3.2. Access Control for workloads in the Cloud DCs
Access policies for Cloud Resources are widely scattered, and some
of them are not easy to verify and validate. Because there are
multiple parties involved in accessing Cloud Resources, policy
enforcement points are not easily visible for policy refinement,
monitoring, and testing.
The current state of the art for specifying access policies for
Cloud Resources could be improved by having automated and reliable
tools to map the user-friendly (natural language) rules into machine
readable policies and to provide interfaces for enterprises to self-
manage policy enforcement points for their own workloads.
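
As a purely illustrative example of such a mapping (the schema,
names, and values below are assumptions, not an existing cloud
provider interface), a natural-language rule like "only the finance
branch may reach App-1 over HTTPS" could be rendered as a machine-
readable policy entry:

   # Hypothetical machine-readable rendering of a natural-language access
   # rule. The schema (field names and values) is an illustrative assumption.

   finance_app1_policy = {
       "policy-id": "ex-0001",
       "description": "Only the finance branch may reach App-1 over HTTPS",
       "enforcement-point": "vCPE-cloud-dc-1",    # hypothetical name
       "match": {
           "src-prefix": "192.0.2.0/24",          # documentation prefix (RFC 5737)
           "dst-workload": "App-1",
           "protocol": "tcp",
           "dst-port": 443,
       },
       "action": "permit",
       "default-action": "deny",
   }

   def to_acl_line(policy: dict) -> str:
       """Render the policy as a generic ACL-style line for review and testing."""
       m = policy["match"]
       return (f'{policy["action"]} {m["protocol"]} '
               f'{m["src-prefix"]} -> {m["dst-workload"]}:{m["dst-port"]}')

   print(to_acl_line(finance_app1_policy))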
3.3. NAT Traversal
Cloud DCs that only assign private IPv4 addresses to the
instantiated workloads assume that traffic to/from the workloads
usually needs to traverse NATs.

An overlay edge node can solicit a STUN (Simple Traversal of UDP
Through NATs, [RFC3489]) server to learn the NAT properties and the
public IP address and port numbers, so that this information can be
communicated to the relevant peers.
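
For illustration only, the Python sketch below sends a classic STUN
Binding Request and extracts the MAPPED-ADDRESS attribute from the
response; the server name is a placeholder, and retries,
authentication, and error handling are omitted.

   # Minimal sketch of a classic STUN Binding Request (RFC 3489 framing).
   # "stun.example.net" is a placeholder; substitute a reachable STUN server.

   import os
   import socket
   import struct

   def stun_mapped_address(server: str, port: int = 3478, timeout: float = 3.0):
       # 20-byte header: type=0x0001 (Binding Request), length=0, 16-byte txn ID
       request = struct.pack("!HH16s", 0x0001, 0, os.urandom(16))

       with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
           sock.settimeout(timeout)
           sock.sendto(request, (server, port))
           data, _ = sock.recvfrom(2048)

       # Walk the response attributes looking for MAPPED-ADDRESS (type 0x0001)
       msg_len = struct.unpack("!H", data[2:4])[0]
       offset, end = 20, 20 + msg_len
       while offset + 4 <= end:
           attr_type, attr_len = struct.unpack("!HH", data[offset:offset + 4])
           value = data[offset + 4:offset + 4 + attr_len]
           if attr_type == 0x0001:  # reserved(1), family(1), port(2), IPv4(4)
               _, _family, mapped_port = struct.unpack("!BBH", value[:4])
               return socket.inet_ntoa(value[4:8]), mapped_port
           offset += 4 + attr_len + ((4 - attr_len % 4) % 4)  # 32-bit alignment
       return None

   print(stun_mapped_address("stun.example.net"))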
3.4. BGP between PEs and remote CPEs via Internet
Even though an EBGP (external BGP) Multi-Hop design can be used to
connect peers that are not directly connected to each other, there
are still some issues with extending BGP from MPLS VPN PEs to
remote CPEs via any access path (e.g., Internet).
The path between the remote CPEs and VPN PEs that maintain VPN
routes can traverse untrusted segments.
EBGP Multi-hop design requires configuration on both peers, either
manually or via NETCONF from a controller. To use EBGP between a PE
and remote CPEs, the PE has to be manually configured with the
"next-hop" set to the IP address of the CPEs. When remote CPEs,
especially remote virtualized CPEs, are dynamically instantiated or
removed, the configuration of Multi-Hop EBGP on the PE has to be
changed accordingly.
Egress peering engineering (EPE) is not sufficient. Running BGP on
virtualized CPEs in Cloud DCs requires GRE tunnels to be
established first, which in turn requires the remote CPEs to support
address and key management capabilities. RFC 7024 (Virtual Hub &
Spoke) and Hierarchical VPN do not support the required
properties.
Also, there is a need for a mechanism to automatically trigger
configuration changes on PEs when remote CPEs are instantiated,
moved (leading to an IP address change), or deleted.
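
A minimal sketch of the missing automation, assuming a controller
tracks vCPE lifecycle events and pushes the resulting changes to the
PE (e.g., over NETCONF/YANG); the data layout below is an
illustrative assumption rather than an actual PE configuration
schema.

   # Illustrative only: regenerate the multi-hop EBGP neighbor configuration
   # for a designated PE whenever a remote vCPE is instantiated, moved
   # (address change), or deleted. The dictionary layout is an assumption.

   from typing import Optional

   def ebgp_multihop_neighbor(vcpe_ip: str, vcpe_as: int, ttl: int = 255) -> dict:
       """Intended EBGP multi-hop neighbor state for one remote vCPE."""
       return {
           "neighbor-address": vcpe_ip,   # changes whenever the vCPE is re-instantiated
           "peer-as": vcpe_as,
           "ebgp-multihop": {"enabled": True, "multihop-ttl": ttl},
       }

   def on_vcpe_change(old_ip: Optional[str], new_ip: Optional[str], vcpe_as: int) -> list:
       """Return the (operation, data) changes the controller should push to the PE."""
       changes = []
       if old_ip and old_ip != new_ip:
           changes.append(("delete", {"neighbor-address": old_ip}))
       if new_ip and new_ip != old_ip:
           changes.append(("merge", ebgp_multihop_neighbor(new_ip, vcpe_as)))
       return changes

   # A vCPE re-instantiated in the Cloud DC comes up with a new private address:
   print(on_vcpe_change("10.1.1.7", "10.2.3.9", vcpe_as=64512))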
EBGP Multi-hop design does not include a security mechanism by
default. The PE and remote CPEs need secure communication channels
when connecting via the public Internet.
Remote CPEs, if instantiated in Cloud DCs, might have to traverse
NATs to reach the PEs. It is not clear how BGP can be used between
devices located outside the NAT and devices located behind the NAT,
nor is it clear how to configure the Next Hop on the PEs to reach
private IPv4 addresses.
3.5. Multicast traffic from/to the remote edges
Among the multiple floating PEs that are reachable from a remote
CPE, multicast traffic sent by the remote CPE towards the MPLS VPN
can be forwarded back to that remote CPE, because the PE receiving
the multicast packets forwards the multicast/broadcast frames to the
other PEs, which in turn send them to all attached CPEs. This
process may cause traffic loops.

This problem can be solved by selecting one floating PE as the CPE's
Designated Forwarder, similar to TRILL's Appointed Forwarders
[RFC6325]. However, BGP/MPLS VPNs do not have features like TRILL's
Appointed Forwarders.
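
As an illustration of how such a Designated Forwarder could be
chosen deterministically among the candidate floating PEs, consider
the sketch below; the hash-based tie-breaking rule is an assumption
made purely for illustration, not a specified procedure.

   # Illustrative Designated Forwarder (DF) election among candidate fPEs for
   # a given remote CPE. The hash-based rule is an assumption, chosen only so
   # that all parties independently compute the same DF without extra signaling.

   import hashlib

   def elect_designated_forwarder(remote_cpe_id: str, candidate_fpes: list) -> str:
       """All candidates (and the CPE) derive the same DF for this CPE."""
       ranked = sorted(
           candidate_fpes,
           key=lambda fpe: hashlib.sha256(f"{remote_cpe_id}|{fpe}".encode()).hexdigest(),
       )
       return ranked[0]

   fpes = ["fPE-1", "fPE-2"]
   df = elect_designated_forwarder("vCPE-cloud-dc-1", fpes)

   # Only the DF forwards multicast/broadcast frames towards this remote CPE;
   # the other fPEs drop them, which prevents the loop described above.
   for fpe in fpes:
       role = "forward" if fpe == df else "drop"
       print(f"{fpe}: {role} multicast towards vCPE-cloud-dc-1")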
4. Gap Analysis of Overlay Edge Node's WAN Port Management

Very often the Hybrid Cloud DCs are interconnected by overlay
networks that arch over many different types of networks, such as
VPN, public Internet, wireless and wired infrastructures, etc.

Sometimes the enterprises' VPN providers do not have direct access
to the Cloud DCs that host some specific applications or workloads
operated by the enterprise.
   +----+  +---------+  +--+             +--+  +---------+  +----+
   | TN1|--| C1------+--+PE| trusted WAN |PE+--+------D1 |--| TN1|
   +----+  | C2------+--+--+ packets go  +--+  |         |  +----+
           |  C-PE   | natively, without encry |  C-PE   |
           |    C    |                         |    D    |
   +----+  | C3      |                         |         |  +----+
   | TN2|--| C4------+------ Untrusted --------+------D2 |--| TN2|
   +----+  +---------+    without encrypt      +---------+  +----+

     Figure 2: CPEs interconnected by VPN paths and Internet Paths
5.1. Control Plane for Overlay over Heterogeneous Networks

As described in [BGP-SDWAN-Usage], the Control Plane for Overlay
network over heterogeneous networks has three distinct properties:

- WAN Port Property registration to the Overlay Controller.
  o To inform the Overlay controller and authorized peers of
    the WAN port properties of the Edge nodes. When the WAN
    ports are assigned private IPv4 addresses, this step can
5.4. Preventing attacks from Internet-facing ports

When C-PEs have Internet-facing ports, additional security risks are
raised.

To mitigate these security risks, in addition to requiring Anti-DDoS
features on C-PEs, it is necessary for C-PEs to support means to
determine whether traffic sent by remote peers is legitimate, in
particular to prevent spoofing attacks.
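
As a simple illustration of the kind of legitimacy check meant here
(the authorized-peer table and prefix list are hypothetical
examples), an Internet-facing C-PE could drop packets whose source
address does not correspond to an authorized remote peer or an
expected prefix:

   # Illustrative source-validation sketch for an Internet-facing C-PE port.
   # The peer table and prefix list are hypothetical; real deployments would
   # combine this with IPsec/IKE peer authentication and anti-DDoS mechanisms.

   from ipaddress import ip_address, ip_network

   AUTHORIZED_PEERS = {ip_address("198.51.100.10"), ip_address("203.0.113.25")}
   EXPECTED_SOURCE_PREFIXES = [ip_network("198.51.100.0/24"),
                               ip_network("203.0.113.0/24")]

   def accept_packet(src_ip: str, is_tunnel_control: bool) -> bool:
       """Drop traffic that cannot plausibly come from an authorized remote peer."""
       src = ip_address(src_ip)
       if is_tunnel_control:
           # IKE/IPsec control traffic must come from a known peer address.
           return src in AUTHORIZED_PEERS
       # Other traffic must at least fall within expected source prefixes.
       return any(src in prefix for prefix in EXPECTED_SOURCE_PREFIXES)

   print(accept_packet("198.51.100.10", is_tunnel_control=True))   # True
   print(accept_packet("192.0.2.99", is_tunnel_control=False))     # False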
6. Gap Summary

Here is a summary of the technical gaps discussed in this document:

- For Accessing Cloud Resources:

  a) When a remote vCPE can be reached by multiple PEs of one
     provider VPN network, it is not straightforward to designate
     the egress PE towards the remote vCPE on a per-application
     basis.
  b) Automated and reliable tools are needed to map user-friendly
     (natural language) access rules into machine-readable policies
     and to provide interfaces for enterprises to self-manage the
     policy enforcement points for their own workloads.
  c) NAT Traversal: an enterprise's network controller needs to be
     informed of the NAT properties for its workloads in Cloud DCs.
     If the workloads are attached to the enterprise's own vCPEs
     instantiated in the Cloud DCs, this task can be achieved.
  d) Multicast traffic to/from a remote vCPE needs a feature like
     the Appointed Forwarder specified by TRILL to prevent multicast
     data frames from looping around.
  e) Running BGP between PEs and remote CPEs via untrusted networks.
  f) Traffic path management.

- Overlay Edge Node's WAN Port Management: BGP UPDATE messages
  propagate clients' route information but do not distinguish
  network-facing ports.

- Aggregating VPN paths and Internet paths:

  a) The Control Plane for Overlay over Heterogeneous Networks is
     not clearly defined.
  b) BGP UPDATE messages are missing properties:
     - Lacking an SD-WAN Segments Identifier.
     - Missing attributes in Tunnel-Encap.
  c) SECURE-L3VPN/EVPN is not sufficient.
  d) Clear methods for preventing attacks from Internet-facing ports
     are missing.
7. Manageability Considerations

Zero touch provisioning of overlay networks to interconnect Hybrid
Clouds is highly desired. It is necessary for a newly powered up
edge node to establish a secure connection (by means of TLS, DTLS,
etc.) with its controller.
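
For illustration only (the controller address, certificate file
names, and registration message are placeholders, and real zero
touch provisioning also involves bootstrap identity and enrollment
steps not shown here), a newly powered up edge node could establish
its management channel roughly as follows:

   # Minimal sketch of an edge node opening a TLS-protected channel to its
   # controller. "controller.example.net" and the PEM file names are
   # placeholders; bootstrap identity, enrollment, and retries are omitted.

   import socket
   import ssl

   CONTROLLER = ("controller.example.net", 8443)

   context = ssl.create_default_context(cafile="ca-bundle.pem")
   context.minimum_version = ssl.TLSVersion.TLSv1_2
   # Mutual authentication: the edge node presents its own certificate as well.
   context.load_cert_chain(certfile="edge-node.pem", keyfile="edge-node.key")

   with socket.create_connection(CONTROLLER, timeout=10) as raw_sock:
       with context.wrap_socket(raw_sock, server_hostname=CONTROLLER[0]) as tls_sock:
           # Register the node and its WAN port properties with the controller.
           tls_sock.sendall(b'{"msg": "register", "node-id": "edge-42"}\n')
           reply = tls_sock.recv(4096)
           print(reply.decode(errors="replace"))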
8. Security Considerations