Transport Area Working Group                            B. Briscoe, Ed.
Internet-Draft                                               Independent
Intended status: Informational                            K. De Schepper
Expires: April 30, 2021                                  Nokia Bell Labs
                                                        M. Bagnulo Braun
                                        Universidad Carlos III de Madrid
                                                                G. White
                                                               CableLabs
                                                        October 27, 2020


   Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service:
                              Architecture
                      draft-ietf-tsvwg-l4s-arch-07
Abstract

This document describes the L4S architecture, which enables Internet
applications to achieve Low queuing Latency, Low Loss, and Scalable
throughput (L4S). The insight on which L4S is based is that the root
cause of queuing delay is in the congestion controllers of senders,
not in the queue itself. The L4S architecture is intended to enable
_all_ Internet applications to transition away from congestion
control algorithms that cause queuing delay, to a new class of
congestion controls that induce very little queuing, aided by
explicit congestion signaling from the network. This new class of
congestion control can provide low latency for capacity-seeking
flows, so applications can achieve both high bandwidth and low
latency.
The architecture primarily concerns incremental deployment. It
defines mechanisms that allow the new class of L4S congestion
controls to coexist with 'Classic' congestion controls in a shared
network. These mechanisms aim to ensure that the latency and
throughput performance using an L4S-compliant congestion controller
is usually much better (and never worse) than the performance would
have been using a 'Classic' congestion controller, and that competing
flows continuing to use 'Classic' controllers are typically not
impacted by the presence of L4S. These characteristics are important
to encourage adoption of L4S congestion control algorithms and L4S
compliant network elements.
The L4S architecture consists of three components: network support to
isolate L4S traffic from classic traffic; protocol features that
allow network elements to identify L4S traffic; and host support for
L4S congestion controls.
Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 30, 2021.
Copyright Notice

Copyright (c) 2020 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1. Introduction
2. L4S Architecture Overview
3. Terminology
4. L4S Architecture Components
5. Rationale
   5.1. Why These Primary Components?
   5.2. What L4S adds to Existing Approaches
6. Applicability
   6.1. Applications
   6.2. Use Cases
   6.3. Applicability with Specific Link Technologies
   6.4. Deployment Considerations
      6.4.1. Deployment Topology
      6.4.2. Deployment Sequences
      6.4.3. L4S Flow but Non-ECN Bottleneck
      6.4.4. L4S Flow but Classic ECN Bottleneck
      6.4.5. L4S AQM Deployment within Tunnels
7. IANA Considerations (to be removed by RFC Editor)
8. Security Considerations
   8.1. Traffic Rate (Non-)Policing
   8.2. 'Latency Friendliness'
   8.3. Interaction between Rate Policing and L4S
   8.4. ECN Integrity
   8.5. Privacy Considerations
9. Acknowledgements
10. Informative References
Appendix A. Standardization items
Authors' Addresses
1. Introduction

It is increasingly common for _all_ of a user's applications at any
one time to require low delay: interactive Web, Web services, voice,
conversational video, interactive video, interactive remote presence,
instant messaging, online gaming, remote desktop, cloud-based
applications and video-assisted remote control of machinery and
industrial processes. In the last decade or so, much has been done
to reduce propagation delay by placing caches or servers closer to
users. However, queuing remains a major, albeit intermittent,
component of latency. For instance spikes of hundreds of
milliseconds are common, even with state-of-the-art active queue
management (AQM). During a long-running flow, queuing is typically
configured to cause overall network delay to roughly double relative
to expected base (unloaded) path delay. Low loss is also important
because, for interactive applications, losses translate into even
longer retransmission delays.
It has been demonstrated that, once access network bit rates reach
levels now common in the developed world, increasing capacity offers
diminishing returns if latency (delay) is not addressed.
Differentiated services (Diffserv) offers Expedited Forwarding
(EF [RFC3246]) for some packets at the expense of others, but this is
not sufficient when all (or most) of a user's applications require
low latency.
Therefore, the goal is an Internet service with ultra-Low queueing
Latency, ultra-Low Loss and Scalable throughput (L4S). Ultra-low
queuing latency means less than 1 millisecond (ms) on average and
less than about 2 ms at the 99th percentile. L4S is potentially for
_all_ traffic - a service for all traffic needs none of the
configuration or management baggage (traffic policing, traffic
contracts) associated with favouring some traffic over others. This
document describes the L4S architecture for achieving these goals.
It must be said that queuing delay only degrades performance
infrequently [Hohlfeld14]. It only occurs when a large enough
capacity-seeking (e.g. TCP) flow is running alongside the user's
traffic in the bottleneck link, which is typically in the access
network. Or when the low latency application is itself a large
capacity-seeking or adaptive rate (e.g. interactive video) flow. At
these times, the performance improvement from L4S must be sufficient
that network operators will be motivated to deploy it.
Active Queue Management (AQM) is part of the solution to queuing
under load. AQM improves performance for all traffic, but there is a
limit to how much queuing delay can be reduced by solely changing the
network without addressing the root of the problem.
The root of the problem is the presence of standard TCP congestion
control (Reno [RFC5681]) or compatible variants (e.g. TCP
Cubic [RFC8312]). We shall use the term 'Classic' for these Reno-
friendly congestion controls. Classic congestion controls induce
relatively large saw-tooth-shaped excursions up the queue and down
again, which have been growing as flow rate scales [RFC3649]. So if
a network operator naively attempts to reduce queuing delay by
configuring an AQM to operate at a shallower queue, a Classic
congestion control will significantly underutilize the link at the
bottom of every saw-tooth.

It has been demonstrated that if the sending host replaces a Classic
congestion control with a 'Scalable' alternative, when a suitable AQM
is deployed in the network the performance under load of all the
above interactive applications can be significantly improved. For
instance, queuing delay under heavy load with the example DCTCP/DualQ
solution cited below on a DSL or Ethernet link is roughly 1 to 2
milliseconds at the 99th percentile without losing link
utilization [DualPI2Linux], [DCttH15] (for other link types, see
Section 6.3). This compares with 5 to 20 ms on _average_ with a
Classic congestion control and current state-of-the-art AQMs such as
FQ-CoDel [RFC8290], PIE [RFC8033] or DOCSIS PIE [RFC8034] and about
20-30 ms at the 99th percentile [DualPI2Linux].
It has also been demonstrated [DCttH15], [DualPI2Linux] that it is
possible to deploy such an L4S service alongside the existing best
efforts service so that all of a user's applications can shift to it
when their stack is updated. Access networks are typically designed
with one link as the bottleneck for each site (which might be a home,
small enterprise or mobile device), so deployment at each end of this
link should give nearly all the benefit in each direction. The L4S
approach also requires component mechanisms at the endpoints to
fulfill its goal. This document presents the L4S architecture, by
describing the different components and how they interact to provide
the scalable, low latency, low loss Internet service.
2. L4S Architecture Overview

There are three main components to the L4S architecture:
1) Network: L4S traffic needs to be isolated from the queuing
latency of Classic traffic. One queue per application flow (FQ)
is one way to achieve this, e.g. FQ-CoDel [RFC8290]. However,
just two queues is sufficient and does not require inspection of
transport layer headers in the network, which is not always
possible (see Section 5.2). With just two queues, it might seem
impossible to know how much capacity to schedule for each queue
without inspecting how many flows at any one time are using each.
And it would be undesirable to arbitrarily divide access network
capacity into two partitions. The Dual Queue Coupled AQM was
developed as a minimal complexity solution to this problem. It
acts like a 'semi-permeable' membrane that partitions latency but
not bandwidth. As such, the two queues are for transition from
Classic to L4S behaviour, not bandwidth prioritization. Section 4
gives a high level explanation of how FQ and DualQ solutions work,
and [I-D.ietf-tsvwg-aqm-dualq-coupled] gives a full explanation of
the DualQ Coupled AQM framework.
2) Protocol: A host needs to distinguish L4S and Classic packets
with an identifier so that the network can classify them into
their separate treatments. [I-D.ietf-tsvwg-ecn-l4s-id] considers
various alternative identifiers for L4S, and concludes that all
alternatives involve compromises, but the ECT(1) and CE codepoints
of the ECN field represent a workable solution.
3) Host: Scalable congestion controls already exist. They solve the
scaling problem with Reno congestion control that was explained in
[RFC3649]. The one used most widely (in controlled environments)
is Data Center TCP (DCTCP [RFC8257]), which has been implemented
and deployed in Windows Server Editions (since 2012), in Linux and
in FreeBSD. Although DCTCP as-is 'works' well over the public
Internet, most implementations lack certain safety features that
will be necessary once it is used outside controlled environments
like data centres (see Section 6.4.3 and Appendix A). Scalable
congestion control will also need to be implemented in protocols
other than TCP (QUIC, SCTP, RTP/RTCP, RMCAT, etc.). Indeed,
between the present document being drafted and published, the
following scalable congestion controls were implemented: TCP
Prague [PragueLinux], QUIC Prague, an L4S variant of the RMCAT
SCReAM controller [RFC8298] and the L4S ECN part of
BBRv2 [I-D.cardwell-iccrg-bbr-congestion-control] intended for TCP
and QUIC transports.
3. Terminology
Classic Congestion Control: A congestion control behaviour that can
co-exist with standard TCP Reno [RFC5681] without causing
significantly negative impact on its flow rate [RFC5033]. With
Classic congestion controls, as flow rate scales, the number of
round trips between congestion signals (losses or ECN marks) rises
with the flow rate. So it takes longer and longer to recover
after each congestion event. Therefore control of queuing and
utilization becomes very slack, and the slightest disturbance
prevents a high rate from being attained [RFC3649].
For instance, with 1500 byte packets and an end-to-end round trip
time (RTT) of 36 ms, over the years, as Reno flow rate scales from
2 to 100 Mb/s the number of round trips taken to recover from a
congestion event rises proportionately, from 4 to 200 (the sketch at
the end of this section reproduces this arithmetic).
Cubic [RFC8312] was developed to be less unscalable, but it is
approaching its scaling limit; with the same RTT of 36 ms, at
100 Mb/s it takes about 106 round trips to recover, and at 800 Mb/s
its recovery time triples to over 340 round trips, or still more
than 12 seconds (Reno would take 57 seconds).
Scalable Congestion Control: A congestion control where the average
time from one congestion signal to the next (the recovery time)
remains invariant as the flow rate scales, all other factors being
equal. This maintains the same degree of control over queueing
and utilization whatever the flow rate, as well as ensuring that
high throughput is more robust to disturbances (e.g. from new
flows starting). For instance, DCTCP averages 2 congestion
signals per round-trip whatever the flow rate. See Section 4.3 of
[I-D.ietf-tsvwg-ecn-l4s-id] for more explanation.
Classic service: The Classic service is intended for all the
congestion control behaviours that co-exist with Reno [RFC5681]
(e.g. Reno itself, Cubic [RFC8312],
Compound [I-D.sridharan-tcpm-ctcp], TFRC [RFC5348]). The term
'Classic queue' means a queue providing the Classic service.
Low-Latency, Low-Loss Scalable throughput (L4S) service: The 'L4S'
service is intended for traffic from scalable congestion control
algorithms, such as Data Center TCP [RFC8257]. The L4S service is
for more general traffic than just DCTCP--it allows the set of
congestion controls with similar scaling properties to DCTCP to
evolve (e.g. Relentless TCP [Mathis09], TCP Prague [PragueLinux]
and the L4S variant of SCREAM for real-time media [RFC8298]). The
term 'L4S queue' means a queue providing the L4S service.

The terms Classic or L4S can also qualify other nouns, such as
'queue', 'codepoint', 'identifier', 'classification', 'packet',
'flow'. For example: an L4S packet means a packet with an L4S
identifier sent from an L4S congestion control.

Both Classic and L4S services can cope with a proportion of
unresponsive or less-responsive traffic as well, as long as it
does not build a queue (e.g. DNS, VoIP, game sync datagrams, etc).
Reno-friendly: The subset of Classic traffic that excludes
unresponsive traffic and excludes experimental congestion controls
intended to coexist with Reno but without always being strictly
friendly to it (as allowed by [RFC5033]). Reno-friendly is used
in place of 'TCP-friendly', given that friendliness is a property
of the congestion controller (Reno), not the wire protocol (TCP),
which is used with many different congestion control behaviours.
Classic ECN: The original Explicit Congestion Notification (ECN)
protocol [RFC3168], which requires ECN signals to be treated as
equivalent to drops, both when generated in the network and when
responded to by the sender.

The names used for the four codepoints of the 2-bit IP-ECN field
are as defined in [RFC3168]: Not ECT, ECT(0), ECT(1) and CE, where
ECT stands for ECN-Capable Transport and CE stands for Congestion
Experienced.
Site: A home, mobile device, small enterprise or campus, where the
network bottleneck is typically the access link to the site. Not
all network arrangements fit this model but it is a useful, widely
applicable generalization.
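
As a rough illustration of the Reno figures quoted in the Classic
Congestion Control entry above, the following sketch (Python, not
part of the architecture) reproduces the recovery-time arithmetic.
It assumes the quoted flow rates are averages over a sawtooth cycle
and that the window climbs back by one segment per round trip; the
Cubic figures come from a more detailed model and are not reproduced
here.

   # Rough arithmetic behind the Reno recovery-time example above.
   # Assumed model (not stated in this document): the quoted flow rates
   # are averages over a sawtooth cycle, and the window climbs back by
   # one segment per round trip during congestion avoidance.

   MSS = 1500 * 8        # segment size in bits
   RTT = 0.036           # end-to-end round trip time in seconds

   def reno_recovery_rounds(avg_rate_bps):
       """Round trips for Reno to regain its pre-loss window."""
       w_avg = avg_rate_bps * RTT / MSS   # average window in segments
       w_max = w_avg * 4 / 3              # sawtooth peak (average is 3/4 of peak)
       return w_max / 2                   # halved window, +1 segment per RTT

   for rate in (2e6, 100e6, 800e6):
       rounds = reno_recovery_rounds(rate)
       print(f"{rate / 1e6:5.0f} Mb/s: ~{rounds:.0f} round trips"
             f" (~{rounds * RTT:.0f} s) to recover")

   # Prints roughly 4 round trips at 2 Mb/s, 200 at 100 Mb/s and about
   # 1600 (just under a minute) at 800 Mb/s: under this model, recovery
   # time grows in proportion to flow rate.
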
4. L4S Architecture Components

The L4S architecture is composed of the following elements.

Protocols: The L4S architecture encompasses two identifier changes
(an unassignment and an assignment) and optional further identifiers:
a. An essential aspect of a scalable congestion control is the use
of explicit congestion signals rather than losses, because the
signals need to be sent frequently and immediately. In contrast,
'Classic' ECN [RFC3168] requires an ECN signal to be treated as
equivalent to drop, both when it is generated in the network and
when it is responded to by hosts. L4S needs networks and hosts
to support a different meaning for ECN:

* much more frequent signals--too often to require an equivalent
degree of drop from non-ECN flows, which would be excessive;

* immediately tracking every fluctuation of the queue--too soon
to warrant dropping packets from non-ECN flows.
So the standards track [RFC3168] has had to be updated to allow
L4S packets to depart from the 'same as drop' constraint.
[RFC8311] is a standards track update to relax specific
requirements in RFC 3168 (and certain other standards track
RFCs), which clears the way for the experimental changes proposed
for L4S. [RFC8311] also reclassifies the original experimental
assignment of the ECT(1) codepoint as an ECN nonce [RFC3540] as
historic.
[...]
detrimental effect, which even then would only involve a
vanishingly small likelihood of a spurious retransmission.
c. A network operator might wish to include certain unresponsive,
non-L4S traffic in the L4S queue if it is deemed to be smoothly
enough paced and low enough rate not to build a queue. For
instance, VoIP, low rate datagrams to sync online games,
relatively low rate application-limited traffic, DNS, LDAP, etc.
This traffic would need to be tagged with specific identifiers,
e.g. a low latency Diffserv Codepoint such as Expedited
Forwarding (EF [RFC3246]), Non-Queue-Building
(NQB [I-D.white-tsvwg-nqb]), or operator-specific identifiers.
The sketch after this list illustrates how such identifiers could
be combined in a classifier.
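
As an illustration only, the following sketch shows how a network
node might combine these identifiers to classify packets into two
queues. The ECN-based test follows [I-D.ietf-tsvwg-ecn-l4s-id]
(ECT(1) or CE selects the L4S treatment); the set of Diffserv
codepoints admitted to the low latency queue is an assumed example of
operator policy, not part of any specification.

   # Illustrative classifier for a two-queue (DualQ) node. The ECN test
   # follows the L4S identifier (ECT(1), plus CE, which may have started
   # life as ECT(1)); admitting EF/NQB traffic to the low latency queue
   # is an example operator policy, and the DSCP values are assumptions.

   ECN_NOT_ECT, ECN_ECT1, ECN_ECT0, ECN_CE = 0b00, 0b01, 0b10, 0b11

   LOW_LATENCY_DSCPS = {46, 45}    # EF and (provisionally) NQB

   def classify(ecn, dscp):
       """Return 'L4S' or 'Classic' for one packet's IP header fields."""
       if ecn in (ECN_ECT1, ECN_CE):
           return "L4S"                 # the L4S identifier
       if dscp in LOW_LATENCY_DSCPS:
           return "L4S"                 # operator-specific exceptions
       return "Classic"                 # Not-ECT and ECT(0) traffic

   assert classify(ECN_ECT1, 0) == "L4S"
   assert classify(ECN_ECT0, 0) == "Classic"
   assert classify(ECN_NOT_ECT, 46) == "L4S"
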
Network components: The L4S architecture aims to provide low latency
without the _need_ for per-flow operations in network components.
Nonetheless, the architecture does not preclude per-flow solutions -
it encompasses the following combinations:
a. The Dual Queue Coupled AQM (illustrated in Figure 1) achieves the
'semi-permeable' membrane property mentioned earlier as follows.
The obvious part is that using two separate queues isolates the
queuing delay of one from the other. The less obvious part is
how the two queues act as if they are a single pool of bandwidth
without the scheduler needing to decide between them. This is
achieved by having the Classic AQM provide a congestion signal to
both queues in a manner that ensures a consistent response from
the two types of congestion control. In other words, the Classic
AQM generates a drop/mark probability based on congestion in the
Classic queue, uses this probability to drop/mark packets in that
queue, and also uses this probability to affect the marking
probability in the L4S queue (a pseudocode sketch of one such
coupling appears after this list). This coupling of the congestion
signaling between the two queues makes the L4S flows slow down to
leave the right amount of capacity for the Classic traffic (as
they would if they were the same type of traffic sharing the same
queue). Then the scheduler can serve the L4S queue with priority,
because the L4S traffic isn't offering up enough traffic to use
all the priority that it is given. Therefore, on short time-scales
(sub-round-trip) the prioritization of the L4S queue protects its
low latency by allowing bursts to dissipate quickly; but on longer
time-scales (round-trip and longer) the Classic queue creates an
equal and opposite pressure against the L4S traffic to ensure that
neither has priority when it comes to bandwidth. The tension
between prioritizing L4S and coupling marking from Classic results
in per-flow fairness. To protect against unresponsive traffic in
the L4S queue taking advantage of the prioritization and starving
the Classic queue, it is advisable not to use strict priority, but
instead to use a weighted scheduler (see Appendix A of
[I-D.ietf-tsvwg-aqm-dualq-coupled]).
When there is no Classic traffic, the L4S queue's AQM comes into
play, and it sets an appropriate marking rate to maintain
ultra-low queuing delay.
The Dual Queue Coupled AQM has been specified as generically as
possible [I-D.ietf-tsvwg-aqm-dualq-coupled] without specifying
the particular AQMs to use in the two queues so that designers
are free to implement diverse ideas. Informational appendices in
that draft give pseudocode examples of two different specific AQM
approaches: one called DualPI2 (pronounced Dual PI
Squared) [DualPI2Linux] that uses the PI2 variant of PIE, and a
zero-config variant of RED called Curvy RED. A DualQ Coupled AQM
based on PIE has also been specified and implemented for Low
Latency DOCSIS [DOCSIS3.1].
                   (2)                     (1)
            .-------^------. .--------------^-------------------.
 ,-(3)-----.                                ______
 ; ________  :         L4S   --------.     |      |
 :|Scalable| :               _\     ||___\_| mark |
 :| sender | :  __________ / /      ||   / |______|\   _________
 :|________|\; |          |/ --------'         ^    \1|condit'nl|
 `---------'\_ |  IP-ECN  |           Coupling :     \|priority |_\
  ________  /  |Classifier|                    :     /|scheduler| /
 |Classic |/   |__________|\ --------.      ___:__  / |_________|
 | sender |                \_\ ||  |||___\_| mark/|/
 |________|                 /  ||  |||   / | drop |
                    Classic  --------'     |______|

Figure 1: Components of an L4S Solution: 1) Isolation in separate
network queues; 2) Packet Identification Protocol; and 3) Scalable
Sending Host
b. A scheduler with per-flow queues can be used for L4S. It is
simple to modify an existing design such as FQ-CoDel or FQ-PIE.
For instance within each queue of an FQ-CoDel system, as well as
a CoDel AQM, immediate (unsmoothed) shallow threshold ECN marking
has been added (see Sec.5.2.7 of [RFC8290]). Then the Classic
AQM such as CoDel or PIE is applied to non-ECN or ECT(0) packets,
while the shallow threshold is applied to ECT(1) packets, to give
sub-millisecond average queue delay.
c. It would also be possible to use dual queues for isolation, but
with per-flow marking to control flow-rates (instead of the
coupled per-queue marking of the Dual Queue Coupled AQM). One of
the two queues would be for isolating L4S packets, which would be
classified by the ECN codepoint. Flow rates could be controlled
by flow-specific marking. The policy goal of the marking could
be to differentiate flow rates (e.g. [Nadas20], which requires
additional signalling of a per-flow 'value'), or to equalize
flow-rates (perhaps in a similar way to Approx Fair CoDel [AFCD],
[I-D.morton-tsvwg-codel-approx-fair], but with two queues not
one).
Note that whenever the term 'DualQ' is used loosely without
saying whether marking is per-queue or per-flow, it means a dual
queue AQM with per-queue marking.
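
To make the per-queue coupling in item (a) above concrete, the
following sketch shows the per-packet marking decisions of a
DualPI2-style Dual Queue Coupled AQM. It assumes a base probability
p' produced by a PI controller acting on Classic queue delay; the
squaring for Classic traffic, the coupling factor k and the native
shallow-threshold marking for the L4S queue follow the approach of
[I-D.ietf-tsvwg-aqm-dualq-coupled], but the constants and structure
here are illustrative, not a specification.

   import random

   # Illustrative DualPI2-style coupling (after the DualQ Coupled AQM
   # draft). Constants are defaults suggested there; everything else is
   # a sketch, not the specification.

   K = 2.0                 # coupling factor between Classic and L4S
   L4S_THRESH = 0.001      # native L4S marking threshold (~1 ms sojourn)

   def classic_decision(p_base):
       """Classic queue: drop (or Classic-ECN mark) with probability
       p_base squared. Squaring counters the square-root relationship
       between Reno/Cubic flow rate and signal probability."""
       return random.random() < p_base ** 2

   def l4s_decision(p_base, l4s_sojourn_s):
       """L4S queue: ECN-mark using the larger of the coupled signal
       k * p_base and a native shallow-threshold marker, so L4S flows
       leave room for Classic traffic yet keep their own queue short."""
       p_coupled = min(K * p_base, 1.0)
       return (l4s_sojourn_s > L4S_THRESH) or (random.random() < p_coupled)

   # p_base itself would be updated periodically by a PI controller on
   # the Classic queue delay, e.g.:
   #   p_base += alpha * (delay - target) + beta * (delay - prev_delay)

The conditional priority scheduler and the weighted override discussed
in item (a) are not shown; they act on which queue to serve, not on
the marking itself.
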
Host mechanisms: The L4S architecture includes a number of mechanisms
in the end host that we enumerate next:
a. Data Center TCP is the most widely used example of a scalable
congestion control. It has been documented as an informational
record of the protocol currently in use [RFC8257]. It has been
necessary to define a number of safety features for a variant
usable on the public Internet. A draft list of these, known as
the Prague L4S requirements, has been drawn up (see Appendix A of
[I-D.ietf-tsvwg-ecn-l4s-id]). The list also includes some
optional performance improvements. A sketch of the core scalable
response that these variants share appears after this list.
b. Transport protocols other than TCP use various congestion
controls designed to be friendly with Reno. Before they can use
the L4S service, it will be necessary to implement scalable
variants of each of these congestion control behaviours. The
following standards track RFCs currently define these protocols:
ECN in TCP [RFC3168], in SCTP [RFC4960], in RTP [RFC6679], and in
DCCP [RFC4340]. Not all are in widespread use, but those that
are will eventually need to be updated to allow a different
congestion response, which they will have to indicate by using
the ECT(1) codepoint. Scalable variants are under consideration
for some new transport protocols that are themselves under
development, e.g. QUIC [I-D.ietf-quic-transport] and certain
real-time media congestion avoidance techniques (RMCAT)
protocols.
c. ECN feedback is sufficient for L4S in some transport protocols
(RTCP, DCCP) but not others:

* For the case of TCP, the feedback protocol for ECN embeds the
assumption from Classic ECN [RFC3168] that an ECN mark is
equivalent to a drop, making it unusable for a scalable TCP.
Therefore, the implementation of TCP receivers will have to be
upgraded [RFC7560]. Work to standardize and implement more
accurate ECN feedback for TCP (AccECN) is in
progress [I-D.ietf-tcpm-accurate-ecn], [PragueLinux].

* ECN feedback is only roughly sketched in an appendix of the
SCTP specification. A fuller specification has been proposed
[I-D.stewart-tsvwg-sctpecn], which would need to be
implemented and deployed before SCTP could support L4S.
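
For background to item (a) above, the following sketch shows the
core of a DCTCP-style scalable response as documented in [RFC8257],
on which the Prague requirements build. The class structure and the
once-per-RTT update loop are illustrative; only the moving average of
the marked fraction and the proportional window reduction are taken
from RFC 8257.

   # Core of a DCTCP-style scalable congestion response [RFC8257].
   # alpha is a moving average of the fraction of ECN-marked bytes,
   # updated once per RTT; the window is cut in proportion to alpha
   # rather than by a fixed half as in Reno. Structure is illustrative.

   G = 1.0 / 16                  # EWMA gain recommended in RFC 8257

   class ScalableSender:
       def __init__(self, cwnd):
           self.cwnd = float(cwnd)   # congestion window in segments
           self.alpha = 1.0          # start conservatively

       def on_round_trip(self, bytes_acked, bytes_marked):
           """Apply one round trip's worth of ECN feedback."""
           frac = bytes_marked / max(bytes_acked, 1)
           self.alpha += G * (frac - self.alpha)    # smoothing done at the
           if bytes_marked:                         # sender, over its own RTT
               self.cwnd *= 1 - self.alpha / 2      # proportional reduction
           else:
               self.cwnd += 1                       # additive increase
           self.cwnd = max(self.cwnd, 2.0)
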
5. Rationale

5.1. Why These Primary Components?
Explicit congestion signalling (protocol): Explicit congestion
signalling is a key part of the L4S approach. In contrast, use of
drop as a congestion signal creates a tension because drop is both
an impairment (less would be better) and a useful signal (more
would be better):
* Explicit congestion signals can be used many times per round
trip, to keep tight control, without any impairment. Under
heavy load, even more explicit signals can be applied so the
queue can be kept short whatever the load. Whereas
state-of-the-art AQMs have to introduce very high packet drop at
high load to keep the queue short. Further, when using ECN, the
congestion control's sawtooth reduction can be smaller and
therefore return to the operating point more often, without
worrying that this causes more signals (one at the top of each
smaller sawtooth). The consequent smaller amplitude sawteeth
fit between a very shallow marking threshold and an empty
queue, so queue delay variation can be very low, without risk
of under-utilization.
* Explicit congestion signals can be sent immediately to track
fluctuations of the queue. L4S shifts smoothing from the
network (which doesn't know the round trip times of all the
flows) to the host (which knows its own round trip time).
Previously, the network had to smooth to keep a worst-case
round trip stable, which delayed congestion signals by
100-200 ms.
All the above makes it clear that explicit congestion signalling
is only advantageous for latency if it does not have to be
considered 'equivalent to' drop (as was required with Classic
ECN [RFC3168]). Therefore, in an L4S AQM, the L4S queue uses a
new L4S variant of ECN that is not equivalent to
drop [I-D.ietf-tsvwg-ecn-l4s-id], while the Classic queue uses
either classic ECN [RFC3168] or drop, which are equivalent to each
other.
Before Classic ECN was standardized, there were various proposals
to give an ECN mark a different meaning from drop. However, there
was no particular reason to agree on any one of the alternative
meanings, so 'equivalent to drop' was the only compromise that
could be reached. RFC 3168 contains a statement that:
"An environment where all end nodes were ECN-Capable could "An environment where all end nodes were ECN-Capable could
allow new criteria to be developed for setting the CE allow new criteria to be developed for setting the CE
codepoint, and new congestion control mechanisms for end-node codepoint, and new congestion control mechanisms for end-node
reaction to CE packets. However, this is a research issue, and reaction to CE packets. However, this is a research issue, and
as such is not addressed in this document." as such is not addressed in this document."
Latency isolation (network): L4S congestion controls keep queue
delay low whereas Classic congestion controls need a queue of the
order of the RTT to avoid under-utilization. One queue cannot
have two lengths, therefore L4S traffic needs to be isolated in a
separate queue (e.g. DualQ) or queues (e.g. FQ).
Coupled congestion notification: Coupling the congestion
notification between two queues as in the DualQ Coupled AQM is not
necessarily essential, but it is a simple way to allow senders to
determine their rate, packet by packet, rather than be overridden
by a network scheduler. An alternative is for a network scheduler
to control the rate of each application flow (see discussion in
Section 5.2).
L4S packet identifier (protocol): Once there are at least two
treatments in the network, hosts need an identifier at the IP
layer to distinguish which treatment they intend to use.
Scalable congestion notification: A scalable congestion control in
the host keeps the signalling frequency from the network high so
that rate variations can be small when signalling is stable, and
rate can track variations in available capacity as rapidly as
possible otherwise.
   Low loss:  Latency is not the only concern of L4S.  The 'Low Loss'
      part of the name denotes that L4S generally achieves zero
      congestion loss due to its use of ECN.  Otherwise, loss would
      itself cause delay, particularly for short flows, due to
      retransmission delay [RFC2884].
   Scalable throughput:  The "Scalable throughput" part of the name
      denotes that the per-flow throughput of scalable congestion
      controls should scale indefinitely, avoiding the imminent scaling
      problems with Reno-friendly congestion control algorithms
      [RFC3649].  It was known when TCP congestion avoidance was first
      developed that it would not scale to high bandwidth-delay products
      (see footnote 6 in [TCP-CA]).  Today, regular broadband bit-rates
      over WAN distances are already beyond the scaling range of Classic
      Reno congestion control.  So 'less unscalable' Cubic [RFC8312] and
      Compound [I-D.sridharan-tcpm-ctcp] variants of TCP have been
      successfully deployed.  However, these are now approaching their
      scaling limits.  As the examples in Section 3 demonstrate, as flow
      rate scales, Classic congestion controls like Reno or Cubic induce
      a congestion signal more and more infrequently (hundreds of round
      trips at today's flow rates and growing), which makes dynamic
      control very sloppy.  In contrast, on average a scalable
      congestion control like DCTCP or TCP Prague induces 2 congestion
      signals per round trip, which remains invariant for any flow rate,
      keeping dynamic control very tight.

      Although work on scaling congestion controls tends to start with
      TCP as the transport, the above is not intended to exclude other
      transports (e.g. SCTP, QUIC) or less elastic algorithms
      (e.g. RMCAT), which all tend to adopt the same or similar
      developments.
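
      To put rough numbers on this contrast, the short Python sketch
      below estimates the spacing between congestion signals.  It is
      purely illustrative and not taken from the L4S specifications: the
      Reno figure follows the usual AIMD analysis, the Cubic figure uses
      the steady-state period implied by the RFC 8312 constants (and
      ignores Cubic's Reno-friendly mode at lower rates), and the
      example bit-rates and RTT are arbitrary.

         # Rough, illustrative calculation: average spacing between
         # congestion signals for Classic congestion controls versus a
         # scalable control that gets ~2 ECN marks per round trip.

         SEG_BITS = 1500 * 8      # segment size in bits
         C_CUBIC = 0.4            # Cubic constants from RFC 8312
         BETA_CUBIC = 0.7

         def signal_spacing(rate_bps, rtt_s):
             w = rate_bps * rtt_s / SEG_BITS          # window in segments
             reno_s = (w / 2) * rtt_s                 # one loss per W/2 RTTs
             cubic_s = (w * (1 - BETA_CUBIC) / C_CUBIC) ** (1.0 / 3)
             scalable_s = rtt_s / 2                   # ~2 marks per RTT
             return reno_s, cubic_s, scalable_s

         for mbps in (800, 8000):
             reno, cubic, scal = signal_spacing(mbps * 1e6, 0.02)
             print(f"{mbps} Mb/s, 20 ms RTT: Reno {reno:.0f} s,"
                   f" Cubic {cubic:.0f} s,"
                   f" scalable {scal * 1000:.0f} ms between signals")

      For instance, at 800 Mb/s with a 20 ms RTT this estimate gives a
      Cubic congestion signal roughly every 10 seconds (hundreds of
      round trips), whereas the scalable control is signalled every
      10 ms regardless of flow rate.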
5.2. What L4S adds to Existing Approaches
   All the following approaches address some part of the same problem
   space as L4S.  In each case, it is shown that L4S complements them or
   improves on them, rather than being a mutually exclusive alternative:

   Diffserv:  Diffserv addresses the problem of bandwidth apportionment
      for important traffic as well as queuing latency for delay-
      sensitive traffic.  Of these, L4S solely addresses the problem of
      queuing latency.  Diffserv will still be necessary where important
      traffic requires priority (e.g. for commercial reasons, or for
      protection of critical infrastructure traffic) - see
      [I-D.briscoe-tsvwg-l4s-diffserv].  Nonetheless, the L4S approach
      can provide low latency for _all_ traffic within each Diffserv
      class (including the case where there is only the one default
      Diffserv class).

      Also, Diffserv only works for a small subset of the traffic on a
      link.  As already explained, it is not applicable when all the
      applications in use at one time at a single site (home, small
      business or mobile device) require low latency.  In contrast,
      because L4S is for all traffic, it needs none of the management
      baggage (traffic policing, traffic contracts) associated with
      favouring some packets over others.  This baggage has probably
      held Diffserv back from widespread end-to-end deployment.

      In particular, because networks tend not to trust end systems to
      identify which packets should be favoured over others, where
      networks assign packets to Diffserv classes they often use packet
      inspection of application flow identifiers or deeper inspection of
      application signatures.  Thus, nowadays, Diffserv doesn't always
      sit well with encryption of the layers above IP.  So users have to
      choose between privacy and QoS.

      As with Diffserv, the L4S identifier is in the IP header.  But, in
      contrast to Diffserv, the L4S identifier does not convey a want or
      a need for a certain level of quality.  Rather, it promises a
      certain behaviour (scalable congestion response), which networks
      can objectively verify if they need to.  This is because low delay
      depends on collective host behaviour, whereas bandwidth priority
      depends on network behaviour.
   State-of-the-art AQMs:  AQMs such as PIE and FQ-CoDel give a
      significant reduction in queuing delay relative to no AQM at all.
      L4S is intended to complement these AQMs, and should not distract
      from the need to deploy them as widely as possible.  Nonetheless,
      AQMs alone cannot reduce queuing delay too far without
      significantly reducing link utilization, because the root cause of
      the problem is on the host - where Classic congestion controls use
      large saw-toothing rate variations.  The L4S approach resolves
      this tension by ensuring hosts can minimize the size of their
      sawteeth without appearing so aggressive to Classic flows that
      they starve them.
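
      As a rough illustration of how the network side of this
      coexistence can work, the sketch below shows the ECN-based
      classification and the probability coupling used by the DualQ
      Coupled AQM [I-D.ietf-tsvwg-aqm-dualq-coupled].  It is not the
      normative pseudocode of that draft: the coupling factor k = 2 and
      the 1 ms step threshold are merely typical values, and the base
      probability p' is assumed to come from a PI-style Classic AQM.
      Squaring p' for the Classic queue while multiplying it by k for
      the L4S queue roughly balances the square-root (Reno-friendly)
      and linear (scalable) rate equations, so neither class starves
      the other.

         import random

         # Illustrative sketch of DualQ Coupled AQM signalling (not the
         # normative pseudocode of the DualQ Coupled AQM draft).
         K = 2.0                        # coupling factor (typical value)
         L4S_STEP_THRESHOLD_S = 0.001   # e.g. mark L4S above ~1 ms sojourn

         def is_l4s(ecn_bits):
             # ECT(1) (0b01) or CE (0b11) identify L4S packets
             # [I-D.ietf-tsvwg-ecn-l4s-id]; others go to the Classic queue.
             return ecn_bits in (0b01, 0b11)

         def signal(ecn_bits, p_prime, l4s_sojourn_s):
             """Return 'mark', 'drop' or None for one packet.

             p_prime is the base probability from the Classic AQM (e.g. PI).
             """
             if is_l4s(ecn_bits):
                 p_coupled = min(K * p_prime, 1.0)    # coupled L4S marking
                 p_native = 1.0 if l4s_sojourn_s > L4S_STEP_THRESHOLD_S else 0.0
                 if random.random() < max(p_native, p_coupled):
                     return 'mark'
                 return None
             p_classic = p_prime ** 2                  # squared for Classic
             if random.random() < p_classic:
                 return 'mark' if ecn_bits == 0b10 else 'drop'  # ECT(0) vs Not-ECT
             return None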
   Per-flow queuing or marking:  Similarly, per-flow approaches such as
      FQ-CoDel or Approx Fair CoDel [AFCD] are not incompatible with the
      L4S approach.  However, per-flow queuing alone is not enough - it
      only isolates the queuing of one flow from others; not from
      itself.  Per-flow implementations still need to have support for
      scalable congestion control added, which has already been done in
      FQ-CoDel (see Sec.5.2.7 of [RFC8290]).  Without this simple
      modification, per-flow AQMs like FQ-CoDel would still not be able
      to support applications that need both ultra-low delay and high
      bandwidth, e.g. video-based control of remote procedures, or
      interactive cloud-based video (see Note 1 below).

      Although per-flow techniques are not incompatible with L4S, it is
      important to have the DualQ alternative.  This is because handling
      end-to-end (layer 4) flows in the network (layer 3 or 2) precludes
      some important end-to-end functions.  For instance:

      A.  Per-flow forms of L4S like FQ-CoDel are incompatible with full
          end-to-end encryption of transport layer identifiers for
          privacy and confidentiality (e.g. IPSec or encrypted VPN
          tunnels), because they require packet inspection to access the
          end-to-end transport flow identifiers.

          In contrast, the DualQ form of L4S requires no deeper
          inspection than the IP layer.  So, as long as operators take
          the DualQ approach, their users can have both ultra-low
          queuing delay and full end-to-end encryption [RFC8404].

      B.  With per-flow forms of L4S, the network takes over control of
          the relative rates of each application flow.  Some see it as
          an advantage that the network will prevent some flows running
          faster than others.  Others consider it an inherent part of
          the Internet's appeal that applications can control their rate
          while taking account of the needs of others via congestion
          signals.  They maintain that this has allowed applications
          with interesting rate behaviours to evolve, for instance,
          variable bit-rate video that varies around an equal share
          rather than being forced to remain equal at every instant, or
          scavenger services that use less than an equal share of
          capacity [LEDBAT_AQM].

          The L4S architecture does not require the IETF to commit to
          one approach over the other, because it supports both, so that
          the market can decide.  Nonetheless, in the spirit of 'Do one
          thing and do it well' [McIlroy78], the DualQ option provides
          low delay without prejudging the issue of flow-rate control.
          Then, flow rate policing can be added separately if desired.
          This allows application control up to a point, but the network
          can still choose to set the point at which it intervenes to
          prevent one flow completely starving another.

      Note:

      1.  It might seem that self-inflicted queuing delay within a per-
          flow queue should not be counted, because if the delay wasn't
          in the network it would just shift to the sender.  However,
          modern adaptive applications, e.g. HTTP/2 [RFC7540] or some
          interactive media applications (see Section 6.1), can keep low
          latency objects at the front of their local send queue by
          shuffling priorities of other objects dependent on the
          progress of other transfers.  They cannot shuffle objects once
          they have released them into the network.
   Alternative Back-off ECN (ABE):  Here again, L4S is not an
      alternative to ABE but a complement that introduces much lower
      queuing delay.  ABE [RFC8511] alters the host behaviour in
      response to ECN marking to utilize a link better and give ECN
      flows faster throughput.  It uses ECT(0) and assumes the network
      still treats ECN and drop the same.  Therefore ABE exploits any
      lower queuing delay that AQMs can provide.  But as explained
      above, AQMs still cannot reduce queuing delay too far without
      losing link utilization (to allow for other, non-ABE, flows).

   BBR:  Bottleneck Bandwidth and Round-trip propagation time
      (BBR [I-D.cardwell-iccrg-bbr-congestion-control]) controls queuing
      delay end-to-end without needing any special logic in the network,
      such as an AQM.  So it works pretty-much on any path (although it
      has not been without problems, particularly capacity sharing in
      BBRv1).  BBR keeps queuing delay reasonably low, but perhaps not
      quite as low as with state-of-the-art AQMs such as PIE or FQ-
      CoDel, and certainly nowhere near as low as with L4S.  Queuing
      delay is also not consistently low, due to BBR's regular bandwidth
      probing spikes and its aggressive flow start-up phase.

      L4S complements BBR.  Indeed BBRv2 uses L4S ECN and a scalable L4S
      congestion control behaviour in response to any ECN signalling
      from the path.  The L4S ECN signal complements the delay based
      congestion control aspects of BBR with an explicit indication that
      hosts can use, both to converge on a fair rate and to keep below a
      shallow queue target set by the network.  Without L4S ECN, both
      these aspects need to be assumed or estimated.
6.  Applicability

6.1.  Applications

   A transport layer that solves the current latency issues will provide
   new service, product and application opportunities.

   With the L4S approach, the following existing applications also
   experience significantly better quality of experience under load:

   o  Gaming, including cloud based gaming;

   o  VoIP;

   o  Video conferencing;

   o  Web browsing;
   [skipping unchanged text to page 18, line 41]
   are not credible at all without very low queuing delay.  No amount of
   extra access bandwidth or local processing can make up for lost time.

6.2.  Use Cases

   The following use-cases for L4S are being considered by various
   interested parties:
   o  Where the bottleneck is one of various types of access network:
      e.g. DSL, Passive Optical Networks (PON), DOCSIS cable, mobile,
      satellite (see Section 6.3 for some technology-specific details)
   o  Private networks of heterogeneous data centres, where there is no
      single administrator that can arrange for all the simultaneous
      changes to senders, receivers and network needed to deploy DCTCP:

      *  a set of private data centres interconnected over a wide area
         with separate administrations, but within the same company

      *  a set of data centres operated by separate companies
         interconnected by a community of interest network (e.g. for the
   [skipping unchanged text to page 19, line 21]
   o  Different types of transport (or application) congestion control:

      *  elastic (TCP/SCTP);

      *  real-time (RTP, RMCAT);

      *  query (DNS/LDAP).

   o  Where low delay quality of service is required, but without
      inspecting or intervening above the IP layer
      [I-D.smith-encrypted-traffic-management]:

      *  mobile and other networks have tended to inspect higher layers
         in order to guess application QoS requirements.  However, with
         growing demand for support of privacy and encryption, L4S
         offers an alternative.  There is no need to select which
         traffic to favour for queuing, when L4S gives favourable
         queuing to all traffic.

   o  If queuing delay is minimized, applications with a fixed delay
      budget can communicate over longer distances, or via a longer
      chain of service functions [RFC7665] or onion routers.
6.3.  Applicability with Specific Link Technologies

   Certain link technologies aggregate data from multiple packets into
   bursts, and buffer incoming packets while building each burst.  WiFi,
   PON and cable all involve such packet aggregation, whereas fixed
   Ethernet and DSL do not.  No sender, whether L4S or not, can do
   anything to reduce the buffering needed for packet aggregation.  So
   an AQM should not count this buffering as part of the queue that it
   controls, given no amount of congestion signals will reduce it.

   Certain link technologies also add buffering for other reasons,
   specifically:

   o  Radio links (cellular, WiFi, satellite) that are distant from the
      source are particularly challenging.  The radio link capacity can
      vary rapidly by orders of magnitude, so it is considered desirable
      to hold a standing queue that can utilize sudden increases of
      capacity;

   o  Cellular networks are further complicated by a perceived need to
      buffer in order to make hand-overs imperceptible.

   L4S cannot remove the need for all these different forms of
   buffering.  However, by removing 'the longest pole in the tent'
   (buffering for the large sawteeth of Classic congestion controls),
   L4S exposes all these 'shorter poles' to greater scrutiny.

   Until now, the buffering needed for these additional reasons tended
   to be over-specified - with the excuse that none were 'the longest
   pole in the tent'.  But having removed the 'longest pole', it becomes
   worthwhile to minimize them, for instance reducing packet aggregation
   burst sizes and MAC scheduling intervals.

6.4.  Deployment Considerations

   L4S AQMs, whether DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled] or FQ,
   e.g. [RFC8290], are, in themselves, an incremental deployment
   mechanism for L4S - so that L4S traffic can coexist with existing
   Classic (Reno-friendly) traffic.  Section 6.4.1 explains why only
   deploying an L4S AQM in one node at each end of the access link will
   realize nearly all the benefit of L4S.

   L4S involves both end systems and the network, so Section 6.4.2
   suggests some typical sequences to deploy each part, and why there
   will be an immediate and significant benefit after deploying just one
   part.

   Section 6.4.3 and Section 6.4.4 describe the converse incremental
   deployment case where there is no L4S AQM at the network bottleneck,
   so any L4S flow traversing this bottleneck has to take care in case
   it is competing with Classic traffic.
6.4.1.  Deployment Topology

   L4S AQMs will not have to be deployed throughout the Internet before
   L4S will work for anyone.  Operators of public Internet access
   networks typically design their networks so that the bottleneck will
   nearly always occur at one known (logical) link.  This confines the
   cost of queue management technology to one place.

   The case of mesh networks is different and will be discussed later in
   this section.  But the known bottleneck case is generally true for
   Internet access to all sorts of different 'sites', where the word
   'site' includes home networks, small- to medium-sized campus or
   enterprise networks and even cellular devices (Figure 2).  Also, this
   known-bottleneck case tends to be applicable whatever the access link
   technology; whether xDSL, cable, PON, cellular, line of sight
   wireless or satellite.

   Therefore, the full benefit of the L4S service should be available in
   the downstream direction when an L4S AQM is deployed at the ingress
   to this bottleneck link.  And similarly, the full upstream service
   will be available once an L4S AQM is deployed at the ingress into the
   upstream link.  (Of course, multi-homed sites would only see the full
   benefit once all their access links were covered.)
   [ASCII-art diagram not reproduced here: it shows a data centre (DC)
   and core network, with DualQ (DQ) AQMs deployed at each end of the
   access links towards an enterprise/campus site, a home gateway and a
   cellular device.]

        Figure 2: Likely location of DualQ (DQ) Deployments in common
                              access topologies
   Deployment in mesh topologies depends on how over-booked the core is.
   If the core is non-blocking, or at least generously provisioned so
   that the edges are nearly always the bottlenecks, it would only be
   necessary to deploy an L4S AQM at the edge bottlenecks.  For example,
   some data-centre networks are designed with the bottleneck in the
   hypervisor or host NICs, while others bottleneck at the top-of-rack
   switch (both the output ports facing hosts and those facing the
   core).

   An L4S AQM would eventually also need to be deployed at any other
   persistent bottlenecks such as network interconnections, e.g. some
   public Internet exchange points and the ingress and egress to WAN
   links interconnecting data-centres.
6.4.2.  Deployment Sequences

   For any one L4S flow to work, it requires 3 parts to have been
   deployed.  This was the same deployment problem that ECN
   faced [RFC8170] so we have learned from that experience.

   Firstly, L4S deployment exploits the fact that DCTCP already exists
   on many Internet hosts (Windows, FreeBSD and Linux); both servers and
   clients.  Therefore, just deploying an L4S AQM at a network
   bottleneck immediately gives a working deployment of all the L4S
   parts.  DCTCP needs some safety concerns to be fixed for general use
   over the public Internet (see Section 2.3 of
   [I-D.ietf-tsvwg-ecn-l4s-id]), but DCTCP is not on by default, so
   these issues can be managed within controlled deployments or
   controlled trials.

   Secondly, the performance improvement with L4S is so significant that
   it enables new interactive services and products that were not
   previously possible.  It is much easier for companies to initiate new
   work on deployment if there is budget for a new product trial.  If,
   in contrast, there were only an incremental performance improvement
   (as with Classic ECN), spending on deployment tends to be much harder
   to justify.

   Thirdly, the L4S identifier is defined so that initially network
   operators can enable L4S exclusively for certain customers or certain
   applications.  But this is carefully defined so that it does not
   compromise future evolution towards L4S as an Internet-wide service.
   This is because the L4S identifier is defined not only as the end-to-
   end ECN field, but it can also optionally be combined with any other
   packet header or some status of a customer or their access
   link [I-D.ietf-tsvwg-ecn-l4s-id].  Operators could do this anyway,
   even if it were not blessed by the IETF.  However, it is best for the
   IETF to specify that, if they use their own local identifier, it must
   be in combination with the IETF's identifier.  Then, if an operator
   has opted for an exclusive local-use approach, later they only have
   to remove this extra rule to make the service work Internet-wide - it
   will already traverse middleboxes, peerings, etc.
   +-+--------------------+----------------------+---------------------+
   | | Servers or proxies |      Access link     |       Clients       |
   +-+--------------------+----------------------+---------------------+
   |0| DCTCP (existing)   |                      | DCTCP (existing)    |
   +-+--------------------+----------------------+---------------------+
   |1|                    |Add L4S AQM downstream|                     |
   | |       WORKS DOWNSTREAM FOR CONTROLLED DEPLOYMENTS/TRIALS        |
   +-+--------------------+----------------------+---------------------+
   |2| Upgrade DCTCP to   |                      |Replace DCTCP feedb'k|
   | |    TCP Prague      |                      |    with AccECN      |
   | |                 FULLY WORKS DOWNSTREAM                          |
   +-+--------------------+----------------------+---------------------+
   | |                    |                      | Upgrade DCTCP to    |
   |3|                    | Add L4S AQM upstream |    TCP Prague       |
   | |                    |                      |                     |
   | |              FULLY WORKS UPSTREAM AND DOWNSTREAM                |
   +-+--------------------+----------------------+---------------------+

               Figure 3: Example L4S Deployment Sequence
   Figure 3 illustrates some example sequences in which the parts of L4S
   might be deployed.  It consists of the following stages:

   1.  Here, the immediate benefit of a single AQM deployment can be
       seen, but limited to a controlled trial or controlled deployment.
       In this example downstream deployment is first, but in other
       scenarios the upstream might be deployed first.  If no AQM at all
       was previously deployed for the downstream access, an L4S AQM
       greatly improves the Classic service (as well as adding the L4S
       service).  If an AQM was already deployed, the Classic service
       will be unchanged (and L4S will add an improvement on top).

   2.  In this stage, the name 'TCP Prague' [PragueLinux] is used to
       represent a variant of DCTCP that is safe to use in a production
       Internet environment.  If the application is primarily
       unidirectional, 'TCP Prague' at one end will provide all the
       benefit needed.  For TCP transports, Accurate ECN feedback
       (AccECN) [I-D.ietf-tcpm-accurate-ecn] is needed at the other end,
       but it is a generic ECN feedback facility that is already planned
       to be deployed for other purposes, e.g. DCTCP, BBR.  The two ends
       can be deployed in either order, because, in TCP, an L4S
       congestion control only enables itself if it has negotiated the
       use of AccECN feedback with the other end during the connection
       handshake.  Thus, deployment of TCP Prague on a server enables
       L4S trials to move to a production service in one direction,
       wherever AccECN is deployed at the other end.  This stage might
       be further motivated by the performance improvements of TCP
       Prague relative to DCTCP (see Appendix A.2 of
       [I-D.ietf-tsvwg-ecn-l4s-id]).

       Unlike TCP, from the outset, QUIC ECN
       feedback [I-D.ietf-quic-transport] has supported L4S.  Therefore,
       if the transport is QUIC, one-ended deployment of a Prague
       congestion control at this stage is simple and sufficient.

   3.  This is a two-move stage to enable L4S upstream.  An L4S AQM or
       TCP Prague can be deployed in either order as already explained.
       To motivate the first of two independent moves, the deferred
       benefit of enabling new services after the second move has to be
       worth it to cover the first mover's investment risk.  As
       explained already, the potential for new interactive services
       provides this motivation.  An L4S AQM also improves the upstream
       Classic service - significantly if no other AQM has already been
       deployed.

   Note that other deployment sequences might occur.  For instance: the
   upstream might be deployed first; a non-TCP protocol might be used
   end-to-end, e.g. QUIC, RTP; a body such as the 3GPP might require L4S
   to be implemented in 5G user equipment, or other random acts of
   kindness.
6.4.3.  L4S Flow but Non-ECN Bottleneck

   If L4S is enabled between two hosts, the L4S sender is required to
   coexist safely with Reno in response to any drop (see Section 4.3 of
   [I-D.ietf-tsvwg-ecn-l4s-id]).
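
   For illustration only (the normative requirements are in
   [I-D.ietf-tsvwg-ecn-l4s-id]), a Prague-style sender might combine a
   DCTCP-like proportional reduction for ECN marks (RFC 8257) with a
   Reno-like halving on loss, roughly as sketched below; the class and
   parameter names are invented for the example.

      # Illustrative sketch of a scalable sender's dual response
      # (assumed DCTCP-like ECN handling per RFC 8257 plus Reno-like
      # loss handling); not the normative TCP Prague specification.

      class PragueLikeCwnd:
          def __init__(self, cwnd=10.0, gain=1.0 / 16):
              self.cwnd = cwnd      # congestion window in segments
              self.alpha = 1.0      # moving average of the marked fraction
              self.gain = gain      # EWMA gain 'g' (RFC 8257 suggests 1/16)

          def on_round_trip(self, acked, marked, lost):
              frac = marked / acked if acked else 0.0
              self.alpha += self.gain * (frac - self.alpha)
              if lost:
                  # Fall back to a Classic (Reno-friendly) response on loss.
                  self.cwnd = max(2.0, self.cwnd / 2)
              elif marked:
                  # Scalable response: reduce in proportion to marked fraction.
                  self.cwnd = max(2.0, self.cwnd * (1 - self.alpha / 2))
              else:
                  self.cwnd += 1.0  # additive increase of one segment per RTT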
   Unfortunately, as well as protecting Classic traffic, this fall-back
   rule degrades the L4S service whenever there is any loss, even if the
   cause is not persistent congestion at a bottleneck, e.g.:

   o  congestion loss at other transient bottlenecks, e.g. due to bursts
      in shallower queues;

   o  transmission errors, e.g. due to electrical interference;

   o  rate policing.

   Three complementary approaches are in progress to address this issue,
   but they are all currently research:

   o  In Prague congestion control, ignore certain losses deemed
      unlikely to be due to congestion (using some ideas from
      BBR [I-D.cardwell-iccrg-bbr-congestion-control] regarding isolated
      losses).  This could mask any of the above types of loss while
      still coexisting with drop-based congestion controls.

   o  A combination of RACK, L4S and link retransmission without
      resequencing could repair transmission errors without the head of
      line blocking delay usually associated with link-layer
      retransmission [UnorderedLTE], [I-D.ietf-tsvwg-ecn-l4s-id];

   o  Hybrid ECN/drop rate policers (see Section 8.3).

   L4S deployment scenarios that minimize these issues (e.g. over
   wireline networks) can proceed in parallel to this research, in the
   expectation that research success could continually widen L4S
   applicability.
6.4.4. L4S Flow but Classic ECN Bottleneck
   Classic ECN support is starting to materialize on the Internet as an
   increased level of CE marking.  It is hard to detect whether this is
   all due to the addition of support for ECN in the Linux
   implementation of FQ-CoDel, which is not problematic, because FQ
   inherently forces the throughput of each flow to be equal
   irrespective of its aggressiveness.  However, some of this Classic
   ECN marking might be due to single-queue ECN deployment.  This case
   is discussed in Section 4.3 of [I-D.ietf-tsvwg-ecn-l4s-id].
6.4.5.  L4S AQM Deployment within Tunnels

   An L4S AQM uses the ECN field to signal congestion.  So, in common
   with Classic ECN, if the AQM is within a tunnel or at a lower layer,
   correct functioning of ECN signalling requires correct propagation of
   the ECN field up the layers [RFC6040],
   [I-D.ietf-tsvwg-rfc6040update-shim],
   [I-D.ietf-tsvwg-ecn-encap-guidelines].
7.  IANA Considerations (to be removed by RFC Editor)

   This specification contains no IANA considerations.

8.  Security Considerations

8.1.  Traffic Rate (Non-)Policing

   Because the L4S service can serve all traffic that is using the
   capacity of a link, it should not be necessary to rate-police access
   to the L4S service.  In contrast, Diffserv only works if some packets
   get less favourable treatment than others.  So Diffserv has to use
   traffic rate policers to limit how much traffic can be favoured.  In
   turn, traffic policers require traffic contracts between users and
   networks as well as pairwise between networks.  Because L4S will lack
   all this management complexity, it is more likely to work end-to-end.

   During early deployment (and perhaps always), some networks will not
   offer the L4S service.  In general, these networks should not need to
   police L4S traffic - they are required not to change the L4S
   identifier, merely treating the traffic as best efforts traffic, as
   they already treat traffic with ECT(1) today.  At a bottleneck, such
   networks will introduce some queuing and dropping.  When a scalable
   congestion control detects a drop it will have to respond safely with
   respect to Classic congestion controls (as required in Section 4.3 of
   [I-D.ietf-tsvwg-ecn-l4s-id]).  This will degrade the L4S service to
   be no better (but never worse) than Classic best efforts, whenever a
   non-ECN bottleneck is encountered on a path (see Section 6.4.3).

   In some cases, networks that solely support Classic ECN [RFC3168] in
   a single queue bottleneck might opt to police L4S traffic in order to
   protect competing Classic ECN traffic.
   Certain network operators might choose to restrict access to the L4S
   class, perhaps only to selected premium customers as a value-added
   service.  Their packet classifier (item 2 in Figure 1) could identify
   such customers against some other field (e.g. source address range)
   as well as ECN.  If only the ECN L4S identifier matched, but not the
   source address (say), the classifier could direct these packets (from
   non-premium customers) into the Classic queue.  Explaining clearly
   how operators can use additional local classifiers (see
   [I-D.ietf-tsvwg-ecn-l4s-id]) is intended to remove any motivation to
   bleach the L4S identifier.  Then at least the L4S ECN identifier will
   be more likely to survive end-to-end even though the service may not
   be supported at every hop.  Such local arrangements would only
   require simple registered/not-registered packet classification,
   rather than the managed, application-specific traffic policing
   against customer-specific traffic contracts that Diffserv uses.
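
   Purely as an illustration of such a registered/not-registered
   classifier (the address prefix, queue names and logic below are
   invented for the example; they are not part of any specification):

      import ipaddress

      # Illustrative sketch of an operator's local L4S access policy:
      # the prefix and queue names are invented for this example.
      PREMIUM_SOURCES = ipaddress.ip_network("192.0.2.0/24")

      def select_queue(src_addr, ecn_bits):
          l4s_marked = ecn_bits in (0b01, 0b11)      # ECT(1) or CE
          if l4s_marked and ipaddress.ip_address(src_addr) in PREMIUM_SOURCES:
              return "L4S queue"
          # Non-premium L4S packets keep their identifier but are
          # treated as Classic best efforts traffic.
          return "Classic queue"

      print(select_queue("192.0.2.7", 0b01))     # -> L4S queue
      print(select_queue("198.51.100.9", 0b01))  # -> Classic queue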
8.2.  'Latency Friendliness'

   Like the Classic service, the L4S service relies on self-constraint -
   limiting rate in response to congestion.  In addition, the L4S
   service requires self-constraint in terms of limiting latency
   (burstiness).  It is hoped that self-interest and guidance on dynamic
   behaviour (especially flow start-up, which might need to be
   standardized) will be sufficient to prevent transports from sending
   excessive bursts of L4S traffic, given the application's own latency
   will suffer most from such behaviour.

   Whether burst policing becomes necessary remains to be seen.  Without
   it, there will be potential for attacks on the low latency of the L4S
   service.
   If needed, various arrangements could be used to address this
   concern:

   Local bottleneck queue protection:  A per-flow (5-tuple) queue
      protection function [I-D.briscoe-docsis-q-protection] has been
      developed for the low latency queue in DOCSIS, which has adopted
      the DualQ L4S architecture.  It protects the low latency service
      from any queue-building flows that accidentally or maliciously
      classify themselves into the low latency queue.  It is designed to
      score flows based solely on their contribution to queuing (not
      flow rate in itself).  Then, if the shared low latency queue is at
      risk of exceeding a threshold, the function redirects enough
      packets of the highest scoring flow(s) into the Classic queue to
      preserve low latency (a toy sketch of this scoring idea is given
      after this list).
   Distributed traffic scrubbing:  Rather than policing locally at each
      bottleneck, it may only be necessary to address problems
      reactively, e.g. punitively target any deployments of new bursty
      malware, in a similar way to how traffic from flooding attack
      sources is rerouted via scrubbing facilities.

   Local bottleneck per-flow scheduling:  Per-flow scheduling should
      inherently isolate non-bursty flows from bursty ones (see
      Section 5.2 for discussion of the merits of per-flow scheduling
      relative to per-flow policing).

   Distributed access subnet queue protection:  Per-flow queue
      protection could be arranged for a queue structure distributed
      across a subnet, inter-communicating using lower layer control
      messages (see Section 2.1.4 of [QDyn]).  For instance, in a radio
      access network, user equipment already sends regular buffer status
      reports to a radio network controller, which could use this
      information to remotely police individual flows.

   Distributed Congestion Exposure to Ingress Policers:  The Congestion
      Exposure (ConEx) architecture [RFC7713] uses egress audit to
      motivate senders to truthfully signal path congestion in-band,
      where it can be used by ingress policers.  An edge-to-edge variant
      of this architecture is also possible.

   Distributed Domain-edge traffic conditioning:  An architecture
      similar to Diffserv [RFC2475] may be preferred, where traffic is
      proactively conditioned on entry to a domain, rather than
      reactively policed only if it leads to queuing once combined with
      other traffic at a bottleneck.

   Distributed core network queue protection:  The policing function
      could be divided between per-flow mechanisms at the network
      ingress that characterize the burstiness of each flow into a
      signal carried with the traffic, and per-class mechanisms at
      bottlenecks that act on these signals if queuing actually occurs
      once the traffic converges.  This would be somewhat similar to the
      idea behind core stateless fair queuing, which is in turn similar
      to [Nadas20].
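
   The toy sketch below illustrates the scoring idea in the first item
   of this list; it is not the DOCSIS algorithm of
   [I-D.briscoe-docsis-q-protection], and the threshold and scoring
   formula are invented for the example.

      from collections import defaultdict

      # Toy sketch of per-flow queue protection for a shared low latency
      # queue: score each flow by its contribution to queuing and, if the
      # queue risks exceeding a delay threshold, redirect packets of the
      # highest-scoring flow to the Classic queue.  Values are invented.
      CRITICAL_DELAY_S = 0.002

      class QueueProtection:
          def __init__(self):
              self.score = defaultdict(float)   # 5-tuple -> queuing score

          def on_enqueue(self, flow_id, pkt_bytes, queue_delay_s):
              # Accumulate a score proportional to the queuing this
              # packet contributes to / experiences.
              self.score[flow_id] += pkt_bytes * queue_delay_s
              if queue_delay_s > CRITICAL_DELAY_S:
                  worst = max(self.score, key=self.score.get)
                  if worst == flow_id:
                      return "Classic queue"    # redirect heaviest contributor
              return "L4S queue"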
   None of these possible queue protection capabilities are considered a
   necessary part of the L4S architecture, which works without them (in
   a similar way to how the Internet works without per-flow rate
   policing).  Indeed, under normal circumstances, latency policers
   would not intervene, and if operators found they were not necessary
   they could disable them.  Part of the L4S experiment will be to see
   whether such a function is necessary, and which arrangements are most
   appropriate to the size of the problem.
8.3.  Interaction between Rate Policing and L4S

   As mentioned in Section 5.2, L4S should remove the need for low
   latency Diffserv classes.  However, those Diffserv classes that give
   certain applications or users priority over capacity would still be
   applicable in certain scenarios (e.g. corporate networks).  Then,
   within such Diffserv classes, L4S would often be applicable to give
   traffic low latency and low loss as well.  Within such a Diffserv
   class, the bandwidth available to a user or application is often
   limited by a rate policer.  Similarly, in the default Diffserv class,
   rate policers are used to partition shared capacity.
   A classic rate policer drops any packets exceeding a set rate,
   usually also giving a burst allowance (variants exist where the
   policer re-marks non-compliant traffic to a discard-eligible Diffserv
   codepoint, so they may be dropped elsewhere during contention).
   Whenever L4S traffic encounters one of these rate policers, it will
   experience drops and the source will have to fall back to a Classic
   congestion control, thus losing the benefits of L4S (Section 6.4.3).
   So, in networks that already use rate policers and plan to deploy
   L4S, it will be preferable to redesign these rate policers to be more
   friendly to the L4S service.
L4S-friendly rate policing is currently a research area (note that
this is not the same as latency policing). It might be achieved by
setting a threshold where ECN marking is introduced, such that it is
just under the policed rate or just under the burst allowance where
drop is introduced. This could be applied to various types of rate
policer, e.g. [RFC2697], [RFC2698] or the 'local' (non-ConEx) variant
of the ConEx congestion policer [I-D.briscoe-conex-policing]. It
might also be possible to design scalable congestion controls to
respond less catastrophically to loss that has not been preceded by a
period of increasing delay.
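To make the marking-threshold idea concrete, the following Python
fragment is a minimal sketch of a single-rate token-bucket policer
that CE-marks ECT(1) packets once the remaining burst allowance falls
below a marking threshold, and only drops once the allowance is
exhausted. The class name, parameters and the 25% default threshold
are illustrative assumptions; this is one possible interpretation, not
a specified L4S-friendly policer design.

   import time

   class L4SFriendlyPolicer:
       """Toy single-rate token-bucket policer with an ECN-marking threshold."""

       def __init__(self, rate_bps, burst_bytes, mark_fraction=0.25):
           self.fill_rate = rate_bps / 8.0   # token fill rate, bytes per second
           self.burst = float(burst_bytes)   # bucket depth = burst allowance
           self.mark_threshold = self.burst * mark_fraction
           self.tokens = self.burst
           self.last = time.monotonic()

       def police(self, pkt_len, ect1, now=None):
           """Return 'forward', 'mark' (set CE) or 'drop' for one packet."""
           now = time.monotonic() if now is None else now
           self.tokens = min(self.burst,
                             self.tokens + (now - self.last) * self.fill_rate)
           self.last = now
           if self.tokens < pkt_len:
               return 'drop'                 # burst allowance exhausted
           self.tokens -= pkt_len
           # Start CE-marking L4S (ECT(1)) packets just before drop would
           # start, so a scalable sender slows down without seeing loss here.
           if ect1 and self.tokens < self.mark_threshold:
               return 'mark'
           return 'forward'

Classic (Not-ECT or ECT(0)) packets are never marked by this sketch,
so they see the same behaviour as with a conventional token-bucket
policer.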
The design of L4S-friendly rate policers will require a separate
dedicated document. For further discussion of the interaction
between L4S and Diffserv, see [I-D.briscoe-tsvwg-l4s-diffserv].
8.4. ECN Integrity
Receiving hosts can fool a sender into downloading faster by
suppressing feedback of ECN marks (or of losses if retransmissions
are not necessary or available otherwise). Various ways to protect
transport feedback integrity have been developed. For instance:
o The sender can test the integrity of the receiver's feedback by
occasionally setting the IP-ECN field to the congestion
experienced (CE) codepoint, which is normally only set by a
congested link. Then the sender can test whether the receiver's
feedback faithfully reports what it expects (see 2nd para of
Section 20.2 of [RFC3168]).
o A network can enforce a congestion response to its ECN markings
(or packet losses) by auditing congestion exposure
(ConEx) [RFC7713].
o The TCP authentication option (TCP-AO [RFC5925]) can be used to
detect tampering with TCP congestion feedback.
o The ECN Nonce [RFC3540] was proposed to detect tampering with
congestion feedback, but it has been reclassified as
historic [RFC8311].
Appendix C.1 of [I-D.ietf-tsvwg-ecn-l4s-id] gives more details of
these techniques including their applicability and pros and cons.
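As an illustration of the first technique listed above, the following
Python fragment is a minimal sketch of a sender-side check:
occasionally a packet is sent with CE already set, and the
corresponding feedback is checked for the mark the receiver must have
seen. The class and method names are illustrative assumptions; a real
implementation would sit inside the transport's send and
feedback-processing paths.

   import random

   class FeedbackIntegrityTester:
       """Toy sender-side check of receiver ECN feedback (illustrative only)."""

       def __init__(self, probe_probability=0.001):
           self.probe_probability = probe_probability
           self.outstanding = set()   # sequence numbers sent pre-marked CE
           self.suspect = False

       def codepoint_for(self, seq):
           """Choose the IP-ECN codepoint for an outgoing packet."""
           if random.random() < self.probe_probability:
               self.outstanding.add(seq)
               return 'CE'            # deliberately pre-marked probe packet
           return 'ECT(1)'

       def on_feedback(self, seq, ce_reported):
           """Check the feedback covering a probe; True means it is suspect."""
           if seq in self.outstanding:
               self.outstanding.discard(seq)
               if not ce_reported:
                   self.suspect = True   # receiver hid a mark it must have seen
           return self.suspect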
8.5. Privacy Considerations
As discussed in Section 5.2, the L4S architecture does not preclude
approaches that inspect end-to-end transport layer identifiers. For
instance it is simple to add L4S support to FQ-CoDel, which
classifies by application flow ID in the network. However, the main
innovation of L4S is the DualQ AQM framework that does not need to
inspect any deeper than the outermost IP header, because the L4S
identifier is in the IP-ECN field.
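As a minimal illustration of that point, the classification step needs
nothing more than the two ECN bits of the outermost IP header. The
codepoint values below follow [RFC3168]; the function name is
illustrative.

   # IP-ECN codepoints as defined in RFC 3168.
   NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

   def dualq_classify(ecn_bits):
       """Choose a queue using only the 2-bit IP-ECN field of the outer header."""
       if ecn_bits in (ECT_1, CE):   # L4S identifier (and already-marked packets)
           return 'low latency queue'
       return 'classic queue'        # Not-ECT and ECT(0) traffic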
Thus, the L4S architecture enables ultra-low queuing delay without
_requiring_ inspection of information above the IP layer. This means
that users who want to encrypt application flow identifiers, e.g. in
IPSec or other encrypted VPN tunnels, don't have to sacrifice low
delay [RFC8404].
Because L4S can provide low delay for a broad set of applications
that choose to use it, there is no need for individual applications
or classes within that broad set to be distinguishable in any way
while traversing networks. This removes much of the ability to
correlate between the delay requirements of traffic and other
identifying features [RFC6973]. There may be some types of traffic
that prefer not to use L4S, but the coarse binary categorization of
traffic reveals very little that could be exploited to compromise
privacy.
9. Acknowledgements
Thanks to Richard Scheffenegger, Wes Eddy, Karen Nielsen, David Black
and Jake Holland for their useful review comments.
Bob Briscoe and Koen De Schepper were part-funded by the European
Community under its Seventh Framework Programme through the Reducing
Internet Transport Latency (RITE) project (ICT-317700). Bob Briscoe
was also part-funded by the Research Council of Norway through the
TimeIn project, partly by CableLabs and partly by the Comcast
Innovation Fund. The views expressed here are solely those of the
authors.
10. Informative References
[AFCD] Xue, L., Kumar, S., Cui, C., Kondikoppa, P., Chiu, C-H.,
and S-J. Park, "Towards fair and low latency next
generation high speed networks: AFCD queuing", Journal of
Network and Computer Applications 70:183--193, July 2016.
[DCttH15] De Schepper, K., Bondarenko, O., Briscoe, B., and I.
Tsang, "`Data Centre to the Home': Ultra-Low Latency for
All", RITE project Technical Report, 2015,
<http://riteproject.eu/publications/>.
[DOCSIS3.1]
CableLabs, "MAC and Upper Layer Protocols Interface
(MULPI) Specification, CM-SP-MULPIv3.1", Data-Over-Cable
Service Interface Specifications DOCSIS(R) 3.1 Version i17
draft-briscoe-tsvwg-l4s-diffserv-02 (work in progress),
November 2018.
[I-D.cardwell-iccrg-bbr-congestion-control]
Cardwell, N., Cheng, Y., Yeganeh, S., and V. Jacobson,
"BBR Congestion Control", draft-cardwell-iccrg-bbr-
congestion-control-00 (work in progress), July 2017.
[I-D.ietf-quic-transport]
Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed
and Secure Transport", draft-ietf-quic-transport-32 (work
in progress), October 2020.
[I-D.ietf-tcpm-accurate-ecn]
Briscoe, B., Kuehlewind, M., and R. Scheffenegger, "More
Accurate ECN Feedback in TCP", draft-ietf-tcpm-accurate-
ecn-11 (work in progress), March 2020.
[I-D.ietf-tcpm-generalized-ecn]
Bagnulo, M. and B. Briscoe, "ECN++: Adding Explicit
Congestion Notification (ECN) to TCP Control Packets",
draft-ietf-tcpm-generalized-ecn-05 (work in progress),
November 2019.
[I-D.ietf-tsvwg-aqm-dualq-coupled]
Schepper, K., Briscoe, B., and G. White, "DualQ Coupled
AQMs for Low Latency, Low Loss and Scalable Throughput
(L4S)", draft-ietf-tsvwg-aqm-dualq-coupled-12 (work in
progress), July 2020.
[I-D.ietf-tsvwg-ecn-encap-guidelines]
Briscoe, B., Kaippallimalil, J., and P. Thaler,
"Guidelines for Adding Congestion Notification to
Protocols that Encapsulate IP", draft-ietf-tsvwg-ecn-
encap-guidelines-13 (work in progress), May 2019.
[I-D.ietf-tsvwg-ecn-l4s-id]
Schepper, K. and B. Briscoe, "Identifying Modified
Explicit Congestion Notification (ECN) Semantics for
Ultra-Low Queuing Delay (L4S)", draft-ietf-tsvwg-ecn-l4s-
id-10 (work in progress), March 2020.
[I-D.ietf-tsvwg-rfc6040update-shim]
Briscoe, B., "Propagating Explicit Congestion Notification
Across IP Tunnel Headers Separated by a Shim", draft-ietf-
tsvwg-rfc6040update-shim-10 (work in progress), March
2020.
[I-D.morton-tsvwg-codel-approx-fair]
Morton, J. and P. Heist, "Controlled Delay Approximate
Fairness AQM", draft-morton-tsvwg-codel-approx-fair-01
(work in progress), March 2020.
[I-D.smith-encrypted-traffic-management]
Smith, K., "Network management of encrypted traffic",
draft-smith-encrypted-traffic-management-05 (work in
progress), May 2016.
[I-D.sridharan-tcpm-ctcp]
Sridharan, M., Tan, K., Bansal, D., and D. Thaler,
"Compound TCP: A New TCP Congestion Control for High-Speed
and Long Distance Networks", draft-sridharan-tcpm-ctcp-02
tsvwg-nqb-02 (work in progress), June 2019.
[L4Sdemo16]
Bondarenko, O., De Schepper, K., Tsang, I., and B.
Briscoe, "Ultra-Low Delay for All: Live Experience,
Live Analysis", Proc. MMSYS'16 pp33:1--33:4, May 2016,
<http://dl.acm.org/citation.cfm?doid=2910017.2910633
(videos of demos:
https://riteproject.eu/dctth/#1511dispatchwg )>.
[LEDBAT_AQM]
Al-Saadi, R., Armitage, G., and J. But, "Characterising
LEDBAT Performance Through Bottlenecks Using PIE, FQ-CoDel
and FQ-PIE Active Queue Management", Proc. IEEE 42nd
Conference on Local Computer Networks (LCN) 278--285,
2017, <https://ieeexplore.ieee.org/document/8109367>.
[Mathis09] Mathis, M., "Relentless Congestion Control", PFLDNeT'09,
May 2009, <https://www.gdt.id.au/~gdt/
presentations/2010-07-06-questnet-tcp/reference-
materials/papers/mathis-relentless-congestion-
control.pdf>.
[McIlroy78]
McIlroy, M., Pinson, E., and B. Tague, "UNIX Time-Sharing
System: Foreword", The Bell System Technical Journal
57:6(1902--1903), July 1978,
<https://archive.org/details/bstj57-6-1899>.
[Nadas20] Nadas, S., Gombos, G., Fejes, F., and S. Laki, "A
Congestion Control Independent L4S Scheduler", Proc.
Applied Networking Research Workshop (ANRW '20) 45--51,
July 2020.
[NewCC_Proc]
Eggert, L., "Experimental Specification of New Congestion
Control Algorithms", IETF Operational Note ion-tsv-alt-cc,
July 2007.
[PragueLinux]
Briscoe, B., De Schepper, K., Albisser, O., Misund, J.,
Tilmans, O., Kuehlewind, M., and A. Ahmed, "Implementing
the `TCP Prague' Requirements for Low Latency Low Loss
Scalable Throughput (L4S)", Proc. Linux Netdev 0x13,
March 2019, <https://www.netdevconf.org/0x13/
session.html?talk-tcp-prague-l4s>.
[QDyn] Briscoe, B., "Rapid Signalling of Queue Dynamics",
bobbriscoe.net Technical Report TR-BB-2017-001;
arXiv:1904.07044 [cs.NI], September 2017,
<https://arxiv.org/abs/1904.07044>.
[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z.,
and W. Weiss, "An Architecture for Differentiated
Services", RFC 2475, DOI 10.17487/RFC2475, December 1998,
<https://www.rfc-editor.org/info/rfc2475>.
[RFC2697] Heinanen, J. and R. Guerin, "A Single Rate Three Color
Marker", RFC 2697, DOI 10.17487/RFC2697, September 1999,
<https://www.rfc-editor.org/info/rfc2697>.
[RFC2698] Heinanen, J. and R. Guerin, "A Two Rate Three Color
Marker", RFC 2698, DOI 10.17487/RFC2698, September 1999,
<https://www.rfc-editor.org/info/rfc2698>.
[RFC2884] Hadi Salim, J. and U. Ahmed, "Performance Evaluation of
Explicit Congestion Notification (ECN) in IP Networks",
RFC 2884, DOI 10.17487/RFC2884, July 2000,
<https://www.rfc-editor.org/info/rfc2884>.
[RFC6040] Briscoe, B., "Tunnelling of Explicit Congestion
Notification", RFC 6040, DOI 10.17487/RFC6040, November
2010, <https://www.rfc-editor.org/info/rfc6040>.
[RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P.,
and K. Carlberg, "Explicit Congestion Notification (ECN)
for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August
2012, <https://www.rfc-editor.org/info/rfc6679>.
[RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J.,
Morris, J., Hansen, M., and R. Smith, "Privacy
Considerations for Internet Protocols", RFC 6973,
DOI 10.17487/RFC6973, July 2013,
<https://www.rfc-editor.org/info/rfc6973>.
[RFC7540] Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext
Transfer Protocol Version 2 (HTTP/2)", RFC 7540,
DOI 10.17487/RFC7540, May 2015,
<https://www.rfc-editor.org/info/rfc7540>.
[RFC7560] Kuehlewind, M., Ed., Scheffenegger, R., and B. Briscoe,
"Problem Statement and Requirements for Increased Accuracy
in Explicit Congestion Notification (ECN) Feedback",
RFC 7560, DOI 10.17487/RFC7560, August 2015,
<https://www.rfc-editor.org/info/rfc7560>.
[RFC7713] Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx)
Concepts, Abstract Mechanism, and Requirements", RFC 7713,
DOI 10.17487/RFC7713, December 2015,
<https://www.rfc-editor.org/info/rfc7713>.
[RFC8033] Pan, R., Natarajan, P., Baker, F., and G. White,
"Proportional Integral Controller Enhanced (PIE): A
Lightweight Control Scheme to Address the Bufferbloat
Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017,
<https://www.rfc-editor.org/info/rfc8033>.
[RFC8034] White, G. and R. Pan, "Active Queue Management (AQM) Based
on Proportional Integral Controller Enhanced (PIE) for
Data-Over-Cable Service Interface Specifications (DOCSIS)
Cable Modems", RFC 8034, DOI 10.17487/RFC8034, February
2017, <https://www.rfc-editor.org/info/rfc8034>.
[RFC8170] Thaler, D., Ed., "Planning for Protocol Adoption and
Subsequent Transitions", RFC 8170, DOI 10.17487/RFC8170,
May 2017, <https://www.rfc-editor.org/info/rfc8170>.
[RFC8257] Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L.,
and G. Judd, "Data Center TCP (DCTCP): TCP Congestion
Control for Data Centers", RFC 8257, DOI 10.17487/RFC8257,
October 2017, <https://www.rfc-editor.org/info/rfc8257>.
[RFC8290] Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys,
J., and E. Dumazet, "The Flow Queue CoDel Packet Scheduler
and Active Queue Management Algorithm", RFC 8290,
DOI 10.17487/RFC8290, January 2018,
<https://www.rfc-editor.org/info/rfc8290>.
[RFC8311] Black, D., "Relaxing Restrictions on Explicit Congestion
Notification (ECN) Experimentation", RFC 8311,
DOI 10.17487/RFC8311, January 2018,
<https://www.rfc-editor.org/info/rfc8311>.
[RFC8312] Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and
R. Scheffenegger, "CUBIC for Fast Long-Distance Networks",
RFC 8312, DOI 10.17487/RFC8312, February 2018,
<https://www.rfc-editor.org/info/rfc8312>.
[RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of
Pervasive Encryption on Operators", RFC 8404,
DOI 10.17487/RFC8404, July 2018,
<https://www.rfc-editor.org/info/rfc8404>.
[RFC8511] Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst,
"TCP Alternative Backoff with ECN (ABE)", RFC 8511,
DOI 10.17487/RFC8511, December 2018,
<https://www.rfc-editor.org/info/rfc8511>.
[TCP-CA] Jacobson, V. and M. Karels, "Congestion Avoidance and
Control", Lawrence Berkeley Labs Technical Report,
November 1988, <http://ee.lbl.gov/papers/congavoid.pdf>.
[TCP-sub-mss-w] [TCP-sub-mss-w]
WG: The IETF WG most relevant to this requirement. The "tcpm/iccrg"
combination refers to the procedure typically used for congestion
control changes, where tcpm owns the approval decision, but uses
the iccrg for expert review [NewCC_Proc];
TCP: Applicable to all forms of TCP congestion control;
DCTCP: Applicable to Data Center TCP as currently used (in
controlled environments);
DCTCP bis: Applicable to any future Data Center TCP congestion
control intended for controlled environments;
XXX Prague: Applicable to a Scalable variant of XXX (TCP/SCTP/RMCAT)
congestion control.
+-----+------------------------+------------------------------------+
| Req | Requirement            | Reference                          |
| #   |                        |                                    |
+-----+------------------------+------------------------------------+
| 0   | ARCHITECTURE           |                                    |