Network Working Group                                        Matt Mathis
INTERNET-DRAFT                        Pittsburgh Supercomputing Center
Expiration Date: Jan 1998                                      July 1997

                    Empirical Bulk Transfer Capacity

               < draft-ietf-bmwg-ippm-treno-btc-01.txt >
Status of this Document

This document is an Internet-Draft.  Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups.  Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).
Abstract:

Bulk Transport Capacity (BTC) is a measure of a network's ability to
transfer significant quantities of data with a single congestion-aware
transport connection (e.g. state-of-the-art TCP).  For many
applications the BTC of the underlying network dominates the overall
elapsed time for the application, and thus dominates the performance
as perceived by a user.  The BTC is a property of an IP cloud (links,
routers, switches, etc.) between a pair of hosts.  It does not include
the hosts themselves (or their transport-layer software).  However,
congestion control is crucial to the BTC metric because the Internet
depends on the end systems to fairly divide the available bandwidth on
the basis of common congestion behavior.  The BTC metric is based on
the performance of a reference congestion control algorithm that has
particularly uniform and stable behavior.

Introduction:
Bulk Transport Capacity (BTC) is a measure of a network's ability to
transfer significant quantities of data with a single congestion-aware
transport connection (e.g. state-of-the-art TCP).  For many
applications the BTC of the underlying network dominates the overall
elapsed time for the application, and thus dominates the performance
as perceived by a user.  Examples of such applications include FTP and
other network copy utilities.

The BTC is a property of an IP cloud (links, routers, switches, etc.)
between a pair of hosts.  It does not include the hosts themselves (or
their transport-layer software).  However, congestion control is
crucial to the BTC metric because the Internet depends on the end
systems to fairly divide the available bandwidth on the basis of
common congestion behavior.
Four standard congestion control algorithms are described in RFC2001:
Slow-start, Congestion Avoidance, Fast Retransmit and Fast Recovery.
Of these algorithms, Congestion Avoidance drives the steady-state bulk
transfer behavior of TCP.  It calls for opening the congestion window
by one segment size on each round trip time, and closing it by half on
congestion, as signaled by lost segments.
Slow-start is part of TCP's transient behavior.  It is used to quickly
bring new or recently timed-out connections up to an appropriate
congestion window.
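As an illustration only (not part of the metric definition), the
window dynamics of these two algorithms can be sketched as below.
This is a minimal sketch in C: cwnd and ssthresh are kept in units of
segments here, whereas real implementations keep them in bytes and
include many details covered in RFC2001.

    /*
     * Minimal sketch of the window dynamics described above.
     * cwnd and ssthresh are in segments; real implementations keep
     * them in bytes and handle details not shown here (RFC2001).
     */

    /* Called once per acknowledged segment. */
    void window_open(double *cwnd, double ssthresh)
    {
        if (*cwnd < ssthresh)
            *cwnd += 1.0;          /* Slow-start: cwnd doubles per RTT */
        else
            *cwnd += 1.0 / *cwnd;  /* Congestion Avoidance: +1 segment per RTT */
    }

    /* Called once per congestion signal (lost segment). */
    void window_close(double *cwnd, double *ssthresh)
    {
        *ssthresh = *cwnd / 2.0;   /* close the window by half */
        *cwnd = *ssthresh;
    }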
In Reno TCP, Fast Retransmit and Fast Recovery are used to support the
Congestion Avoidance algorithm during recovery from lost segments.
During the recovery interval the data receiver sends duplicate
acknowledgements, which the data sender must use to identify missing
segments as well as to estimate the quantity of outstanding data in
the network.  The research community has observed unpredictable or
unstable TCP performance caused by errors and uncertainties in the
estimation of outstanding data [Lakshman94, Floyd95, Hoe95].
Simulations of reference TCP implementations have uncovered situations
where incidental changes in other parts of the network have a large
effect on performance [Mathis96].  Other simulations have shown that
under some conditions, slightly better networks (higher bandwidth or
lower delay) yield lower throughput [This is easy to construct, but
has it been published?].  As a consequence, even reference TCP
implementations do not make good metrics.
Furthermore, many TCP implementations in use in the Internet today
have outright bugs which can have arbitrary and unpredictable effects
on performance [Comer94, Brakmo95, Paxson97a, Paxson97b].
The difficulties with using TCP for measurement can be overcome by
using the Congestion Avoidance algorithm by itself, in isolation from
the other algorithms.  In [Mathis97] it is shown that the performance
of the Congestion Avoidance algorithm can be predicted by a simple
analytical model, which was derived in [Ott96a, Ott96b].  The model
predicts the performance of the Congestion Avoidance algorithm as a
function of the round trip time, the TCP segment size, and the
probability of receiving a congestion signal (i.e. packet loss).  The
paper shows that the model accurately predicts the performance of TCP
using the SACK option [RFC2018] under a wide range of conditions.  If
losses are isolated (no more than one per round trip) then Fast
Recovery successfully estimates the actual congestion window during
recovery, and Reno TCP also fits the model.
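For illustration, the model discussed above can be summarized
(neglecting timeouts and delayed-ACK effects) as bandwidth =
(MSS/RTT) * (C/sqrt(p)), where C is a constant near 1; a commonly used
value is sqrt(3/2).  A minimal sketch in C, assuming this simplified
form:

    #include <math.h>

    /*
     * Simplified Congestion Avoidance performance model: predicted
     * bandwidth in bytes/sec given the segment size (bytes), the
     * round trip time (seconds) and the probability p of receiving
     * a congestion signal.  C = sqrt(3/2) is the constant for the
     * simple deterministic case.
     */
    double model_bw(double mss, double rtt, double p)
    {
        const double C = sqrt(3.0 / 2.0);
        return (mss / rtt) * (C / sqrt(p));
    }

For example, model_bw(1460, 0.1, 0.01) predicts roughly 180 kbytes/sec
for 1460 byte segments, a 100 ms round trip time and 1% loss.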
This version of the BTC metric is based on the TReno ("tree-no")
diagnostic, which implements a protocol-independent version of the
Congestion Avoidance algorithm.  TReno's internal protocol is designed
to accurately implement the Congestion Avoidance algorithm under a
very wide range of conditions, and to diagnose timeouts when they
interrupt Congestion Avoidance.  In [Mathis97] it is observed that
TReno fits the same performance model as SACK and Reno TCPs.
[Although the paper was written using an older version of TReno, which
has less accurate internal measurements.]
Implementing the Congestion Avoidance algorithm within a diagnostic
tool eliminates calibration problems associated with the
non-uniformity of current TCP implementations.  However, like all
empirical metrics it introduces new problems, most notably the need to
certify the correctness of the implementation and to verify that there
are not systematic errors due to limitations of the tester.

Many of the calibration checks can be included in the measurement
process itself.  The TReno program includes error and warning messages
for many conditions that indicate either problems with the
infrastructure or in some cases problems with the measurement process.
Other checks need to be performed manually.
Metric Name: TReno-Type-P-Bulk-Transfer-Capacity
    (e.g. TReno-UDP-BTC)
Metric Parameters: A pair of IP addresses, Src (aka "tester") and Dst
(aka "target"), a start time T and initial MTU.
Definition: The average data rate attained by the Congestion Avoidance
algorithm, while using type-P packets to probe the forward (Src to
Dst) path.  In the case of ICMP ping, these messages also probe the
return path.

Metric Units: bits per second
Ancillary results:
* Statistics over the entire test (data transferred, duration and
  average rate)
* Statistics over the Congestion Avoidance portion of the test (data
  transferred, duration and average rate)
* Path property statistics (MTU, Min RTT, max cwnd during Congestion
  Avoidance and max cwnd during Slow-start)
* Direct measures of the analytic model parameters (number of
  congestion signals, average RTT)
* Indications of which TCP algorithms must be present to attain the
  same performance
* The estimated load/BW/buffering used on the return path
* Warnings about data transmission abnormalities (e.g. packets
  out-of-order, events that cause timeouts)
* Warnings about conditions which may affect metric accuracy (e.g.
  insufficient tester buffering)
* Alarms about serious data transmission abnormalities (e.g. data
  duplicated in the network)
* Alarms about internal inconsistencies of the tester and events which
  might invalidate the results
* IP address/name of the responding target
* TReno version
Method: Run the TReno program on the tester with the chosen packet
type addressed to the target.  Record both the BTC and the ancillary
results.

Manual calibration checks (see detailed explanations below):
* Verify that the tester and target have sufficient raw bandwidth to
  sustain the test.
* Verify that the tester and target have sufficient buffering to
  support the window needed by the test.
* Verify that there is not any other system activity on the tester or
  target.
* Verify that the return path is not a bottleneck at the load needed
  to sustain the test.
* Verify that the IP address reported in the replies is an appropriate
  interface of the selected target.
Version control:
* Record the precise TReno version (-V switch).
* Record the precise tester OS version, CPU version and speed, and
  interface type and version.
Discussion:
Note that the BTC metric is defined specifically to be the average
data rate between the source and destination hosts.  The ancillary
results are designed to detect possible measurement problems, and to
help diagnose the network.  The ancillary results should not be used
as metrics in their own right.
The current version of TReno does not include an accurate model for
TCP timeouts or their effect on average throughput.  TReno takes the
view that timeouts reflect an abnormality in the network, and should
be diagnosed as such.
There are many possible reasons why a TReno measurement might not
agree with the performance obtained by a TCP-based application.  Some
key ones include: older TCPs missing key algorithms such as MTU
discovery, support for large windows or SACK, or mis-tuning of either
the data source or sink.

Some network conditions which require the newer TCP algorithms are
detected by TReno and reported in the ancillary results.  Other
documents will cover methods to diagnose the difference between TReno
and TCP performance.
It would raise the accuracy of TReno's traceroute mode if the ICMP "TTL
exceeded" messages were generated at the target and transmitted along the
return path with elevated priority (reduced losses and queuing delays).
People using the TReno metric as part of procurement documents should
be aware that in many circumstances MTU has an intrinsic and large
impact on overall path performance.  Under some conditions the
difficulty in meeting a given performance specification is inversely
proportional to the square of the path MTU.  (e.g. Halving the
specified MTU makes meeting the bandwidth specification 4 times
harder.)
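This inverse-square relationship follows directly from the simplified
model sketched earlier: bandwidth is proportional to MSS/sqrt(p), so
for a fixed round trip time and bandwidth target, halving the MTU
shrinks the tolerable loss probability by a factor of four.  A brief
sketch, under the same simplifying assumptions as before:

    #include <math.h>

    /*
     * From bw = (mss/rtt) * C/sqrt(p), the largest loss probability
     * that still meets a given bandwidth target is
     * p = (C * mss / (rtt * bw))^2.  Halving mss quarters the loss
     * budget, making the specification 4 times harder to meet.
     */
    double loss_budget(double mss, double rtt, double bw)
    {
        const double C = sqrt(3.0 / 2.0);
        double x = C * mss / (rtt * bw);
        return x * x;
    }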
When used as an end-to-end metric TReno presents exactly the same load to
the network as a properly tuned state-of-the-art bulk TCP stream between
the same pair of hosts. Although the connection is not transferring useful
data, it is no more wasteful than fetching an unwanted web page with the
same transfer time.
Calibration checks:
The following discussion assumes that the TReno diagnostic is
implemented as a user mode program running under a standard operating
system.  Other implementations, such as those in dedicated measurement
instruments, can have stronger built-in calibration checks.
The raw performance (bandwidth) limitations of both the tester and
target should be measured by running TReno in a controlled environment
(e.g. a bench test).  Ideally the observed performance limits should
be validated by diagnosing the nature of the bottleneck and verifying
that it agrees with other benchmarks of the tester and target (e.g.
that TReno performance agrees with direct measures of backplane or
memory bandwidth or other bottleneck as appropriate).  These raw
performance limitations may be obtained in advance and recorded for
later reference.  Currently no routers are reliable targets, although
under some conditions they can be used for meaningful measurements.
When testing between a pair of modern computer systems at a few
megabits per second or less, the tester and target are unlikely to be
the bottleneck.

TReno may not be accurate, and should not be used as a formal metric,
at rates above half of the known tester or target limits.  This is
because during the initial Slow-start TReno needs to be able to send
bursts which are twice the average data rate.
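A hypothetical pre-test check based on this rule (the bench-tested
limits are assumed to have been recorded as described above):

    /*
     * Hypothetical sanity check: a formal BTC measurement should not
     * be reported when the expected rate exceeds half of the
     * bench-tested tester or target limit, since Slow-start must be
     * able to send bursts at twice the average data rate.
     */
    int rate_within_limits(double expected_bps, double tester_limit_bps,
                           double target_limit_bps)
    {
        double limit = tester_limit_bps < target_limit_bps ?
                       tester_limit_bps : target_limit_bps;
        return expected_bps <= limit / 2.0;
    }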
Likewise, if the link to the first hop is not more than twice as fast
as the entire path, some of the path properties such as max cwnd
during Slow-start may reflect the tester's link interface, and not the
path itself.

Verifying that the tester and target have sufficient buffering is
difficult.  If they do not have sufficient buffer space, then losses
at their own queues may contribute to the apparent losses along the
path.  There are several difficulties in verifying the tester and
target buffer capacity.  First, there are no good tests of the
target's buffer capacity at all.  Second, all validation of the
tester's buffering depends in some way on the accuracy of reports by
the tester's own operating system.  Third, there is the confusing
result that in many circumstances (particularly when there is much
more than sufficient average tester performance) insufficient
buffering in the tester does not adversely impact measured
performance.
TReno reports (as calibration alarms) any events in which transmit
packets were refused due to insufficient buffer space.  It reports a
warning if the maximum measured congestion window is larger than the
reported buffer space.  Although these checks are likely to be
sufficient in most cases they are probably not sufficient in all
cases, and will be the subject of future research.
Note that on a timesharing or multi-tasking system, other activity on
the tester introduces burstiness due to operating system scheduler
latency.  Since some queuing disciplines discriminate against bursty
sources, it is important that there be no other system activity during
a test.  This should be confirmed with other operating system specific
tools.
In ICMP mode TReno measures the net effect of both the forward and
return paths on a single data stream.  Bottlenecks and packet losses
in the forward and return paths are treated equally.
In traceroute mode, TReno computes and reports the load it contributes
to the return path.  Unlike real TCP, TReno can not distinguish
between losses on the forward and return paths, so ideally we want the
return path to introduce as little loss as possible.  A good way to
test whether the return path has a large effect on a measurement is to
reduce the forward path messages down to ACK size (40 bytes), and
verify that the measured packet rate is improved by at least a factor
of two.  [More research is needed.]
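For illustration, the return-path load can be approximated from the
measured packet rate, since each forward probe elicits one ACK-sized
reply.  This is a sketch of that approximation, not TReno's actual
internal computation:

    /*
     * Illustrative estimate of the load contributed to the return
     * path: one 40 byte (ACK-sized) reply per forward packet.
     */
    double return_path_load_bps(double fwd_packets_per_sec)
    {
        const double reply_bytes = 40.0;
        return fwd_packets_per_sec * reply_bytes * 8.0;  /* bits/sec */
    }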
References
[Brakmo95] Brakmo, S., Peterson, L., "Performance problems in BSD4.4
TCP", Proceedings of ACM SIGCOMM '95, October 1995.

[Comer94] Comer, D., Lin, J., "Probing TCP Implementations", USENIX
Summer 1994, June 1994.

[Floyd95] Floyd, S., "TCP and successive fast retransmits", February
1995.  Obtain via ftp://ftp.ee.lbl.gov/papers/fastretrans.ps

[Hoe95] Hoe, J., "Startup dynamics of TCP's congestion control and
avoidance schemes", Master's thesis, Massachusetts Institute of
Technology, June 1995.

[Jacobson88] Jacobson, V., "Congestion Avoidance and Control",
Proceedings of SIGCOMM '88, Stanford, CA, August 1988.

[Mathis96] Mathis, M., Mahdavi, J., "Forward Acknowledgment: Refining
TCP Congestion Control", Proceedings of ACM SIGCOMM '96, Stanford, CA,
August 1996.

[RFC2018] Mathis, M., Mahdavi, J., Floyd, S., Romanow, A., "TCP
Selective Acknowledgment Options", October 1996.  Obtain via
ftp://ds.internic.net/rfc/rfc2018.txt

[Mathis97] Mathis, M., Semke, J., Mahdavi, J., Ott, T., "The
Macroscopic Behavior of the TCP Congestion Avoidance Algorithm",
Computer Communications Review, 27(3), July 1997.

[Ott96a] Ott, T., Kemperman, J., Mathis, M., "The Stationary Behavior
of Ideal TCP Congestion Avoidance", In progress, August 1996.  Obtain
via pub/tjo/TCPwindow.ps using anonymous ftp to ftp.bellcore.com

[Ott96b] Ott, T., Kemperman, J., Mathis, M., "Window Size Behavior in
TCP/IP with Constant Loss Probability", DIMACS Special Year on
Networks, Workshop on Performance of Real-Time Applications on the
Internet, November 1996.

[Paxson97a] Paxson, V., "Automated Packet Trace Analysis of TCP
Implementations", Proceedings of ACM SIGCOMM '97, August 1997.

[Paxson97b] Paxson, V., editor, "Known TCP Implementation Problems",
Work in progress: http://reality.sgi.com/sca/tcp-impl/prob-01.txt

[Stevens94] Stevens, W., "TCP/IP Illustrated, Volume 1: The
Protocols", Addison-Wesley, 1994.

[RFC2001] Stevens, W., "TCP Slow Start, Congestion Avoidance, Fast
Retransmit, and Fast Recovery Algorithms", January 1997.  Obtain via
ftp://ds.internic.net/rfc/rfc2001.txt
Author's Address
Matt Mathis
email: mathis@psc.edu
Pittsburgh Supercomputing Center
4400 Fifth Ave.
Pittsburgh PA 15213