The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: TCP-IP Distribution List - Archives (1985)
DOCUMENT: TCP-IP Distribution List for June 1985 (266 messages, 74203 bytes)
SOURCE: http://securitydigest.org/exec/display?f=tcp-ip/archive/1985/06.txt&t=text/plain
NOTICE: securitydigest.org recognises the rights of all third-party works.

START OF DOCUMENT

-----------[000000][next][prev][last][first]----------------------------------------------------
Date:      Sat, 1-Jun-85 12:46:38 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   ARPANET/MILNET performance statistics

From: mills@dcn6.arpa

Folks,

Responding to Vint's request, here are some relevant data covering the
ARPANET/MILNET gateway performance. The data have been extracted from the
latest weekly report produced by BBN and cover only the ARPANET/MILNET
gateways, which represent only seven out of the 38 operational BBN core
gateways. (Who knows how many non-core gateways there are out there...)

These data cover just short of a six-day period and detail the average and
peak throughputs and loss rates. The totals shown
are for all of the 38 gateways. Comments follow the tables.

Total Throughput

GWY         RCVD           RCVD     IP       % IP         DEST   % DST
NAME        DGRAMS         BYTES    ERRORS  ERRORS       UNRCH   UNRCH
----------------------------------------------------------------------
MILARP   4,169,046   306,185,112       273   0.00%       7,153   0.17%
MILBBN   4,638,747   272,396,860       458   0.00%      30,045   0.65%
MILDCE   3,952,555   280,374,422       372   0.00%      23,747   0.60%
MILISI   5,282,635   624,869,302       779   0.01%      20,353   0.39%
MILLBL   2,896,764   175,123,126       143   0.00%       6,639   0.23%
MILSAC   2,765,136   157,981,916     1,122   0.04%      10,588   0.38%
MILSRI   2,133,985   117,968,018       169   0.00%      13,832   0.65%
----------------------------------------------------------------------
TOTALS  92,368,009 5,768,504,913 1,556,736   1.69%     190,545   0.21%

GWY         SENT           SENT    DROPPED    % DROPPED
NAME        DGRAMS         BYTES   DGRAMS        DGRAMS
-------------------------------------------------------
MILARP   4,146,989   295,751,188   101,471        2.39%
MILBBN   4,669,813   276,807,235   157,068        3.25%
MILDCE   3,942,271   284,077,034    59,404        1.48%
MILISI   5,138,585   577,311,096   247,222        4.59%
MILLBL   2,877,744   174,574,553    55,537        1.89%
MILSAC   2,792,073   165,159,590    13,393        0.48%
MILSRI   2,156,255   127,256,463    53,483        2.42%
-------------------------------------------------------
TOTALS  92,523,789 5,721,526,805 1,466,274        1.56%

Note that the load balancing, while not optimal, is not too bad. The data do
not show, of course, the extent of the double-hop inefficiencies pointed out
previously. The ARPANET/MILNET gateways see fewer IP errors than average, but
somewhat more broken networks and dropped packets than average.

======================================================

Mean Throughput (per second) and Size (bytes per datagram)

GWY         RCVD         RCVD       IP         AVG BYTES
NAME        DGRAMS       BYTES      ERRORS     PER DGRAM
--------------------------------------------------------
MILARP        8.14      597.90        0.00       73.44
MILBBN        9.06      531.92        0.00       58.72
MILDCE        7.72      547.50        0.00       70.93
MILISI       10.32     1220.21        0.00      118.29
MILLBL        5.66      341.97        0.00       60.45
MILSAC        5.40      308.50        0.00       57.13
MILSRI        4.17      230.36        0.00       55.28

GWY         SENT         SENT     DROPPED     AVG BYTES
NAME        DGRAMS       BYTES    DGRAMS      PER DGRAM
-------------------------------------------------------
MILARP        8.10      577.53        0.20       71.32
MILBBN        9.12      540.53        0.31       59.28
MILDCE        7.70      554.73        0.12       72.06
MILISI       10.03     1127.34        0.48      112.35
MILLBL        5.62      340.90        0.11       60.66
MILSAC        5.45      322.51        0.03       59.15
MILSRI        4.21      248.50        0.10       59.02

These values are way below the maximum throughput of the LSI-11 gateways
(about 200 packets/sec); however, the average size is very small relative to
the maximum ARPANET/MILNET packet size of 1007 octets. One would expect the
resource crunch to be the limited buffer memory available in the present
LSI-11 implementation. Note that BBN is working actively toward a dramatic
increase in available memory, as noted previously.

======================================================

Peak Throughput (sum of datagrams/sec, input + output,
	  time is time of data collection)

GWY          TOTAL            TIME               DROP          TIME
NAME         T'PUT           OF DAY              RATE          OF DAY
------------------------------------------------------------------------
MILARP       47.28         5/24 09:16           27.26%        5/25 22:04
MILBBN       39.53         5/23 15:32           20.70%        5/24 02:18
MILDCE       36.67         5/24 08:02           26.12%        5/24 17:59
MILISI       44.45         5/23 15:02           32.39%        5/21 16:08
MILLBL       37.76         5/22 19:43           34.91%        5/24 12:02
MILSAC       36.91         5/23 13:03            5.75%        5/21 08:53
MILSRI       22.78         5/24 08:47           24.89%        5/21 16:08

Even under peak loads the gateway horsepower is not particularly taxed;
however, the buffering is obviously suffering a good deal. The times of peak
throughputs do not seem to correlate with the times of peak drop rates,
which tends to confirm that most of the drops occur in bunches under
conditions approaching congestive collapse.

The instrumentation in our gateway between the ARPANET and four local nets,
some of which are connected by medium-speed (4800/9600 bps) lines, tends to
support the above observations and conclusions. We see intense spasms on the
part of some hosts (names provided upon request) which clearly are to blame
for almost all of the congestion observed here. These hosts apparently have
been optimized to operate well on local Ethernets with small delays and tend
to bombard the long-haul paths with vast numbers of retransmissions over very
short intervals. I would bet a wadge of packets against a MicroVAX-II that the
prime cause for the braindamage is ARP and the unfortunately common
implementation that loses the first data packet during the address-resolution
cycle. If this is fixed, I bet the tendency to err on the low side of
retransmission estimates would go away.
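
The fix is not hard to sketch. Below is a rough outline, in C, of the
hold-one-packet-per-entry approach; the structures and the commented-out
link_send() are hypothetical and not taken from any particular implementation.

/* Sketch: hold (rather than drop) the packet that triggered ARP
 * resolution and transmit it when the reply arrives.  All names here
 * (arp_entry, link_send, ...) are hypothetical. */
#include <stdlib.h>
#include <string.h>

struct packet { size_t len; unsigned char data[1500]; };

struct arp_entry {
    unsigned long ip;          /* protocol address being resolved        */
    int resolved;              /* nonzero once a hardware address known  */
    unsigned char hw[6];       /* resolved Ethernet address              */
    struct packet *pending;    /* one packet parked during resolution    */
};

/* Called when the host wants to send but has no hardware address yet. */
void arp_output(struct arp_entry *e, const struct packet *p)
{
    if (!e->resolved) {
        /* Park the newest packet instead of dropping it, so the
         * transport never sees a silent first-packet loss. */
        if (e->pending == NULL)
            e->pending = malloc(sizeof *e->pending);
        if (e->pending != NULL)
            *e->pending = *p;
        /* ...send or re-send the ARP request here... */
        return;
    }
    /* link_send(e->hw, p) would go here once resolved. */
}

/* Called when the ARP reply arrives. */
void arp_input(struct arp_entry *e, const unsigned char hw[6])
{
    memcpy(e->hw, hw, 6);
    e->resolved = 1;
    if (e->pending != NULL) {
        /* link_send(e->hw, e->pending); */
        free(e->pending);
        e->pending = NULL;
    }
}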

There are other causes of packet spasms that have been detailed in many of my
previous messages. Happily, some have gone away. Those remaining symptoms
indicate continuing inefficiencies in piggybacking and send/ack policies
leading to tinygram floods (with TELNET, in particular). The sad fact is that
these problems have been carefully documented and are not hard to fix;
however, it takes only a few bandits without these fixes to torpedo the entire
Internet performance.

Dave
-------

-----------[000001][next][prev][last][first]----------------------------------------------------
Date:      1 Jun 85 17:59:49 PDT
From:      Murray.pa@Xerox.ARPA
To:        "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
Cc:        Murray.pa@Xerox.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: MILNET/ARPANET performance
I think "adjusting the timers" would help more than you give it credit
for. From my experience, the single biggest problem on large networks is
retransmitting too soon and too often.

Most code gets debugged in a local environment that doesn't have
gateways dropping packets because the next phone line (or net or..) is
overloaded. People tighten down the timers to make things "work better".

Unfortunately, the sociology of this problem doesn't help to get it
fixed. If you increase your timeouts, you don't get any positive
rewards. It's only when almost everybody does it that anybody will get
the benefits. Even then, the finks that don't cooperate get as much
benefit as everybody else.

-----------[000003][next][prev][last][first]----------------------------------------------------
Date:      Sat 1 Jun 85 22:33:56-EDT
From:      Lixia Zhang <Lixia@MIT-XX.ARPA>
To:        Murray.pa@XEROX.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: MILNET/ARPANET performance
I would support Noel's view point that "adjusting the timer will not help
much" in an overloaded net.  Consider the following arguments:

- Timers, in general, are not shorter than a normal round-trip delay,
  even in the case as you mentioned, "most code gets debugged in a local
  environment".

- Therefore in most cases, a series of timeouts starts with jammed or
  lost packets.  This means that somewhere the net is being overloaded
  by the CURRENTLY offered data traffic.

- The window size = outstanding data = traffic load offered to the net.

- Therefore without reducing the window size
                                 -> no reduction on network load
                                          -> no help to the overloaded net.

- It is true that retransmitting too soon and too often will further damage
  the situation, but simply adjusting the timer to hold up retransmission
  longer will NOT resolve the congestion.
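
To put the window/load point in concrete (made-up) numbers, note that the
retransmission timer never appears in the steady-state load calculation;
only the window and the round-trip time do.  A trivial C illustration:

/* Illustration with hypothetical numbers: load offered to the net is set
 * by the window, not by the retransmission timer. */
#include <stdio.h>

int main(void)
{
    double window_bytes = 4096.0;  /* outstanding (unacknowledged) data   */
    double rtt_seconds  = 2.0;     /* round-trip time through the bridges */

    /* Steady state: one window's worth of data per round trip.           */
    printf("offered load  = %.0f bytes/sec\n", window_bytes / rtt_seconds);

    /* Lengthening the timer removes duplicates but leaves the window
     * outstanding, so the figure above is unchanged; halving the window
     * halves it.                                                         */
    printf("halved window = %.0f bytes/sec\n",
           (window_bytes / 2.0) / rtt_seconds);
    return 0;
}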

Lixia
-------
-----------[000005][next][prev][last][first]----------------------------------------------------
Date:      2 Jun 1985 05:04-EDT
From:      CERF@USC-ISI.ARPA
To:        mills@DCN6.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
Dave,

Thanks very much for a most helpful summary. One thing which I note
about the MILISI gateway, in addition to its having far more traffic
than most others, is that its packet size (per datagram) is larger
than a single packet message, assuming the avg bytes you show do NOT
include all the TCP/IP header bytes which must fit inside the text
of a single packet message. If memory serves, these messages have room
for at most 1008 bits or 126 bytes, some of which have to be given over
to IP or TCP header.

If I'm correct in this analysis, the MILISI gateway is injecting a good
deal of multi-packet traffic which puts a potential strain on the 
current imp end/end protocol. I note that MILISI has a higher rate of
datagram dropping - one might guess this is a result of increased
buffer demands and possibly increased incidence of timeouts for multipacket
transmission permission?

Dave is right about the need to fix those wayward implementations which
try to treat all nets as if they are identical in performance. To borrow
from Jack Haverty: You can fool some of the gateways all of the time and
all of the gateways, some of the time, but you can't...

The whole SYSTEM has to be thought of as a system and all parts need
to meet some standards of performance and adaptation.  If this is
not achievable (and it's a big challenge) then nets and gateways need
to find ways to cut off identifiable abusers.

Vint
-----------[000006][next][prev][last][first]----------------------------------------------------
Date:      2 Jun 1985 13:22:55 PDT
From:      POSTEL@USC-ISIF.ARPA
To:        TCP-IP@SRI-NIC.ARPA
Subject:   re: MILNET/ARPANET Performance

Folks:

I think that adjusting your timers may still have a big effect on your
own performance.  Looking at the numbers Dave Mills forwarded from the
Gateway monitoring data collected by BBN one can see that the typical
gateway is receiving about 5 to 10 datagrams per second (maybe 20
datagrams per second during the peak hour of the week).  If one is
sending retransmissions at the rate of one per second then one is
contributing about 10% to 20% of the load on the gateway (maybe only
5% at the gateway's busiest time).  I think these numbers are still
big enough that one's own traffic is not totally lost in the vast sea
of traffic contributed by others.  I think there is not as much going
on in the network as we commonly assume, and I think that one still has
a little bit of leverage on influencing the destiny of one's own
datagrams.
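
For concreteness, the arithmetic behind those percentages, with round
numbers only:

/* One retransmission per second against typical gateway arrival rates. */
#include <stdio.h>

int main(void)
{
    double retrans_pps = 1.0;                 /* one retransmission per second */
    double gw_pps[]    = { 5.0, 10.0, 20.0 }; /* typical and peak-hour rates   */
    int i;

    for (i = 0; i < 3; i++)
        printf("at %2.0f datagrams/sec the host contributes %4.1f%% of the load\n",
               gw_pps[i], 100.0 * retrans_pps / gw_pps[i]);
    return 0;
}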

--jon.
-------
-----------[000009][next][prev][last][first]----------------------------------------------------
Date:      Monday,  3 Jun 1985 10:59-PDT
From:      imagen!geof@SU-SHASTA.ARPA
To:        shasta!tcp-ip@sri-nic
Subject:   Re: MILNET/ARPANET performance

To summarize the last few messages:

	1. Currently, many hosts retransmit too often.  This is
	  a major source of congestion, which can be alleviated by
	  forcing hosts to use better algorithms for their timeouts,
	  including (but not limited to) longer initial timeouts.

	2. After we do this (if we can do this), congestion in the
	  network will still be a problem, which according to
	  Lixia Zhang's and Noel Chiappa's arguments, can only be
	  solved by controlling the entry of packets into the internet.

Clearly item 1 is important, and easier to carry out.  Item 2 is an
equally valid problem.

- Geof Cooper
-----------[000010][next][prev][last][first]----------------------------------------------------
Date:      Monday,  3 Jun 1985 11:23-PDT
From:      imagen!geof@Berkeley
To:        tcp-ip@sri-nic
Subject:   Re: Floods of tinygrams from telnet hosts

The worst offenders behind the tinygram explosions seen by Dave Mills are
probably Unix systems running server telnet.  Every Unix TCP
implementation I've seen (save one) has the characteristic that packets
are sent whenever a unix WRITE(I) call is made.  In many applications,
this happens once per character.  Unix also makes it very difficult to
heuristically combine this flood of characters into larger packets
(except on retransmission), through a combination of factors including
interfaces that were designed for blocking system calls, and incredibly
poor resolution of application-level timers.

The one implementation I've seen that solves this problem uses the
algorithm that John Nagle of Ford Aerospace developed.  My
understanding of it is (John, please correct any inaccuracies) that a
TCP implementation will emit a new packet only under the following
situations:
	- Its internal buffers are full (i.e., it is ready to send a
	  full sized packet)
	- All outstanding data it has sent has been acknowledged by the
	  remote TCP.
When communicating over a local net, this algorithm works fine for
telnet, since the intercharacter time is typically less than the
connection round trip time.  Whenever the intercharacter time of the TCP
client becomes greater than the round trip time, the algorithm
naturally divides the data to be sent into equally sized packets, based
on the ratio of intercharacter time to round trip time.

In the case of FTP-like connections, the algorithm degenerates into the
current behavior, since the internal buffers are always full.
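
For the curious, here is a minimal C sketch of that send test as I
understand it; the connection structure is hypothetical and this is not
code from any existing TCP.

/* Send test in the spirit of Nagle's rule: emit a segment only if it is
 * full sized or if nothing sent remains unacknowledged. */
#include <stddef.h>

struct conn {
    size_t queued;       /* bytes buffered but not yet put on the wire */
    size_t unacked;      /* bytes sent but not yet acknowledged        */
    size_t max_segment;  /* largest segment this connection may send   */
};

/* Return nonzero if TCP should emit a segment now. */
int ok_to_send(const struct conn *c)
{
    if (c->queued >= c->max_segment)   /* a full-sized segment is ready */
        return 1;
    if (c->unacked == 0)               /* everything sent is acked      */
        return 1;
    return 0;                          /* otherwise wait for the ACK    */
}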

Will someone please implement this on a 4.2 unix?  Then maybe I'll be
able to get decent response from APL when I telnet into the 4.2 machine
on the local ethernet!

- Geof
-----------[000011][next][prev][last][first]----------------------------------------------------
Date:      3 Jun 1985 10:49:57 EDT
From:      MILLS@USC-ISID.ARPA
To:        Lixia@MIT-XX.ARPA, Murray.pa@XEROX.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re: MILNET/ARPANET performance
In response to the message sent  Sat 1 Jun 85 22:33:56-EDT from Lixia@MIT-XX.ARPA

Lixia,

A large number of hosts have been observed here using initial retransmission
timeouts in the one-to-two second range, which has been repeatedly noted as
being too short (see RFC-889). When a couple of these WMWs gang up on a
busy gateway, instant congestion occurs and doesn't go away until the
hosts time out the ACK for their SYN, usually a minute or so. The SYNfull
gateway meanwhile is dropping lots of packets for other clients, who
themselves are ratcheting the retransmission-timeout estimate upwards.
The system is obviously unstable, even when the gateway was comfortably
underloaded to begin with. All it takes is a pulse of traffic sufficient
to topple the gateway over its buffer limit.

In other words, your argument has great merit; however the assumption that
retransmission timeouts are always longer than the roundtrip time is not
correct for many players in this circus.
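
For comparison, a more conservative estimator is not much code. Below is a
rough C rendering of the RFC-793 smoothing formula plus a floor and
exponential backoff; the constants are mine and purely illustrative.

/* Smoothed round-trip estimator with backoff; constants illustrative. */
#include <stdio.h>

#define ALPHA  0.875   /* smoothing gain   */
#define BETA   2.0     /* variance factor  */
#define LBOUND 3.0     /* floor, seconds   */
#define UBOUND 60.0    /* ceiling, seconds */

struct rto_state {
    double srtt;       /* smoothed round-trip time */
    double rto;        /* current timeout          */
};

/* Fold in a measurement taken from an ACK of unretransmitted data. */
void rto_measure(struct rto_state *s, double rtt)
{
    s->srtt = ALPHA * s->srtt + (1.0 - ALPHA) * rtt;
    s->rto  = BETA * s->srtt;
    if (s->rto < LBOUND) s->rto = LBOUND;
    if (s->rto > UBOUND) s->rto = UBOUND;
}

/* Back off on each timeout instead of hammering the path. */
void rto_timeout(struct rto_state *s)
{
    s->rto *= 2.0;
    if (s->rto > UBOUND) s->rto = UBOUND;
}

int main(void)
{
    struct rto_state s = { 0.5, LBOUND };   /* start from a LAN-ish guess */
    double samples[] = { 0.4, 2.5, 4.0 };   /* then the path gets slower  */
    int i;

    for (i = 0; i < 3; i++) {
        rto_measure(&s, samples[i]);
        printf("rtt=%.1fs  srtt=%.2fs  rto=%.1fs\n", samples[i], s.srtt, s.rto);
    }
    rto_timeout(&s);
    printf("after one timeout, rto=%.1fs\n", s.rto);
    return 0;
}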

Dave
-------
-----------[000012][next][prev][last][first]----------------------------------------------------
Date:      3 Jun 1985 10:59:37 EDT
From:      MILLS@USC-ISID.ARPA
To:        CERF@USC-ISI.ARPA, mills@DCN6.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
In response to the message sent  2 Jun 1985 05:04-EDT from CERF@USC-ISI.ARPA

Vint,

The datagram sizes shown in my data include the IP and TCP headers, so a
lot more gateways than just ISI are chirping multi-packet messages. From
previous reports, I would not count the ISI data as typical - there might
be something special going on there. I'll alert the field operatives...

Dave
-------
-----------[000014][next][prev][last][first]----------------------------------------------------
Date:      Mon 3 Jun 85 11:55:40-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        Murray.pa@XEROX.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA, JNC@MIT-XX.ARPA
Subject:   Re: MILNET/ARPANET performance
	How true that one can ruin it for all. One definite facet of
congestion control (when we eventually implement it) is server penalization
of hosts that don't obey the rules. Gotta have some feedback in the
system to encourage people to fix lossage.

	Noel
-------
-----------[000015][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Jun 85 12:40:10 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        CERF@USC-ISI.ARPA
Cc:        mills@DCN6.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPANET/MILNET performance statistics
One observation:  MILISI is the gateway that EGP tells my gateways to
use for all the nets on the other side of the bridges.  Assuming this is
true for both sides of the house, it would seem that this is the gateway
used for people's little ethernets to talk through to get to the
other side. Most local nets have higher MTUs than the IMP's 1008
(typically 1536). This results in their gateways sending lots of
fragmented datagrams all over the place. Since most of the LANs
choose the MIL/ARPA bridge from the EGP information, ISI may bear the
brunt of the fractured ethernet traffic. IP fragmentation is admittedly
a bad thing to do on a regular basis.  BRL avoids this by keeping most
of our LAN's at 1008 (for historical reasons mostly, the gateways
originally didn't know how to fragment).
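
For illustration (assuming a plain 20-byte IP header with no options), the
arithmetic of what happens to one 1536-byte LAN datagram at a 1008-byte
MTU looks like this:

/* Fragment a datagram at an MTU; all but the last fragment must carry a
 * multiple of 8 data octets. */
#include <stdio.h>

#define IPHDR 20

int main(void)
{
    int datagram = 1536;                     /* total length off the LAN  */
    int mtu      = 1008;                     /* ARPANET/MILNET size limit */
    int per_frag = ((mtu - IPHDR) / 8) * 8;  /* data octets per fragment  */
    int data     = datagram - IPHDR;         /* transport data to carry   */
    int n        = 0;

    while (data > 0) {
        int take = data > per_frag ? per_frag : data;
        printf("fragment %d: %d data + %d header = %d bytes\n",
               ++n, take, IPHDR, take + IPHDR);
        data -= take;
    }
    return 0;
}

So every such datagram becomes a near-full fragment plus a roughly
half-sized one, doubling the packet count through the bridge.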

-Ron
-----------[000018][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Jun 85 14:33:44 EDT
From:      Marianne Gardner <mgardner@BBNCCY.ARPA>
To:        MILLS@usc-isid.arpa
Cc:        CERF@usc-isi.arpa, mills@dcn6.arpa, tcp-ip@sri-nic.arpa, mgardner@BBNCCY.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
Dave,

I must disagree with both you and Vint.  It is typical for ISI to have an
average datagram size over 100 bytes.  ISI does not have a disproportionate
amount of traffic and drops fewer packets than most of the mailbridges.

Marianne

-----------[000019][next][prev][last][first]----------------------------------------------------
Date:      3 Jun 1985 14:41:36 EDT
From:      INCO@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Cc:        protocols@RUTGERS.ARPA
Subject:   DECNET issues

     I am interested in hearing from anyone who has used or worked with DECNET
as regards performance, applications, functionality, etc.  Specifically,
I am interested in any documentation on these issues over and above
what is covered in the literature provided by DECNET, or opinions by
individuals who have worked with DECNET itself.  Thank you.

Steve Sutkowski
Inco at Usc-Isid
-------
-----------[000021][next][prev][last][first]----------------------------------------------------
Date:      3 Jun 1985 15:33:00 EDT
From:      MILLS@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Multiple homes considered taxable
Folks,

I just caught NIC sending mail to local-host 128.4.0.9 off our ARPANET gateway
from its MILNET address, rather than its ARPANET address. Certainly this 
clutters up the ARPANET/MILNET gateways, as well as offends the Principle of
Least Astonishment; however, I can easily see how this can come about, since
hosts don't know about gateway hops. Our GADS task force should chew on this 
one.

Dave
-------
-----------[000022][next][prev][last][first]----------------------------------------------------
Date:      3 Jun 1985 15:47:08 EDT
From:      MILLS@USC-ISID.ARPA
To:        mgardner@BBNCCY.ARPA
Cc:        CERF@USC-ISI.ARPA, mills@DCN6.ARPA, tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
In response to your message sent  Mon, 3 Jun 85 14:33:44 EDT

Marianne,

Your comments do not jibe with the data in my message - MILISI leads the pack
in throughput, size and almost all categories of breakage. You might have taken
umbrage if I tattled on ISI (as against MILISI), but my message clearly
addressed the mailbridges only.

Dave
-------
-----------[000023][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Jun 85 16:13:14 EDT
From:      Marianne Gardner <mgardner@BBNCCY.ARPA>
To:        MILLS@usc-isid.arpa
Cc:        mgardner@BBNCCY.ARPA, CERF@usc-isi.arpa, mills@dcn6.arpa, tcp-ip@sri-nic.arpa
Subject:   Re: ARPANET/MILNET performance statistics
Dave,

Your data suffer from taking too small a sample.  Yes, last week MILISI had
more traffic than the other bridges, but this is not the usual case, as the
summaries below will show.  Let me say again that if you look at the trap
reports and throughput summaries for the last few months, you will not find
that MILISI has more problems than the other mailbridges.
Sorry if my shortening the name of MILISI confused you.

The data that follow cover 7-day periods, from Monday to the following
Sunday.  The date indicated is the Sunday that ends the collection period.



date: 4/14 (Sun)

GTWY           HOURS      RCVD        DST     RCVD PPS   SENT PPS   AVG
              COVERED    DGRAMS      UNRCH    DGRAMS     DGRAMS    BYTES

    MILARP    162.00    5,491,493     0.09    9.42        9.24   129.35
    MILDCE    162.00    5,445,388     0.72    9.34        9.24    70.18
    MILISI    162.00    4,976,827     0.58    8.53        8.48   117.84
    MILLBL    162.25    3,800,470     2.64    6.51        6.11    65.16
    MILSAC    162.00    2,591,494     0.17    4.44        4.48    63.04
    MILSRI    162.00    2,162,412     2.08    3.71        3.70    59.10



date: 4/21 (Sun)

GTWY           HOURS      RCVD        DST     RCVD PPS   SENT PPS   AVG
              COVERED    DGRAMS      UNRCH    DGRAMS     DGRAMS    BYTES

    MILARP    155.50    4,184,138     0.29    7.47        7.44    79.09
    MILISI    158.75    4,914,047     0.58    8.60        8.66   114.79
    MILLBL    158.75    3,934,242     0.59    6.88        6.76    78.86
    MILSAC    158.75    2,667,978     0.81    4.67        4.70    61.66
    MILSRI    158.75    2,051,680     1.59    3.59        3.62    59.14



date: 5/5 (Sun)

GTWY           HOURS      RCVD        DST     RCVD PPS   SENT PPS   AVG
              COVERED    DGRAMS      UNRCH    DGRAMS     DGRAMS    BYTES
    MILARP    164.25    4,550,474     1.40    7.70        7.75    74.89
    MILBBN    164.25    5,622,661     0.67    9.51        9.56    60.07
    MILDCE    161.75    4,467,564     0.94    7.67        7.66    77.32
    MILISI    164.25    5,190,179     0.46    8.78        8.64   115.16
    MILLBL    164.25    3,797,215     0.41    6.42        5.94    63.03
    MILSAC    164.25    3,120,124     0.49    5.28        5.21    68.68
    MILSRI    164.25    2,058,651     0.42    3.48        3.57    65.51



date: 5/12 (Sun)

GTWY           HOURS      RCVD        DST     RCVD PPS   SENT PPS   AVG
              COVERED    DGRAMS      UNRCH    DGRAMS     DGRAMS    BYTES

    MILARP    161.25    4,765,258     0.34    8.21        8.19    74.53
    MILBBN    161.25    4,809,526     0.81    8.29        8.38    60.88
    MILDCE    161.25    4,643,647     0.56    8.00        7.96    78.24
    MILISI    161.25    5,059,553     0.50    8.72        8.65    96.73
    MILLBL    161.25    3,501,986     0.30    6.03        5.93    62.55
    MILSAC    161.25    3,073,519     0.49    5.29        5.22    66.44
    MILSRI    161.25    1,994,121     0.17    3.44        3.54    66.18



date: 5/19 (Sun)

GTWY           HOURS      RCVD        DST     RCVD PPS   SENT PPS   AVG
              COVERED    DGRAMS      UNRCH    DGRAMS     DGRAMS    BYTES

    MILARP    162.00    4,442,049     0.23    7.62        7.62    72.99
    MILBBN    161.75    5,030,583     0.81    8.64        8.78    61.53
    MILDCE    159.50    4,428,993     0.73    7.71        7.67    90.27
    MILISI    162.00    4,866,970     0.55    8.35        8.37   105.50
    MILLBL    162.00    3,103,610     0.15    5.32        5.23    64.42
    MILSAC    162.00    2,990,150     0.50    5.13        5.17    64.61
    MILSRI    162.00    2,046,546     0.74    3.51        3.59    57.72



date: 5/26 (Sun)

GTWY           HOURS      RCVD        DST     RCVD PPS   SENT PPS   AVG
              COVERED    DGRAMS      UNRCH    DGRAMS     DGRAMS    BYTES

    MILARP    142.25    4,169,046     0.17    8.14        8.10    71.32
    MILBBN    142.25    4,638,747     0.65    9.06        9.12    59.28


-----------[000025][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Jun 85 16:27:54 edt
From:      James O'Toole <james@gyre>
To:        cerf@usc-isi
Cc:        tcp-ip@nic
Subject:   Re: ARPANET/MILNET performance statistics
Vint,

Your message said "1008 bits or 126 bytes" but the IMP really
handles 1008 *bytes*.  I didn't notice that the average packet
sizes were ever higher than that, which would surprise me.  Am
I missing the point here, or did you mix your units?

  --Jim

P.S. Hmmm, isn't that really 1006, not 1008?
-----------[000026][next][prev][last][first]----------------------------------------------------
Date:      Mon 3 Jun 85 16:36:17-EDT
From:      Ken Rossman <sy.Ken@CU20B.ARPA>
To:        INCO@USC-ISID.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        protocols@RUTGERS.ARPA
Subject:   Re: DECNET issues
We run a fairly extensive DECnet here at Columbia University, which also
stretches to various other schools.  We also run TCP/IP.  The two nets
actually complement each other in functionality, and they are both useful
to have around for different reasons.  If you can be more specific about
the types of questions you want answered, I can be more specific in my
reply to your queries.  /Ken
-------
-----------[000028][next][prev][last][first]----------------------------------------------------
Date:      Mon, 3 Jun 85 18:23:08 CDT
From:      Mike Caplinger <mike@rice.ARPA>
To:        tcp-ip@sri-nic.ARPA
Does somebody have a 4.2 UDP time setting program, and a daemon for
same?  I'm sure the availability of the former would lessen the load on
DCN fuzzballs.

Also, SMI take note; I assume the Sun rdate is TCP-based.  Maybe there
should be a UDP version.
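
As a starting point, here is a minimal sketch of such a client in C.  It
speaks the time service on UDP port 37, which returns a 32-bit count of
seconds since 1900; the server address is only a placeholder, error
handling is thin, and actually setting the system clock is left out.

/* Minimal UDP time-service client: send an empty datagram to port 37,
 * read back 32 bits of seconds-since-1900, convert to the Unix epoch. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define EPOCH_OFFSET 2208988800UL  /* seconds from 1900 to the Unix epoch */

int main(int argc, char **argv)
{
    const char *server = (argc > 1) ? argv[1] : "127.0.0.1";  /* placeholder */
    struct sockaddr_in sin;
    uint32_t net_time;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s < 0) { perror("socket"); return 1; }
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(37);                /* time service */
    sin.sin_addr.s_addr = inet_addr(server);

    /* An empty datagram solicits the reply. */
    if (sendto(s, "", 0, 0, (struct sockaddr *)&sin, sizeof sin) < 0) {
        perror("sendto"); return 1;
    }
    if (recvfrom(s, &net_time, sizeof net_time, 0, NULL, NULL) < 0) {
        perror("recvfrom"); return 1;
    }
    printf("seconds since the Unix epoch: %lu\n",
           (unsigned long)(ntohl(net_time) - EPOCH_OFFSET));
    close(s);
    return 0;
}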

        - Mike
-----------[000032][next][prev][last][first]----------------------------------------------------
Date:      03 Jun 85 22:45:34 EST (Mon)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        Mike Caplinger <mike@rice.arpa>
Cc:        tcp-ip@sri-nic.arpa
I have such a pair of programs; I just dusted off my TCP versions and
made them mumble UDP. Dave should be much happier now.

I'll put them on purdue-merlin for anonymous FTP, in the file
pub/dated.flar.

Cheers,
chris
----------
-----------[000034][next][prev][last][first]----------------------------------------------------
Date:      3 Jun 1985 22:27:11 EDT
From:      MILLS@USC-ISID.ARPA
To:        mgardner@BBNCCY.ARPA
Cc:        CERF@USC-ISI.ARPA, mills@DCN6.ARPA, tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
In response to your message sent  Mon, 3 Jun 85 16:13:14 EDT

Marianne,

I surely didn't mean to start a bluster here. Vint's original comment
addressed the curiosity that MILISI datagrams were fatter than the others.
Your data show this to be true in four of the five samples. My comment that
MILISI exhibited more breakage than the others was not intended as a blanket
indictment and, in fact, is not justified in view of your data. My suspicion
that ISI is indeed special is confirmed both by the consistently high
datagram size and by the increasing share of the system load, going from
number four at the beginning of your sample history consistently upwards to
number one at the end. MILISI is again number one this week in both departments.

My aside to Vint that I would "alert the field operatives" has come to pass and
you have broken cover. Your next assignment, should you choose to accept it, is
to tell us why.

Dave
-------
-----------[000035][next][prev][last][first]----------------------------------------------------
Date:      03 Jun 85 23:35:09 EST (Mon)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   Speaking of 4.2 time programs
Has anyone made the modifications to 4.2 to slave a system's clock to a
Fuzzball host?

chris
----------
-----------[000040][next][prev][last][first]----------------------------------------------------
Date:      4 Jun 1985 1116-PDT (Tuesday)
From:      Jeff Mogul <mogul@Navajo>
To:        Ron Natalie <ron@BRL.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Fragging ARPAnet gateways
					IP fragmentation is admittedly
    a bad thing to do on a regular basis.  BRL avoids this by keeping most
    of our LAN's at 1008 (for historical reasons mostly, the gateways
    originally didn't know how to fragment).

One of the less successful performance "improvements" in 4.2BSD
was that the TCP tried to use 1024 byte segments, since this
allowed 4.2 to use page-remapping techniques instead of copying.
However, this is really a complete loss in our environment, where
we had Vaxen connected to a 3Mb ether which is gatewayed to the
ARPAnet (the current hardware situation is a little different).

FTPing from another 4.2 machine across the ARPAnet would seldom
work, because those 1024-byte segments turned into an almost-1024-
byte fragment followed by a tinygram.  The local gateway dumps the
tinygram onto the ethernet almost immediately following the first
fragment, and the Vax interface (no buffering) drops back-to-back
packets, i.e., the tinygram.  This happens over and over, so even
though 90% of the bytes are getting through, the segment is never
acked and the connection fizzles.

We solved this by hacking the TCP code to force 576-byte segments
unless it is sure that no gateway is in the path, in which
case it uses the LAN MTU.  This works fine, and we can keep our
LAN MTUs high (1500 bytes) to reduce the packet counts in the
dominant case.
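
A rough C sketch of that segment-size choice is below; the classful
same-network test is naive and the addresses are purely illustrative.

/* Choose a TCP segment size: local MTU when no gateway is in the path,
 * otherwise fall back to the 576-byte datagram every host must accept. */
#include <stdio.h>
#include <stdint.h>

#define IP_TCP_HEADERS   40   /* 20-byte IP header + 20-byte TCP header */
#define DEFAULT_DATAGRAM 576

/* Crude "same network?" test on class A/B/C boundaries. */
static uint32_t network_of(uint32_t addr)
{
    if ((addr >> 31) == 0) return addr & 0xff000000u;  /* class A */
    if ((addr >> 30) == 2) return addr & 0xffff0000u;  /* class B */
    return addr & 0xffffff00u;                         /* class C */
}

int choose_segment_size(uint32_t src, uint32_t dst, int local_mtu)
{
    if (network_of(src) == network_of(dst))
        return local_mtu - IP_TCP_HEADERS;     /* no gateway in the path */
    return DEFAULT_DATAGRAM - IP_TCP_HEADERS;  /* play it safe: 536      */
}

int main(void)
{
    uint32_t a = 0x24000009, b = 0x24000010, c = 0x0a000001;
    printf("same net: %d bytes\n", choose_segment_size(a, b, 1500));
    printf("via gwy : %d bytes\n", choose_segment_size(a, c, 1500));
    return 0;
}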
-----------[000041][next][prev][last][first]----------------------------------------------------
Date:      04 Jun 85 08:59:46 EDT (Tue)
From:      Mike Brescia <brescia@bbnccv>
To:        James O'Toole <james@gyre>
Cc:        tcp-ip@nic, brescia@bbnccv
Subject:   arpanet/milnet packet and message size
 - arpanet message size

Your local arpanet/milnet imp will accept messages from your host up to
1008-octets-less-one-bit (8063 bits) beyond the AHIP (arpanet host-imp protocol)
leader.  For those sites which use interfaces that handle data in units of 2
octets (vax or pdp11 and lhdh interface), this sets a practical limit of 1006
bytes.

 - significance of 126 octets

The arpanet imps handle data internally and forward them among themselves
using buffers and packets of "1008 bits or 126 bytes".  Also, messages 126
octets or shorter are treated differently in the imp end-to-end exchanges
(exchanges between the imp connected to the source host and that at the
destination).  Briefly, the "single packet" messages (126 or shorter) are sent
directly, while longer "multi-packet" messages are sent only after the source
imp has sent a short allocation request to the destination imp and gotten a
reply.  This is one reason why a data measurement a few years ago showed a
throughput peak at packet size of 126, then smaller throughput until the
packet size was increased beyond 4*126 octets.
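
To make the boundary concrete, here is a small C illustration of how many
126-octet imp packets a message of a given length occupies, and whether it
is sent directly or waits for the allocation exchange:

/* Single-packet messages (126 octets or less) go directly; longer ones
 * wait for the source imp's allocation request to be answered. */
#include <stdio.h>

#define IMP_PACKET 126   /* data octets per imp packet */

int main(void)
{
    int sizes[] = { 41, 126, 127, 576, 1006 };
    int i;

    for (i = 0; i < 5; i++) {
        int packets = (sizes[i] + IMP_PACKET - 1) / IMP_PACKET;
        printf("%4d octets -> %d packet(s), %s\n", sizes[i], packets,
               packets == 1 ? "sent directly"
                            : "allocation request precedes it");
    }
    return 0;
}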

The imp end-to-end algorithms are currently being redesigned to relax these
allocation delays, allow more than 8 messages at a time between hosts or
gateways (mentioned in some other notes), and remove some other causes of host
blocking where the imp delays accepting data from the host.
-----------[000042][next][prev][last][first]----------------------------------------------------
Date:      Tue 4 Jun 85 12:25:58-PDT
From:      Mark Crispin <Crispin@SUMEX-AIM.ARPA>
To:        james@GYRE.ARPA
Cc:        cerf@USC-ISI.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
Jim -

     An 1822 packet is 1008 bits, which is not only 126 bytes
but also 28 36-bit words.  Hosts never see packets; they see
messages, which can be up to 8159 bits (a 96-bit leader plus 8
packets full of data, minus 1 bit: 96 + 8*1008 - 1 = 8159).

     Many people erroneously confuse messages with packets, and
unfortunately a lot of host software does the same.

-- Mark --
-------
-----------[000044][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4-Jun-85 15:21:45 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  Fragging ARPAnet gateways

From: Jeff Mogul <mogul@Navajo>

					IP fragmentation is admittedly
    a bad thing to do on a regular basis.  BRL avoids this by keeping most
    of our LAN's at 1008 (for historical reasons mostly, the gateways
    originally didn't know how to fragment).

One of the less successful performance "improvements" in 4.2BSD
was that the TCP tried to use 1024 byte segments, since this
allowed 4.2 to use page-remapping techniques instead of copying.
However, this is really a complete loss in our environment, where
we had Vaxen connected to a 3Mb ether which is gatewayed to the
ARPAnet (the current hardware situation is a little different).

FTPing from another 4.2 machine across the ARPAnet would seldom
work, because those 1024-byte segments turned into an almost-1024-
byte fragment followed by a tinygram.  The local gateway dumps the
tinygram onto the ethernet almost immediately following the first
fragment, and the Vax interface (no buffering) drops back-to-back
packets, i.e., the tinygram.  This happens over and over, so even
though 90% of the bytes are getting through, the segment is never
acked and the connection fizzles.

We solved this by hacking the TCP code to force 576-byte segments
unless it is sure that no gateway is in the path, in which
case it uses the LAN MTU.  This works fine, and we can keep our
LAN MTUs high (1500 bytes) to reduce the packet counts in the
dominant case.
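
In outline, the choice looks something like this (hypothetical names, not the
actual 4.2BSD code):

    #define IP_TCP_HEADERS   40   /* 20-byte IP header + 20-byte TCP header */
    #define DEFAULT_IP_MTU  576   /* datagram size every host must accept   */

    /* Use the full interface MTU only when no gateway can be in the path,
     * i.e. the destination is on a directly connected network; otherwise
     * fall back to 576-byte datagrams (536 bytes of TCP data). */
    int choose_segment_size(int dst_is_on_local_net, int if_mtu)
    {
        if (dst_is_on_local_net)
            return if_mtu - IP_TCP_HEADERS;       /* e.g. 1500 - 40 = 1460 */
        return DEFAULT_IP_MTU - IP_TCP_HEADERS;   /* 576 - 40 = 536        */
    }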

-----------[000045][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Jun 85 16:17:58 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Christopher A Kent <cak@PURDUE.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Speaking of 4.2 time programs
Perhaps all the MILNET/ARPANET loading is coming from everybody trying
to use DCN machines to set their clocks.

-Ron
-----------[000047][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Jun 85 16:24:09 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Jeff Mogul <mogul@SU-NAVAJO.ARPA>
Cc:        Ron Natalie <ron@BRL.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Fragging ARPAnet gateways
Yes, we realized this too.  Our gateway had slight software modifications
to cope with it.  Fortunately the Ethernet boards have a little buffering
(at least the Interlan ones do); unfortunately, some of the other
strange devices we use here did not.

-Ron
-----------[000048][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Jun 85 16:30:36 EDT
From:      Mike Muuss <mike@BRL.ARPA>
To:        imagen!geof@su-shasta.ARPA
Cc:        tcp-ip@sri-nic.ARPA
Subject:   Re:  Floods of tinygrams from telnet hosts
Berkeley's latest TCP (which will eventually be released as 4.3 BSD)
is improved in this regard;  it also listens to ICMP source quenches.
	-M
-----------[000050][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Jun 85 19:36:20 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        James O'Toole <james@GYRE.ARPA>
Cc:        cerf@USC-ISI.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPANET/MILNET performance statistics
Actually IMP packets are 1008 bits, IMP regular messages (what we know as packets)
are from 96 to 8159 bits.  Subtracting 96 bits off for the IMP leader leaves
8063 bits which is 1007 octets and 7 bits left over (a septet, I suppose).

-Ron

-----------[000051][next][prev][last][first]----------------------------------------------------
Date:      Tue, 4 Jun 85 20:01 EDT
From:      "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Fragmentation
What will it take to define some protocol for establishing reasonable
segment lengths?  Comments that appear every month or so seem to cry out
for some IP/ICMP do-dah by which two hosts can get a good guess of the
fragmentation situation in between each other.

For a start, any TCP/IP where TCP can't ask for the hardware MTU of the
most likely routing to a given address should be taken out and shot.

After that, perhaps some sort of IP option could be added to which any
gateway involved would add its hardware MTU.  An exchange of these at
the beginning of a connection would improve the picture no end.
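
No such option exists, but its shape is easy to imagine.  A purely hypothetical
sketch (all names invented for illustration):

    #include <stdint.h>

    /* The sender starts the value at its own interface MTU; every gateway
     * along the path folds in the MTU of the link it forwards onto.  The
     * receiver ends up knowing the smallest MTU on the path and can tell
     * its peer what segment size to use. */
    struct mtu_probe_option {
        uint8_t  type;       /* (hypothetical) option number      */
        uint8_t  length;     /* total option length in octets     */
        uint16_t path_mtu;   /* running minimum of the MTUs seen  */
    };

    void gateway_note_mtu(struct mtu_probe_option *opt, uint16_t next_hop_mtu)
    {
        if (next_hop_mtu < opt->path_mtu)
            opt->path_mtu = next_hop_mtu;
    }
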
-----------[000052][next][prev][last][first]----------------------------------------------------
Date:      5 Jun 85 00:31:03 PDT
From:      Murray.pa@Xerox.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Cc:        Murray.pa@Xerox.ARPA
Subject:   Re: Fragging ARPAnet gateways
There is another hack that's worth remembering if anybody is having
troubles catching back-to-back packets: Slow down the transmitter. Some
of our Ethernet drivers do that with only a few words of microcode. On
the first (re)transmission, set the collision timer to 1ms.

There are obviously variations and optimizations. The point is that it's
trivial to implement and a small delay won't be noticed by most
applications but it can be a lifesaver in a few critical cases.

Credit where it's due dept: Ed Taft came up with this trick when our
Alto based file servers couldn't keep up with our Dorados.
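
Expressed as an explicit inter-frame gap rather than the collision-timer trick
in the microcode (hypothetical driver code, not the Xerox implementation), the
idea looks something like this:

    #define BACK_TO_BACK_GAP_USEC 1000   /* about 1 ms */

    struct tx_request {
        const void *frame;
        int         length;
        int         defer_usec;   /* hold the frame back this long */
    };

    /* If the previous frame went out a moment ago, ask the start routine
     * to hold this one back by about a millisecond so an unbuffered
     * receiver has time to drain its single buffer. */
    void queue_for_transmit(struct tx_request *req, int back_to_back)
    {
        req->defer_usec = back_to_back ? BACK_TO_BACK_GAP_USEC : 0;
        /* ...hand req to the interface start routine... */
    }
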
-----------[000053][next][prev][last][first]----------------------------------------------------
Date:      04 Jun 85 22:46:09 EDT (Tue)
From:      Dennis Rockwell <drockwel@CSNET-SH.ARPA>
To:        "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Fragmentation
A solution that I used in the later BBN 4.1 TCP/IP implementation was for
the IP layer to pass to TCP the size of the largest fragment which went to
make up the current IP packet.  TCP then unilaterally restricted its segment
size to twice the fragment size (less an appropriate fudge factor).  This
produces no tinygrams, although interfaces that can't handle back-to-back
packets are still in trouble.

Yes, it's a hack that violates level separation rules, but it made a big
difference in practice.
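
Skeleton of the hack (hypothetical names, not the BBN sources):

    #define FRAG_FUDGE 64   /* allowance for headers and slop */

    static int last_largest_fragment;  /* set by IP once per reassembled datagram */

    /* IP reassembly records the largest fragment that made up the datagram
     * it just delivered. */
    void ip_note_largest_fragment(int fragment_bytes)
    {
        last_largest_fragment = fragment_bytes;
    }

    /* TCP clamps its segment size to twice that figure, less a fudge factor,
     * so each segment fragments into at most two pieces, neither of them tiny. */
    int tcp_clamp_segment_size(int current_mss)
    {
        if (last_largest_fragment == 0)
            return current_mss;                       /* no fragmentation seen */
        int limit = 2 * last_largest_fragment - FRAG_FUDGE;
        return limit < current_mss ? limit : current_mss;
    }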

A useful facility that the SATNET folks have been good enough to provide is
the IP echo hosts on the far side of satellite connections (goonhilly-echo
for example).  These switch the IP source and destination addresses and ship
the packet out again.  Since SATNET has the smallest MTU, the longest delay,
and a penchant for dropping packets (this is not SATNET's fault; the speed
mismatch between the ARPAnet and SATNET is ferocious and the gateway clogs),
it's handy for testing TCP/IP implementations.

It has been known to crash insufficiently bulletproofed software.

Dennis Rockwell
CSNET Technical Staff
-----------[000054][next][prev][last][first]----------------------------------------------------
Date:      5 Jun 1985 00:37:36 EDT
From:      MILLS@USC-ISID.ARPA
To:        cak@PURDUE.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Speaking of 4.2 time programs
In response to the message sent  03 Jun 85 23:35:09 EST (Mon) from cak@Purdue.ARPA

Chris,

Mike O'Connor (oconnor@dcn9) installed a program on our Sun workstation
which mumbles the DCNet protocols via our local Ethernet. Unfortunately,
the Sun clock wanders all over the place and is unsuitable for locking
to anything better than a windmill generator. I don't think you want
the DCNet protocols, anyway, but some sort of similar protocol that
operates over real Internet paths. That's not trivial, as a grok at
RFC-889 should show. The fuzzies have a highly evolved tracking algorithm
involving both linear and nonlinear filtering and estimation in order
to achieve the claimed accuracies even on local nets. It would be a useful
and positively fascinating exercise to adapt these algorithms to real
Internet dynamics using UDP for coarse tracking and ICMP timestamps for
fine synchronization. Someone out there should cop a Master's thesis for
going after this one with hammer and tongs.

Dave
-------
-----------[000062][next][prev][last][first]----------------------------------------------------
Date:      5 Jun 1985 1146-PDT (Wednesday)
From:      Jeff Mogul <mogul@Navajo>
To:        MILLS@USC-ISID.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: Speaking of 4.2 time programs
     It would be a useful
    and positively fascinating exercise to adapt these algorithms to real
    Internet dynamics using UDP for coarse tracking and ICMP timestamps for
    fine synchronization. Someone out there should cop a Master's thesis for
    going after this one with hammer and tongs.

Someone already has copped a PhD thesis for devising some very
elegant algorithms to maintain distributed clocks even when
some clocks are lying.  See Keith Marzullo's thesis "Maintaining
the Time in a Distributed System" (Stanford, 1983).  His
implementation used Pup rather than IP protocols, but I think the
mapping is obvious.  Keith's address is Marzullo.PA@Xerox.

Also see Gusella and Zatti, "TEMPO: Time Services for the
Berkeley Local Network", Berkeley EECS report # UCB/CSD 83/163,
which presents a simpler, although in my opinion inferior, algorithm
using UDP.  I can't tell if they got a thesis out of this one or not.
-----------[000064][next][prev][last][first]----------------------------------------------------
Date:      6 Jun 1985 11:05:48 EDT
From:      MILLS@USC-ISID.ARPA
To:        imagen!geof@SU-SHASTA.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Floods of tinygrams from telnet hosts
In response to the message sent  Monday,  3 Jun 1985 11:23-PDT from  imagen!geof@shasta

Geof,

What implementation are you referring to? John and I discussed this a long
time ago. Subsequently, a send policy similar to this was introduced
along with a companion ack policy in the fuzzball TCP, but so far as I
know, not in any other implementation, although Berkeley claim to be doing
that with 4.3bsd.

The mechanism works as follows:

1. Arriving data from the user is queued temporarily at the transmitter. If
   the size of this queue is at least the MSS for the connection, it is
   packetized and sent immediately (subject to the usual window controls,
   of course). If not, the data are held until all previously sent data
   have been acked. Note that data arriving while a previously sent wadge
   is in flight simply pile up in the queue.

2. The receiver acks incoming data immediately if the amount of data passed on
   to the user (i.e. removed from reassembly buffers) since the last ack
   is at least the MSS for the connection. In addition, the receiver acks
   if some arbitrary time (here, about 500 milliseconds) has elapsed since
   new data arrived and no ack was sent.

As reported previously, these policies dramatically improved the performance
of TELNET over mismatched paths, while sustaining good performance of FTP.
We have been using it for about six months now in the fuzzballs over
the raunchiest of paths (would you believe Amateur packet-radio, which uses
CSMA at 1200 bps at 145 MHz?).
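
A minimal sketch of the two policies (hypothetical structure and names; this
is not the fuzzball code):

    #define ACK_DELAY_MS 500

    struct tcb {
        int mss;               /* maximum segment size for this connection   */
        int unacked_bytes;     /* data sent but not yet acknowledged         */
        int queued_bytes;      /* user data waiting to be packetized         */
        int delivered_bytes;   /* data handed to the user since the last ack */
        int ms_ack_owed;       /* time elapsed with an ack outstanding       */
    };

    /* Sender: packetize at once only if a full MSS is queued or nothing is
     * in flight; otherwise new data simply piles up in the queue. */
    int sender_may_transmit(const struct tcb *tp)
    {
        return tp->queued_bytes >= tp->mss || tp->unacked_bytes == 0;
    }

    /* Receiver: ack immediately once an MSS worth of data has been passed
     * to the user, or after roughly 500 ms have gone by with an ack owed. */
    int receiver_should_ack(const struct tcb *tp)
    {
        return tp->delivered_bytes >= tp->mss
            || tp->ms_ack_owed >= ACK_DELAY_MS;
    }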

A major caution using these policies is the interaction between the send and
ack policies with respect to the receiver ack delay, which increases the
apparent delay for TELNET tinygrams. The delay is a major factor in improving
performance with remote echo, since the acks normally piggyback on the echo
segment, improving the apparent response time. However, in cases where
no end-end traffic is wandering backwards over the path and segments less
than the maximum are involved (e.g. many SMTP connections), a lot of dead air
results. The issues need to be studied a bit more.

Dave
-------
-----------[000065][next][prev][last][first]----------------------------------------------------
Date:      Thursday,  6 Jun 1985 12:26-EDT
From:      bnsw@Mitre-Bedford
To:        tcp-ip@sri-nic
Cc:        bnsw@Mitre-Bedford
Subject:   ARPANET/LAN Accounting for UNIX TCP Applications
  We are planning to hook up some LAN's to our ARPANET host using TCP/IP.
We want to keep an account of the traffic (TELNET's, FTP's,TFTP's, and
SMTP's) from the LAN hosts in using the ARPANET host.
  Has anyone written any accounting-type programs to gather usage stats for
the hosts on a LAN that use/passthru an ARPANET host?  Or, from a related
viewpoint, to record ARPANET user statistics applicable for accounting from
just the ARPANET host?  We currently run UNIX 4.1bsd; we will soon be updating
to ULTRIX (as a testbed 4.3bsd).
  Thank you for your time.
Barbara Seeber-Wagner
-----------[000068][next][prev][last][first]----------------------------------------------------
Date:      Thu 6 Jun 85 13:54:13-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        tcp-ip@SRI-NIC.ARPA, local-nets@MIT-MC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Interlan Ethernet card command codes
	The Interlan UNIBUS/QBUS cards (NI1010 and NI2010) went through
a variety of revs of firmware, and new commands were added at some points.
I want a list of which commands were added at which levels of the firmware
(since we have quite a mix of boards here). However, Interlan is unable (!!!)
to provide me with this information. Does anyone out there have a list
of which commands go with which firmware revision levels?
		Noel
-------
-----------[000070][next][prev][last][first]----------------------------------------------------
Date:      Fri, 7 Jun 85 01:36:52 edt
From:      bellcore!karn@Berkeley (Phil R. Karn)
To:        tcp-ip@sri-nic.ARPA
Subject:   Tinygrams on 4.2BSD
It is true that TCP on 4.2 effectively does a push every time the user
process does a write() system call. It really has no choice since there are
no semantics by which the user can indicate a push in the write call. But it
doesn't have to be a problem IF programs call the standard I/O library
instead. Stdio tries to fill its 1K buffers completely before calling the
system, and the user is given a push-like subroutine fflush() for when it is
really needed.  The only real offenders (besides character-at-a-time Telnet)
are those few programs that insist on using the bare I/O system calls
directly to write small amounts of data; they should be taken out and shot.
They give lousy performance even when a network isn't involved.
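
The contrast is easy to see in a generic example (illustrative only, not from
any particular program):

    #include <stdio.h>

    int main(void)
    {
        FILE *out = stdout;        /* stand-in for a stream on a connection */

        /* Buffered through stdio: the library coalesces these into a few
         * large writes instead of one tiny write per record. */
        for (int i = 0; i < 100; i++)
            fprintf(out, "record %d\n", i);

        fflush(out);               /* the push: force the buffer out now */
        return 0;
    }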

I'm still looking forward to the new version which is rumored to include
John Nagle's transmission algorithms.

Phil
-----------[000072][next][prev][last][first]----------------------------------------------------
Date:      7 Jun 1985 12:41:26 EDT
From:      MILLS@USC-ISID.ARPA
To:        bellcore!karn@UCB-VAX.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Tinygrams on 4.2BSD
In response to the message sent  Fri, 7 Jun 85 01:36:52 edt from bellcore!karn@Berkeley 

Phil,

Please understand the "John Nagle algorithm" does not in itself represent
a panacea, but only one of many detail-engineering issues involved in making
TCP work well over widely ranging scenarios. That algorithm must be addressed
in the context of a good ack policy. Experiments done here reveal the algorithm
can result in very poor performance with some ack policies and can adversely
affect performance in scenarios in the middle, so to speak, of the TELNET
character-at-a-time and FTP spectrum. Simply stuffing the algorithm in the
system blindly may exchange better performance at these extremes, which
we ordinarily see first-hand, for poorer performance in the middle (e.g. mail),
which chugs in the background slurping up resources we may not notice.

Dave
-------
-----------[000074][next][prev][last][first]----------------------------------------------------
Date:      8 Jun 1985 03:26-EDT
From:      CERF@USC-ISI.ARPA
To:        mgardner@BBNCCY.ARPA
Cc:        MILLS@USC-ISID.ARPA, mills@DCN6.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
Marianne,

I don't understand exactly what you are disagreeing with; I based my
comments on the tables that Dave published. Are you disagreeing with
the numbers he shows? It seemed to me that those tables reflected
the ISI gateway bearing a considerably larger amount of traffic than
any other gateway by at least a factor of two (if memory serves)
and that it dropped a higher percentage of packets.

Please elaborate on your recent short note so it is clearer to me,
at least, how you interpret Dave's tables.

thanks,

Vint
-----------[000075][next][prev][last][first]----------------------------------------------------
Date:      8 Jun 1985 03:34-EDT
From:      CERF@USC-ISI.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Computer Communications Review
Have you all seen the latest edition of the Computer
Communications Review?  There is an article which on the surface
pans the ARPANET/DOD internet architecture.  I think the author
makes some points worth pondering about congestion - although I
also think his model of how datagram networks function is a bit
off the mark.

Some people think that a routing algorithm is run for every
packet entering an IMP on the ARPANET.  They don't seem to
realize that the routing is done in the background and tables are
produced for purposes of making routing decisions for each
packet.  So the overhead for routing a packet in the ARPANET is
no more than it is for routing in a network which creates a set
menu of routes in a table.

The table look up is all it takes to make the routing decision,
but the contents of the table, in ARPANET, are updated as a
low-level background task.  It doesn't consume much bandwidth
(CPU or line).

Does anyone agree/disagree with the author's other remarks?  Is
anyone considering a response?

Vint
-----------[000076][next][prev][last][first]----------------------------------------------------
Date:      8 Jun 1985 03:41-EDT
From:      CERF@USC-ISI.ARPA
To:        james@GYRE.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
Jim,

unless things have changed dramatically when I wasn't looking,
the IMP takes in up to 1008 bytes but breaks them into 1008 bit
packets.  The X.25 versions of the IMP take in 1024 bytes, so I
suspect the single IMP packet is probably 1024 bits long in data
field rather than 1008, for those IMPs.

The IP datagram sizes therefore tell you whether you are dealing
with single or multi-packet messages in the network.  Any
datagram larger than 126 bytes is therefore a multi-packet
message and subject to the end/end protocol for multipacket
messages.

The 8 messages outstanding problem is usually exacerbated when
there are a lot of small messages - short datagrams carrying
interactive telnet traffic are the worst case.

It's possible I wasn't very clear in what I was trying to say,
but I do believe I'm correct that the IMP still breaks up
messages larger than 126 bytes into multiple small packets of up
to 1008 bits.  I could be wrong about the 1008 bits, it could
have gone up a little, but not much beyond 1024 bits, I bet.

Vint
-----------[000077][next][prev][last][first]----------------------------------------------------
Date:      8 Jun 1985 04:07-EDT
From:      CERF@USC-ISI.ARPA
To:        ron@BRL.ARPA
Cc:        mills@DCN6.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  ARPANET/MILNET performance statistics
Ron,

Hmmmm. the average byte statistics don't tell us about maxima or distribution
of 1822 message (see, I am being careful to distinguish 1822 MESSAGE from
IMP PACKET!) sizes. 

Perhaps Marianne Gardner has available more detailed statistics which
would tell us whether there is any significant amount of IP level
fragmentation going on.  Presumably one would not see the fragmenting
going on at MILISI, for instance. Rather, it would occur at the gateway
which interfaces the local area net to the ARPANET. 

If there were a significant number of 1536 byte IP messages being sent,
these would fragment into two IP datagrams of roughly 1000 bytes and
536 bytes.  [aside: I am no longer sure whether the present preferred
gateway fragmentation algorithm at IP level is to make datagrams
that just fit in the next net, in which case 1000 and 536 would be
more or less right for ARPANET, neglecting headers and stuff; or
whether the policy is to fragment to 576 maximum or to make all
IP fragments uniform in size and less than or equal to 576 or
less than or equal to the next net's maximum message size. Can
a knowledgeable source please set me straight on that? ]

It seems plain that something is causing the average IP datagram
size to be larger at MILISI than at other gateways.

Vint
-----------[000082][next][prev][last][first]----------------------------------------------------
Date:      Sat, 8 Jun 85 12:21:26 EDT
From:      Andrew Malis <malis@BBNCCS.ARPA>
To:        CERF@usc-isi.arpa
Cc:        james@gyre.arpa, tcp-ip@sri-nic.arpa, malis@BBNCCS.ARPA
Subject:   Re: ARPANET/MILNET performance statistics
Vint,

You are completely correct about 1822 messages greater than 126
bytes being broken up into packets up to 126 bytes long each, and
that the IMP end-to-end protocol imposes greater overhead
(destination IMP buffer space preallocation, to be precise) for
multipacket messages.

For X.25 traffic, the IMP uses 134-byte packets, so that a
full-sized X.25 message (1024 bytes + some overhead) still fits
in 8 packets.  The same multipacket overhead applies.

Andy Malis

-----------[000084][next][prev][last][first]----------------------------------------------------
Date:      08 Jun 85 17:29:23 EDT (Sat)
From:      Mike Brescia <brescia@bbnccv>
To:        tcp-ip@nic
Cc:        brescia@bbnccv
Subject:   Re: ARPANET/MILNET performance statistics
Vint, Ron, Dave, Marianne, &al.

The statistics collected from the BBN gateways include only total packet
counts and total byte counts.  The average reported is (you guessed it)
byte-count divided by packet count.  There's no facility for histograms,
maxima, standard deviation, or other interesting statistics.

The gateways currently fragment in the manner you recall, Vint, with the first
fragment(s) being as large as possible, and the last one containing the
leftovers.  The idea of splitting a datagram as close as possible to the half-way
point (or 1/N if N fragments are needed) was not deemed useful for the arpanet,
because the end-to-end overhead is the same for a 130 byte or 576 byte packet as
for a 1000 byte packet.  The best policy for the arpanet was to make the packets
as large as possible.
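
Concretely (an illustration only, not the gateway code), fragmenting the
1536-octet datagram mentioned earlier for a net whose messages top out around
1006 octets comes out like this:

    #include <stdio.h>

    int main(void)
    {
        int header = 20;              /* IP header without options        */
        int mtu    = 1006;            /* e.g. the arpanet message limit   */
        int data   = 1536 - header;   /* payload of a 1536-octet datagram */

        /* All fragments except the last must carry a multiple of 8 octets,
         * so send fragments as large as the next net allows and put the
         * leftovers in the final one. */
        int max_frag_data = ((mtu - header) / 8) * 8;

        while (data > 0) {
            int this_frag = data > max_frag_data ? max_frag_data : data;
            printf("fragment: %d data octets (+%d header)\n", this_frag, header);
            data -= this_frag;
        }
        return 0;
    }

which is roughly the 1000-byte plus 536-byte split guessed at above.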

My intuition about MILISI carrying larger packets than others is that ISI has
large hosts (TOPS20) on both sides of the mil-arpa divide, and they may do
lots of file transfers (ftp big packets) or screen outputs (telnet big
packets).  One of the ISI wizards may be able to shoot holes in that
speculation...

	Good hunting,
	Mike
-----------[000086][next][prev][last][first]----------------------------------------------------
Date:      9-Jun-85   23:01-EDT
From:      Joseph Szep   <ccjts%BOSTONU.bitnet@WISCVM.ARPA>
To:        tcp-ip@Berkeley
Subject:   Please remove me....
-----

     from the tcp-ip mailing list.

                         Thanx           Joe Szep
-----
-----------[000088][next][prev][last][first]----------------------------------------------------
Date:      10 Jun 1985 08:48-PDT
From:      Joel Goldberger <JGoldberger@USC-ISIB.ARPA>
To:        Brescia@BBNCCV.ARPA
Cc:        TCP-IP@SRI-NIC.ARPA
Subject:   Big packets at ISI
We confess!  We do in fact do a fair amount of FTPing and especially
Telnetting among our machines on both sides of the divide.

- Joel Goldberger -
-----------[000089][next][prev][last][first]----------------------------------------------------
Date:      Mon 10 Jun 85 13:21:28-MDT
From:      Mark Crispin <MRC@SIMTEL20.ARPA>
To:        JGoldberger@USC-ISIB.ARPA
Cc:        Brescia@BBNCCV.ARPA, TCP-IP@SRI-NIC.ARPA
Subject:   Re: Big packets at ISI
Considering the important position of the MILISI gateway, perhaps it
might make sense to invest in some form of networking technology to
allow ISI to do internal data transfers without using MILISI (either
an LAN or a private ISI-only gateway)?  ISI's internal needs are
obviously sufficiently great to make this sort of thing desirable.
-------
-----------[000091][next][prev][last][first]----------------------------------------------------
Date:      10 Jun 85 15:55:48 EST (Mon)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   4.2 date setting program
Hi,

Bill Nesheim retrieved a copy of my UDP-based time setting programs for
4.2 and found a lurking bug -- if the client didn't succeed in reaching
any of the servers, the system time would be set back to the time at
which the program started (at least 10 seconds ago.) I've fixed this
and placed a new copy in purdue-merlin:pub/dated.flar (via anonymous
FTP.)

Cheers,
chris

----------
-----------[000092][next][prev][last][first]----------------------------------------------------
Date:      Mon, 10 Jun 85 15:40:53 EDT
From:      "Paul D. Amer" <amer@UDel-Dewey.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        bnsw@MITRE-BEDFORD.ARPA, mizell@usc-isi.ARPA, amer@UDel-Dewey.ARPA
Subject:   arpanet/lan accounting for unix tcp applications
Regarding your question about whether anyone has written any accounting-type
programs to gather usage stats for the hosts on a lan:

At Delaware, we have a project sponsored by the Office of Naval Research
to perform a characterization of LAN traffic.  Over the past year, we
have developed a system that monitors the ethernet, saves one minute
summaries (snapshots) of traffic on a disk, and analyzes the traffic
offline.  Our analysis includes a characterization of the higher level
protocols used, the amount of intra vs. interLAN traffic, a summary
of packet interarrival times, and lots more.  We are currently finishing
a characterization of 2 months worth of LAN traffic involving
approximately 80 million packets.  Our results will be written up
in a report available hopefully by the end of the summer.
-----------[000096][next][prev][last][first]----------------------------------------------------
Date:      11 Jun 85 11:59 PDT
From:      Tom Perrine <tom@LOGICON.ARPA>
To:        tcp-ip@sri-nic
Cc:        Tom Perrine <tom@logicon>
Subject:   Passage TCP/IP for UNIX Sys V
According to "The DEC Professional", June '85, pp. 172-173, Uniq Digital
Technologies has produced a full TCP/IP, SMTP, FTP and Telnet for
UNIX Sys V on VAXen. Does anyone have any other info (anyone have any
firsthand experience with this beast)?

Tom Perrine
Logicon - Operating Systems Division

-----------[000098][next][prev][last][first]----------------------------------------------------
Date:      Wednesday, 12 Jun 1985 06:54:15-PDT
From:      zhang%erlang.DEC@decwrl.ARPA  (Lixia Zhang)
To:        tcp-ip@sri-nic
Subject:   TCP Timer
I have a few questions concerning setting timers for outstanding TCP
segments.

The following paragraph from the TCP spec (rfc-793, page 10) sounds as though
a timer should be set for each outstanding segment:

"When the TCP transmits a segment containing data, it puts a copy on a
retransmission queue and starts a timer;  when the acknowledgment for that 
data is received, the segment is deleted from the queue.  If the
acknowledgment is not received before the timer runs out, the segment is
retransmitted."

My questions are:

- Should we understand it as one timer need be set for each outstanding
  segment?

- What are the situations in real implementations?  From what I've heard, most
  implementations use a single timer per connection - am I right?
  I sort of remember this was mentioned in a recent msg
  that some implementations set a timer for each segment, some others use a
  single timer.  What is the population percentage for each side?

- Does anyone have any idea, or observations, about the performance
  difference, big or small, between the two different implementations?

Lixia
-----------[000100][next][prev][last][first]----------------------------------------------------
Date:      Wed, 12 Jun 85 11:30 EDT
From:      David C. Plummer in disguise <DCP@SCRC-QUABBIN.ARPA>
To:        Lixia Zhang <zhang%erlang.DEC@DECWRL.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   TCP Timer
    Date: Wednesday, 12 Jun 1985 06:54:15-PDT
    From: zhang%erlang.DEC@decwrl.ARPA  (Lixia Zhang)

    I have a few questions concerning setting timers for outstanding TCP
    segments.

    The following paragraph from the TCP spec (rfc-793, page 10) sounds as though
    a timer should be set for each outstanding segment:

    "When the TCP transmits a segment containing data, it puts a copy on a
    retransmission queue and starts a timer;  when the acknowledgment for that 
    data is received, the segment is deleted from the queue.  If the
    acknowledgment is not received before the timer runs out, the segment is
    retransmitted."

    My questions are:

    - Should we understand it as one timer need be set for each outstanding
      segment?

    - What are the situations in real implementations?  From what I've heard, most
      implementations use a single timer per connection - am I right?
      I sort of remember this was mentioned in a recent msg
      that some implementations set a timer for each segment, some others use a
      single timer.  What is the population percentage for each side?

    - Does anyone have any idea, or observations, about the performance
      difference, big or small, between the two different implementations?

Our experience at Symbolics says that a per-connection timer and only
retransmitting the FIRST segment on the retransmission queue is
sufficient.  In a mostly-reliable network medium, most segments do get
through and are then acknowledged.  If one does get through, it winds up
at the front of the retransmission queue.  The other segments on the
retransmission queue usually got through, but can't be acknowledged
until the first segment is acknowledged.  When the first segment DOES
get through, the whole batch is often acknowledged.
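
A sketch of that scheme (hypothetical names, not the Symbolics sources):

    struct segment {
        struct segment *next;
        long            seq;
        int             length;
    };

    struct connection {
        struct segment *rexmit_head;  /* oldest unacknowledged segment   */
        int             rexmit_timer; /* the single per-connection timer */
    };

    /* On expiry, resend only the segment at the head of the retransmission
     * queue; an ack for it will usually cover everything queued behind it. */
    void on_rexmit_timeout(struct connection *c, int timeout,
                           void (*resend)(const struct segment *))
    {
        if (c->rexmit_head != 0) {
            resend(c->rexmit_head);
            c->rexmit_timer = timeout;   /* restart the one timer */
        }
    }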

-----------[000102][next][prev][last][first]----------------------------------------------------
Date:      12 Jun 1985 1527-PDT (Wednesday)
From:      Jeff Mogul <mogul@Navajo>
To:        tcp-ip@sri-nic
Subject:   Minor nit with RFC948 - ARP and broadcasts
[If someone knows how to forward this to Ira Winston, the author
 of RFC948, I would appreciate it.  He's not listed in the NIC
 WHOIS database.]

RFC948, "Two Methods for the Transmission of IP Datagrams Over
IEEE 802.3 Networks", seems in general to be a codification of
the prevailing philosophy, and I have no complaints about that
(although I still don't understand why they replaced the "type"
field with a useless "length" field!)

However, one statement lit up the warning lights:

      Broadcast Address

         The broadcast Internet address (the address on that network
         with a host part of all binary ones) should be mapped to the
         broadcast 802.3 address (of all binary ones).

>>>>     The use of the ARP dynamic discovery procedure is strongly	<<<<
>>>>     recommended.							<<<<

Given the indentation of the document, I read this as recommending
the use of ARP to discover the mapping between IP and 802.3
broadcast addresses.  In my opinion, this should not only be
strongly "disrecommended", it should be prohibited.

In the best of all possible worlds, there would be nothing wrong
with this.  However, in the real world, we have systems that
	(a) don't know how to recognize some of the IP broadcast
		addresses now in use
	(b) forward packets that they receive, if the destination
		address is not one of their own (or a recognized
		broadcast.)
Specifically, many 4.2BSD hosts (a) don't know about all 1's
broadcasts, only all 0's, and (b) by default act as gateways.

Imagine what happens when another host sends a broadcast to,
say, 36.255.255.255.  A 4.2 host receives the packet, says "this
ain't for me and I don't recognize it as a broadcast, so I'd better
send it on its way."  It sends an ARP request on net 36, and suppose
it gets a reply: well, then it sends the packet out again, using
the hardware broadcast address provided by ARP, hears the packet
again, ad nauseam.  With just one 4.2 host on the net, we might
see on the order of 255 repetitions (until the TTL field got to zero.)
With N such hosts on a network, we could get up to 255^N repetitions
(or maybe it's N^255, either way it's bad.)
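
A rough way to see the arithmetic (hypothetical code; the bad-host count and
the TTL are the only inputs that matter):

    # Every copy of the misdirected broadcast that appears on the wire is heard
    # by every host that neither recognizes it as a broadcast nor declines to
    # forward it, and each such host puts another copy (TTL decremented) back
    # onto the wire using the hardware broadcast address it learned via ARP.
    def extra_copies(n_bad_hosts, initial_ttl=255):
        generation = 1                       # the original broadcast datagram
        total = 0
        for _ in range(initial_ttl - 1):     # each forwarding hop burns one TTL unit
            generation *= n_bad_hosts        # every bad host re-forwards every copy it hears
            total += generation
        return total

    print(extra_copies(1))    # 254 -- roughly the "255 repetitions" with one bad host
    # extra_copies(2) already has about 77 digits; the exact exponent hardly matters.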

So: an ARP server should NEVER give out a broadcast hardware address!

-Jeff
-----------[000103][next][prev][last][first]----------------------------------------------------
Date:      Wednesday, 12 Jun 1985 16:19-PDT
From:      imagen!geof@su-shasta.ARPA
To:        shasta!tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP Timer

The first packet in the retransmission queue is the important one for
retransmission purposes, as Dave Plummer points out, since it is the
first packet that will be acked (and removed from the retrans. queue).

Dave's point about only retransmitting the first packet from the
retransmission queue is interesting.  I would be stronger than his
statement that this is "good enough" -- if a host always retransmits
everything on the retransmission queue, performance could be
drastically affected in certain situations.  Consider, for example, a
host that is transmitting through a gateway, and can send packets
faster than the gateway can forward them (perhaps a gateway from
Ether->Arpa).  Eventually, the gateway runs out of buffers and starts
to miss packets.  It is not uncommon to see a situation, for example,
where a gateway loses the fourth packet from every batch that is
blasted out by a particular host.
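
A toy model of that bottleneck (hypothetical numbers, not a measurement of
any real gateway): if the whole burst arrives before the gateway manages to
forward anything and the gateway has three free buffers, the fourth packet of
every burst is the first one to die.

    def send_burst(burst, buffers=3):
        queued, dropped = [], []
        for pkt in burst:
            # Each arriving packet either finds a free buffer or is discarded.
            (queued if len(queued) < buffers else dropped).append(pkt)
        return queued, dropped      # the queued packets are forwarded later

    print(send_burst([1, 2, 3, 4]))   # ([1, 2, 3], [4]) -- the fourth packet is lost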

This sort of lossage can be seen at the destination host as well.  It
is a common cause of lost connections when a gateway is fragmenting
packets (in reverse of the above: the fast little gateway sends two
fragments very close together, and the big, slower host always misses
the second).  TCP performance can be affected similarly.

Dave's comment that retransmitting the first packet often results in
everything being acked seems to fit this model.  The first packet on
the retransmission queue (the first one that the foreign host didn't
ack) is probably the single packet that was lost.  [Interestingly,
Xerox' XNS SPP implementation also retransmits only the first packet on
the queue, for similar reasons (they have a serial-line gateway with
one packet buffer in it).]

Unfortunately, retransmitting only the first packet on the
retransmission queue, while it works, also has performance problems.
If the foreign host didn't lose just one packet, but lost a whole
string of them, TCP degenerates to a lock step (cf.  TFTP) protocol for
the rest of the string.  Over a very long-haul connection (e.g.,
satellite), this can cause a delay of seconds every time a packet is
lost.  Maybe in practice the number of packets lost in a row is
statistically close enough to 1 that this is not a problem.
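
(For scale, under those assumptions: if eight full-sized segments in a row
are lost and only the head of the queue is retransmitted after each timeout,
the sender needs roughly eight round trips to recover; on a satellite path
with a round-trip time near a second, that is around eight seconds of dead
time for one burst of loss, and more if each step must wait out a full
retransmission timeout.)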

I don't know a real answer to this problem (maybe someone else does...)
other than flow control (congestion control?) between a host and the
gateway(s) that it is using on a particular connection (which is not
really possible in the TCP world -- say, wasn't someone asking if there
were any supporting arguments for that anti-TCP article in last month's
SigComm review?:-)).  Perhaps one might arrange to heuristically determine
in the sender that the Nth packet is reliably being lost, and throttle
back on the inter-packet time accordingly.  Sounds complicated.

- Geof
-----------[000104][next][prev][last][first]----------------------------------------------------
Date:      12 Jun 1985 17:54:00 PDT
From:      POSTEL@USC-ISIF.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   re: minor nit with RFC-948 -- ARP and Broadcasts

Jeff:

OOPS.  Let me take that as an editorial error.  The indenting should have
placed the recommendation to use ARP under dynamic discovery in general,
not specifically under broadcast.  Your point is well taken.  One should
not use ARP in connection with the broadcast IP address.

--jon.
-------
-----------[000105][next][prev][last][first]----------------------------------------------------
Date:      Wed, 12 Jun 85 15:37:25 EDT
From:      BOB STRECKFUSS <rstreckf@bbncc-washington>
To:        tcp-ip@sri-nic
Cc:        streckfuss@BBN-UNIX
Subject:   Add Name to TCP-IP mailing list
Please add my name and address to the TCP-IP mailing list.  

Bob Streckfuss
streckfuss@bbncct

 Thank you.

-----------[000106][next][prev][last][first]----------------------------------------------------
Date:      12 Jun 1985 16:06:05 EDT
From:      INCO@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TAC measurement/performance

     Among the various messages on DDN performance and timing, I need
to find out if anyone has done any timing measures on the TAC (specifically
the c/30) and in what situations.  Thanks.

Inco at Usc-Isid
(Steve Sutkowski)
-------
-----------[000112][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13 Jun 85 10:14:16 edt
From:      Liudvikas Bukys  <bukys@rochester.arpa>
To:        tcp-ip@sri-nic.arpa
Subject:   vax/vms egp?;  gateways
(1)
Has anyone run/written an EGP daemon under VAX/VMS with the TWG package?
Would such a system be usable as an Arpanet-Ethernet gateway?
(I know the arguments against it; we don't have to repeat them.)

(2)
Would potential sources of other gateways please step forward?
If I could get net addresses and/or phone numbers of the appropriate
people at BBN, MIT, CMU, BRL, Proteon, Bridge, etc, that would be great.
Please limit responses to machines/software which can do
1822 and Ethernet.

-----

No need to clutter the list; reply by mail to me; I will summarize
if there is demand for it.

Liudvikas Bukys
bukys@rochester.arpa
[rochester!bukys via allegra, decvax, seismo (uucp)]
-----------[000113][next][prev][last][first]----------------------------------------------------
Date:      13 Jun 1985 10:27:35 EDT
From:      MILLS@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Retransmission policies
Mail-From: MILLS created at 13-Jun-85 10:22:22
Date: Thu 13 Jun 85 10:22:22-EDT
From: The Mailer Daemon <Mailer@USC-ISID.ARPA>
To: MILLS@USC-ISID.ARPA
Subject: Message of 13-Jun-85 10:21:51

Message failed for the following:
shasta!tcp-ip@SRI-NIC.ARPA: 550 No such local mailbox as "shasta!tcp-ip", recipient rejected
	    ------------
Date: 13 Jun 1985 10:21:51 EDT
From: MILLS@USC-ISID.ARPA
Subject: Re: TCP Timer
To:   imagen!geof@SU-SHASTA.ARPA, shasta!tcp-ip@SRI-NIC.ARPA
cc:   MILLS@USC-ISID.ARPA

In response to the message sent  Wednesday, 12 Jun 1985 16:19-PDT from  imagen!geof@shasta

Geoff,

Having as much experience as anybody with noisy TCP paths, I can certainly
confirm that the best strategy is to retransmit just the head end of the
retransmission queue. However, I gather from the tone of your note that
you are considering only the first packet on that queue, which is what
the TOPS-20s do. We have found much better performance allowing several
segments to be combined, up to the MSS in total length, when a retransmission
is necessary. This greatly
reduces the gateway loading when TELNET traffic is involved and also helps to
control damage when the retransmission is due to congestive losses in the net.

Another thing we have done is to carefully count outstanding packets and block
further transmission if the total is greater than a magic number (currently
eight). The magic number is reduced if an ICMP Source Quench arrives and
returns to its original value in a controlled way. The bookkeeping to accomplish
this is fairly complicated, since segments can be combined, retransmitted, etc.
The fuzzballs used to use this mechanism exclusively to control packet fluxes, but recently
switched to the send/ack policy I described recently to this list. However,
that send/ack policy leads to severely suboptimal performance in many cases.
We are planning to integrate both the old and new policies to see if performance
can be maintained even in these cases.
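
A sketch of that outstanding-packet limit (illustrative only; the
halve-on-quench and add-one-per-ACK rules below are stand-ins, not the
actual fuzzball policy):

    class PacketLimiter:
        CEILING = 8                     # the "magic number" of packets in flight

        def __init__(self):
            self.limit = self.CEILING
            self.in_flight = 0

        def may_send(self):
            return self.in_flight < self.limit

        def on_send(self):
            self.in_flight += 1

        def on_ack(self):
            self.in_flight = max(0, self.in_flight - 1)
            if self.limit < self.CEILING:       # creep back toward the ceiling
                self.limit += 1

        def on_source_quench(self):             # an ICMP Source Quench arrived
            self.limit = max(1, self.limit // 2)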

Dave
-------
-------
-------
-----------[000114][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13 Jun 85 13:38:19 pdt
From:      cbosgd!vilya!am@Berkeley
To:        cbosgd!ucbvax!tcp-ip@Berkeley
Subject:   Re: Add Name to TCP-IP mailing list
Please add my name to the mailing list also.  I may be sending this
to the wrong place; if you get it, please notify me if you are not the
right person.
			Avi Malek 
			ATT Bell Labs
			tel (718)435-3405; (201)299-3555; vilya!am

-----------[000115][next][prev][last][first]----------------------------------------------------
Date:      13 Jun 1985 10:40-EDT
From:      CLYNN@BBNA.ARPA
To:        zhang%erlang.DEC@DECWRL.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP Timer
Lixia,
	Working with the DARPA Protocol Suite on TOPS20s has led to
the same conclusions that Dave has mentioned.  I think that Geof's
statement that "performance could be drastically affected" is weak - it
is drastically affected, especially when the retransmitter does not
have a very good round-trip-time algorithm.

	A refinement of only sending the "first packet" in the queue
is to only retransmit one packet - the "first packet" plus as much
additional data from other packets as will fit into a maximum sized
segment (repacketization).  This is effective in reducing character
per round-trip-time echoing in telnet applications (dribble echo)
since most original packets have few data octets and most (if not all)
of the outstanding data will fit into a maximum sized segment.  As Geof
points out, it does present problems if the packets to be retransmitted
are all full size; more below.
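
A sketch of that repacketization step (hypothetical code, assuming the
retransmission queue holds (sequence number, data) pairs and, in the example
call, a maximum segment size of 536 octets):

    def build_retransmission(retrans_queue, mss):
        # Start from the first unacknowledged segment and append data from the
        # following segments until a maximum sized segment is full or a hole
        # in sequence space is reached.
        if not retrans_queue:
            return None
        first_seq, first_data = retrans_queue[0]
        data = bytearray(first_data)
        next_seq = first_seq + len(data)
        for seq, more in retrans_queue[1:]:
            if seq != next_seq or len(data) >= mss:
                break
            take = more[:mss - len(data)]
            data.extend(take)
            next_seq += len(take)
        return first_seq, bytes(data)

    # Three one-character telnet segments collapse into a single retransmission:
    print(build_retransmission([(100, b"a"), (101, b"b"), (102, b"c")], mss=536))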

	The statement that when a packet is retransmitted, it plus all
the others that made it the first time will be acked is true some of
the time.  It fails when the receiver is packet-event driven, i.e.,
the receiver responds to each packet it processes.  In such a case it
will ack the retransmitted packet, then proceed to reassemble the
other packets from its reassembly queue, acking each one in turn.  The
sending host then sees several acks arriving instead of one.  If it has
set a timer for each packet in the retransmission queue, the timer for
the second packet has probably already gone off.  Consequently, when
the ack for the packet which was retransmitted arrives, the second
packet becomes the first and is retransmitted - frequently just before
the ack arrives.

	Retransmission of everything which is outstanding can make
performance worse, due to the interaction of factors, e.g., window
size, available data, round-trip-time algorithm, gateways, etc.  A
large window with lots of data causes, for example, the x
implementation on the x, to send as many packets as fast as possible.
Many gateways cannot handle the large burst of packets, so discard a
few, pass a few, drop a few more, etc.  When the retransmission timer
goes off, everything in the queue is retransmitted - some packets get
through and a few more are dropped.  The lost packets seem to make the
round-trip-time estimate grow very large, so throughput goes way down
(wait a minute, send a flood, wait a minute ...).  If the receiving
host can figure out what is happening it can "discourage" such
behavior by using the window to limit the number of packets so that
the gateway will not be swamped (source quench doesn't seem to be well
enough defined and widely enough implemented to be effective).  In
practice, closing the window to just cover the gap of lost packets and
delaying the ack until the reassembler has to stop seems to help a
lot.  (Of course, the sending host should attempt to limit its rate of
packet generation - delays between packets (or fragments in a gateway),
maximum number of outstanding packets (possibly a function of what had
to be retransmitted), etc.)

	Add TOPS20s to your survey under the "single timer,
retransmits a single repacketized packet, with delayed ack" column.

Charlie
-----------[000119][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13 Jun 85 14:57 EDT
From:      WIBR@MIT-MULTICS.ARPA
To:        tcp-ip@SRI-NIC.ARPA

,.
-----------[000120][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13 Jun 85 15:02 EDT
From:      WIBR@MIT-MULTICS.ARPA
To:        tcp-ip@SRI-NIC.ARPA



q
-----------[000121][next][prev][last][first]----------------------------------------------------
Date:      Thu 13 Jun 85 15:04:26-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        CLYNN@BBNA.ARPA, zhang%erlang.DEC@DECWRL.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, JNC@MIT-XX.ARPA
Subject:   Re: TCP Timer
	The TCP I did for Bridge (the one used in the CS-1/T terminal
concentrator) used the same strategy, and for exactly the same
reasons. It only kept a single timer on the oldest data. On timeout,
it sent up to one full packet of un-ack'd data. So yet another system
in that column. (I'm not sure if it still does this, since they
changed things around and ripped some stuff they didn't understand the
use of, like subnet masks, out.)

	This discussion brings up an interesting point, which is that
on all except the slowest lines, network traffic control wants to
deal in units of packets, not bytes, since most overhead is per
packet. Currently TCP is byte oriented because of the window and
flow control; we need to have 'consciousness raising' for higher level
protocol implementations to orient them to this aspect of IP (if
and when IP ever gets traffic control).
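
(The arithmetic behind that observation: a one-character telnet segment costs
roughly 20 octets of TCP header plus 20 octets of IP header for its single
octet of data, so a byte-granular window of 4096 octets can admit anywhere
from eight 512-octet segments to 4096 tinygrams, and it is the number of
packets, not the number of data octets, that determines most of the overhead.)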

	Noel
-------
-----------[000123][next][prev][last][first]----------------------------------------------------
Date:      Thu, 13 Jun 85 16:39:00 EDT
From:      Mike Muuss  <mike@brl>
To:        TCP-IP@sri-nic.arpa
Subject:   Host Table Change
The latest version of the host table changed the
previous format of entries which were formatted like:

HOST : addr : NAME.ARPA,NAME,NICNAME1 : stuff...

to a new format, which seems to be roughly like:

HOST : addr : NAME.ARPA,NICNAME1,...,NAME : stuff...

This has caused all the mailers at BRL to behave oddly.
Of course, our mail system maintainer is traveling, but
as best as I can figure it, if the first and second
name differ (sans any .ARPA), it chooses the SECOND name,
and slaps a .ARPA onto it if needed.  Lots of hosts
(including the NIC) are refusing to accept mail which
is addressed to them at a nicname.ARPA address
(which is weird, but not a combination in the table,
so that's legit, I guess).
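
To make the failure concrete, a toy version of such a table builder (the
entries and code below are invented, not the MMDF nictable program):

    OLD = "HOST : 10.0.0.1 : SOMEHOST.ARPA,SOMEHOST,NICK1 : stuff :"
    NEW = "HOST : 10.0.0.1 : SOMEHOST.ARPA,NICK1,SOMEHOST : stuff :"

    def official_name(entry):
        names = entry.split(":")[2].strip().split(",")
        # The assumption baked in: names[0] is "official.ARPA" and names[1] is
        # the bare official name, so take the SECOND name and re-add .ARPA.
        name = names[1]
        return name if name.endswith(".ARPA") else name + ".ARPA"

    print(official_name(OLD))   # SOMEHOST.ARPA  -- what the mailers expected
    print(official_name(NEW))   # NICK1.ARPA     -- a nicname.ARPA most hosts reject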

Clearly, there are two factors here:

1)  Bizarre assumptions in our mail table builder.
2)  A (seemingly) unannounced table layout change
at the NIC, (seemingly) without added functionality.

When we transition to using our domain server full-time
(hopefully around the 19th of this month), our reliance
on the NIC host tables will end (and our reliance on
their name server will begin).

Until then, do I need to engineer a hack until Kingston
returns, or can the table format revert?

	-Mike
-----------[000126][next][prev][last][first]----------------------------------------------------
Date:      Fri 14 Jun 85 16:07:28-PDT
From:      HOSTMASTER@SRI-NIC
To:        mike@BRL.ARPA
Cc:        Senior-Staff@BRL.ARPA, LMaybaum@DDN1.ARPA, NIC@SRI-NIC.ARPA, Support@BRL.ARPA, BRL-UNIX@BRL.ARPA, TCP-IP@SRI-NIC.ARPA, Large-List-People@MIT-MC.ARPA, Wancho@SIMTEL20.ARPA
Subject:   Re: Partial BRL Mail Outage:  Explained
Mike,

A minor change in the ordering of the nickname field in version #457 of
the host table proved to be an inconvenience to BRL hosts.  Therefore
the following version (#458) reverted the name field to the following
format:

... :domain name (if any), official name.arpa, official name, nickname: ...

We are sorry for any inconveniences this has caused.

In the future please send all correspondence pertaining to host tables
directly to HOSTMASTER@SRI-NIC.ARPA.  The fact that Hostmaster did not
receive your messages directly caused a delay in correcting the problem.

-------
-----------[000127][next][prev][last][first]----------------------------------------------------
Date:      Fri 14 Jun 85 17:41:01-MDT
From:      Mark Crispin <MRC@SIMTEL20.ARPA>
To:        mike@BRL.ARPA
Cc:        Senior-Staff@BRL.ARPA, LMaybaum@DDN1.ARPA, NIC@SRI-NIC.ARPA, Support@BRL.ARPA, BRL-UNIX@BRL.ARPA, TCP-IP@SRI-NIC.ARPA, Large-List-People@MIT-MC.ARPA, Wancho@SIMTEL20.ARPA
Subject:   Re: Partial BRL Mail Outage:  Explained
     It is interesting that only the TOPS-20 systems were put off by
getting mail referring to "nickname.ARPA".  That means that most of
the non-TOPS-20 sites still have heuristics which recognize ".ARPA"
as a special case.

     When the NIC started distributing a host table with the .ARPA
names, I removed these heuristics, so that the TOPS-20 software would
correspond with the obvious intent of the NIC table.

     BRL's experience should rather dramatically show how bad such
heuristics can be.  Remember, the RFC's are quite clear in stating that:
 . only official names may appear in machine-generated fields (message
   headers, SMTP transactions)
 . only official names may have .ARPA applied

     My guess is that BRL ignored the official entry in the host table
entirely and used the remaining entries as the name list (ala the old
NIC host table).  It then applied .ARPA to traffic going out and
stripped it coming in (the standard heuristic).  Folks, this is a kludge!

     I believe the TOPS-20's were doing the right thing.  Comments?

-- Mark --
-------
-----------[000128][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14 Jun 85 17:21:45 EST
From:      John Leong <leong%CMU-ITC-LINUS@CMU-CS-PT.ARPA>
To:        tcp-ip@sri-nic.arpa
Subject:   rfc948

Re : RFC948 Re: IP on 802.3/802.2 by Ira Winston

Using the 802.2 link layer format, the SAP number for IP is 96.  What
is the SAP number for ARP ??

(Note that they have different Ethernet packet types in non-802.2 format
: 0800 and 0806 respectively)
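
For reference, the framing difference the two formats imply (a rough sketch:
the addresses below are arbitrary, the LLC control value assumes Type 1 UI
frames, and the ARP SAP is left open since that is exactly the question):

    import struct

    IP_ETHERTYPE  = 0x0800     # Ethernet (non-802.2) type for IP
    ARP_ETHERTYPE = 0x0806     # Ethernet (non-802.2) type for ARP
    IP_SAP        = 96         # the IP SAP cited above; no ARP SAP is given

    def ethernet_header(dst, src, ethertype):
        return dst + src + struct.pack("!H", ethertype)

    def ieee_802_header(dst, src, payload_len, sap):
        # 802.3 carries a length where Ethernet carries a type; protocol
        # identification moves into the 802.2 LLC header (DSAP, SSAP, control).
        llc = struct.pack("!BBB", sap, sap, 0x03)      # 0x03 = unnumbered information
        return dst + src + struct.pack("!H", payload_len + len(llc)) + llc

    dst = bytes([0xFF] * 6)                            # hardware broadcast, for the demo
    src = bytes([0x08, 0x00, 0x20, 0x01, 0x02, 0x03])  # made-up station address
    print(ethernet_header(dst, src, IP_ETHERTYPE).hex())
    print(ieee_802_header(dst, src, 576, IP_SAP).hex())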

John Leong@*
leong@@cmu-cs-h

-----------[000129][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14 Jun 85 17:49:03 EDT
From:      Mike Muuss <mike@BRL.ARPA>
To:        Senior-Staff@BRL.ARPA, COL <LMaybaum@ddn1.ARPA>, NIC@sri-nic.ARPA
Cc:        Support@BRL.ARPA, BRL-UNIX@BRL.ARPA, TCP-IP@sri-nic.ARPA, Large-List-People@mit-mc.ARPA, Wancho@simtel20.ARPA
Subject:   Partial BRL Mail Outage:  Explained
This message reports on the details of the recent 2-day partial mail outage
at BRL.

SYMPTOMS.

The triggering event was an unannounced set of NIC Host Table updates which
our systems picked up and installed Wednesday night.  These updates changed
the format of the host name entries in the table.  Previously, host table
entries had the form:

HOST : address(es) : formalname.ARPA,formalname,nicname1,nicname2 : stuff :

As part of adding alternate names in other domains, this arrangement got
changed to:

HOST : address(es) : formalname.ARPA,nicname1,nicname2,formalname : stuff :

A secondary problem was that the host-table converter for our MMDF mail
system made some assumptions about the layout of the table, such that it
expected both forms of the formal host name to be listed BEFORE any
nicnames.  The new layout broke that assumption, with the result that all
mail from BRL was having the outbound mail addresses rewritten to
nicname1.ARPA, a string most hosts don't have in the table.

Of the many systems on the network, the only ones that seemed overly put out
about getting mail of this form were the TOPS-20 folks.  They rejected a
TO address of nicname1.ARPA as unknown, causing us to return lots of mail
as undeliverable.  A fair amount of this was official BRL correspondence
being sent to other Army elements, and caused much concern in our front
office.

In addition to the symptoms described above, one of our older machines
running an older version of our mail software (BRL-VLD, aka VLD70) somehow
"forgot" what it's name was due to the new mail tables, and refused to
receive ANY inbound mail AT ALL.  This caused several hundred pieces of
mail, much of it official business, to be returned as undeliverable.

CORRECTIVE ACTION.

As always happens, our mail system maintainer (Doug Kingston) was away
on TDY to the USENIX conference.  The first signs of trouble came early
Thursday morning from BRL-VLD, and George Hartwig began experimenting
to try and alleviate the problem.  The full extent of the problem was
not known until Dave Towson brought to my attention what he felt to be an
abnormally large rate of mail rejections on messages he had sent;  nearly
all going to TOPS-20 sites.

By Thursday evening, I had convinced the VLD70 that it knew its name again,
and coerced it to once again receive mail.  Alas, in the process it seems
to have lost the ability to SEND mail, but for the time being that struck
me as somewhat of an improvement;  users could access lots of other machines
nearby for mail sending.

Owing to a small stroke of luck, I was able to contact Doug Kingston, and at
2000 he was hard at work tracking the table-builder problem causing the
problems with outbound mail.  He installed a new version of
/usr/mmdf/table/nictable on BRL-VGR, but didn't build any new databases, and
didn't leave any mail on what he had done.  I returned Friday at noontime to
discover that our problems were not any better, and went investigating.
First thing I tried was to generate a new mail database using the new
version of the program Doug installed.  When that database was installed,
BRL-VGR lost the ability to send or receive mail.

This unfortunate turn of events forced me to delve into the source code for
the nictable.c program, and I was unable to determine how the previous
night's code changes were supposed to help, so I implemented my own changes.
After some fiddling and testing, my new version produced tables that seemed
visually correct, so I installed it and generated a new database.  After a
half an hour of waiting for it to finish, and trying to battle the load
average down below 7, I was finally able to test it.  Fortunately, it
worked, and the first of many test messages wafted off to Wancho@simtel20,
who was the hapless TOPS-20 recipient of all my test mail.

Then, all (!)  that remained was to distribute the new program to the 8
other participating BRLNET machines, and rebuild THEIR databases.  By 1700
on Friday, all mail databases on the VAXen and Goulds had been rebuilt, and
each system had been personally hand-checked to make certain mail could now
flow.

There is no telling how much mail got returned during this fiasco.  Users
should be encouraged to re-send their failed mail now, and it *should* go
through.

STATUS.

The BRL VAXen (plus HEL-ACE) are all seemingly back to working order.
The Gould (BRL-SEM) too.

The 2 remaining PDP-11 systems are still in various states of difficulty.
Both BRL-BMD and BRL-VLD are still unable to send to TOPS-20 sites, until
George Hartwig can determine how to install the new nictable program there.
In addition, BRL-VLD is still unable to send mail ([PARM] No valid author
specification present), but at least mail does seem to be flowing in and
getting delivered properly.

These problems I leave to George;  hopefully when Doug returns on Monday,
the PDP-11s can be rapidly returned to full working order.

COMMENTARY.

The NIC Host Table is currently the single most important file on the
MILNET; if this file is capriciously changed, the potential for network-wide
harm is substantial.  The good folks at the NIC have for many years tended
this file with care and attention to detail; my complaint in this instance
is that there was no warning given to Host Administrators or Technical
Liaisons that this change was impending.  With some advance notice, we could
have arranged to consider the implications beforehand, and either adjusted
our software before anything went wrong, or shut off our automatic
host-table update mechanism until we were prepared to attack the problem in
an orderly manner.

Soon, the NIC Host Table is going to be replaced by the magic domain-server
system, and I'm sure that there will be some growing pains associated with
that.  In the interim, let's finish testing our domain code, and stop having
to put out fires with the existing, nearly obsolete, mechanism.

I just hate Mondays, especially when they happen on Thursday.
	-Mike
-----------[000132][next][prev][last][first]----------------------------------------------------
Date:      Fri, 14 Jun 85 19:55:14 EDT
From:      Mike Muuss <mike@BRL.ARPA>
To:        Mark Crispin <MRC@simtel20.ARPA>
Cc:        tcp-ip@sri-nic.ARPA
Subject:   Doing the Right Thing
[In responding to MRC's comments, I am reducing the distribution
back down to the TCP-IP@NIC list]

I don't believe that I implied that the TOPS-20 systems were doing anything
wrong; it's just that they were where we ran into the problems.

Yes, what BRL's mail systems are doing right now is perhaps somewhat
distasteful, but rather than expending effort on improving the current
table-driven software, I have directed that all implementation effort be
focused on the transition to domain-based naming using domain servers &
resolvers.  We have a version of MMDF which operates with the domain server
system currently in testing, and hope to begin using it in production fairly
soon (at least against our own local domain server).

Interestingly, in the current state of affairs, the nicname.ARPA syntax
really isn't expected to be valid (because it is not in the table), but in
the domain-server view of the world, this is a perfectly correct syntax.
Our intention was to make it possible for our users to enter the new syntax
as soon as possible, so that they could begin to get accustomed to it; this
has been true for many months now.

However, implementing this meant that we had to "infer" the valid set of
nicnames for each host in the .ARPA domain by munging the table.  The code
that built our internal tables "KNEW" the format of the NIC tables, and thus
our problem.

At such time as the NIC reorders the tables to have the full (new) domain
name of the host FIRST, our table-builder will break again, but hopefully we
will have transitioned to using the domain servers by then, and have rid
ourselves of our .ARPA tables altogether!

	-Mike
-----------[000136][next][prev][last][first]----------------------------------------------------
Date:      15 Jun 1985 06:40-EDT
From:      CERF@USC-ISI.ARPA
To:        zhang%erlang.DEC@DECWRL.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP Timer
Lixia,

I can't offer any statistics on other implementations, but one would
expect that a timer for the segment at the head of the queue for
any particular connection would be sufficient. If that segment times
out, you retransmit it and perhaps you look to see if any other
segments on the queue have timed out.  When the acknowledgement is
received, you reset the timer for the next segment in the queue
if it has not also been acknowledged.

The calculation of the proper timeout in a variable delay 
environment is a challenge. Perhaps Dave Mills has the most experience
in devising ways to cope with varying transmission and propagation
delays.  Others have experimented with solutions to this problem and
I trust they will also respond to your query.  You might also read 
some of Dave Clark's notes to implementors on the subject of handling
retransmission.

Vint Cerf
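
A rough sketch of the head-of-queue approach described above (present-day
C, with invented names; not any particular TCP's code):

    #include <stddef.h>
    #include <time.h>

    struct segment {
        struct segment *next;
        time_t          sent_at;     /* when this copy was (re)transmitted */
        int             acked;       /* set once the ACK covers this segment */
        /* ... sequence numbers, data pointer, length ... */
    };

    struct connection {
        struct segment *retx_queue;  /* unacknowledged segments, oldest first */
        double          rto;         /* current retransmission timeout, seconds */
    };

    /* Called periodically: only the head of the queue carries the "real"
       timer.  If it has expired, retransmit it and, while we are at it,
       sweep the rest of the queue for other stale segments. */
    void check_retransmit(struct connection *c, time_t now,
                          void (*retransmit)(struct segment *))
    {
        struct segment *head = c->retx_queue;

        if (head == NULL || difftime(now, head->sent_at) < c->rto)
            return;
        for (struct segment *s = head; s != NULL; s = s->next)
            if (difftime(now, s->sent_at) >= c->rto) {
                retransmit(s);
                s->sent_at = now;
            }
    }

    /* Called when an acknowledgement arrives: drop acknowledged segments
       off the head of the queue.  The next unacknowledged segment's timer
       is just its own sent_at, so "resetting the timer" falls out of the
       bookkeeping for free. */
    void ack_received(struct connection *c)
    {
        while (c->retx_queue != NULL && c->retx_queue->acked)
            c->retx_queue = c->retx_queue->next;
    }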
-----------[000138][next][prev][last][first]----------------------------------------------------
Date:      15 Jun 1985 13:05:40 EDT
From:      MILLS@USC-ISID.ARPA
To:        CERF@USC-ISI.ARPA, zhang%erlang.DEC@DECWRL.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re: TCP Timer
In response to the message sent  15 Jun 1985 06:40-EDT from CERF@USC-ISI.ARPA

Vint,

The problem of estimating the mean of the roundtrip-delay random variable
was discussed in RFC-889. The so-called "RSRE algorithm," which we have
been using for several years, provides only a single sample per roundtrip
interval, which is appropriate for retransmission policies in which only the
first wadge on the retransmission queue is retransmitted. However, convergence
to a good mean-estimate is accelerated if you keep separate timers for
each segment originally transmitted and update the estimate with a new sample
as the ACK-sequence number passes by the first octet of the corresponding
segment, even if the segment timer isn't used for anything else. Under
conditions where many segments can be in flight, the estimate is very much
improved. The estimate can be improved further using a nonlinear smoothing
algorithm, as discussed in RFC-889.

All this horsepower was found necessary for a pair of fuzzballs to grumble
to each other via a transatlantic-cable link using a statistical multiplexor.
The delay variance on that circuit you wouldn't believe!

Dave
-------
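
A simplified sketch of the per-segment sampling described above (present-day
C, invented names; plain exponential smoothing stands in for the RFC-889
estimator, whose nonlinear smoothing differs in detail):

    #include <time.h>

    #define MAX_INFLIGHT 64
    #define ALPHA        0.125     /* smoothing gain, chosen arbitrarily here */

    struct rtt_estimator {
        double srtt;                         /* smoothed round-trip time, sec */
        struct {
            unsigned long   first_octet;     /* sequence number of 1st octet  */
            struct timespec sent_at;         /* original transmission time    */
            int             valid;
        } sample[MAX_INFLIGHT];
    };

    /* Record the transmission time of each segment when it is first sent
       (retransmitted copies give ambiguous samples and are not recorded). */
    void rtt_on_send(struct rtt_estimator *e, int slot,
                     unsigned long first_octet, struct timespec now)
    {
        e->sample[slot].first_octet = first_octet;
        e->sample[slot].sent_at     = now;
        e->sample[slot].valid       = 1;
    }

    /* When the ACK sequence number passes a segment's first octet, take a
       sample and fold it into the estimate.  (Sequence-number wraparound
       is ignored for brevity.) */
    void rtt_on_ack(struct rtt_estimator *e, unsigned long ack,
                    struct timespec now)
    {
        for (int i = 0; i < MAX_INFLIGHT; i++) {
            if (!e->sample[i].valid || ack <= e->sample[i].first_octet)
                continue;
            double rtt = (now.tv_sec  - e->sample[i].sent_at.tv_sec)
                       + (now.tv_nsec - e->sample[i].sent_at.tv_nsec) / 1e9;
            e->srtt = (e->srtt == 0.0) ? rtt
                                       : (1.0 - ALPHA) * e->srtt + ALPHA * rtt;
            e->sample[i].valid = 0;
        }
    }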
-----------[000139][next][prev][last][first]----------------------------------------------------
Date:      15 Jun 85  1702 PDT
From:      Joe Weening <JJW@SU-AI.ARPA>
To:        TCP-IP@SRI-NIC.ARPA
Subject:   Re: Doing the Right Thing
    From: HOSTMASTER@SRI-NIC

    ... version (#458) reverted the name field to the following format:

    ... :domain name (if any), official name.arpa, official name, nickname: ...

    From: Mike Muuss <mike@BRL.ARPA>

    At such time as the NIC reorders the tables to have the full (new) domain
    name of the host FIRST, our table-builder will break again, but hopefully we
    will have transitioned to using the domain servers by then, and have rid
    ourselves of our .ARPA tables altogether!

Consider yourselves broken.  (Look at the PURDUE.EDU entries in table #458.)
The fix to use the NIC table correctly is absolutely trivial on most systems:
just make sure "." is a legal character in host names, use the first name in
the NIC table as the official name, and allow others as nicknames.  It is
even easier than the kludge method, because no special handling of the
string ".ARPA" is needed.  Most of us did this back in late 1983 when we
were asked to.

On the issue of nicknames in the ARPA domain: if you use the host table,
there are (almost) none, not counting the non-domain nicknames, but if you
query the official servers it appears that all of the non-domain nicknames
have been turned into nickname.ARPA forms.  This is unfortunate; it would
be better to have the table and the name servers agree as long as the ARPA
domain remains.  (Which might be a while, since Milnet hosts still need to
set a date for their conversion.)  Presumably the nickname.ARPA forms have
not been added to the table because that would increase its size too much,
so I think they should be removed from the name server database as well.

						Joe
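
A minimal sketch of the table handling described above, assuming a
comma-separated name field like the one quoted from the NIC table (the
sample entry and function name are invented for illustration):

    #include <stdio.h>
    #include <string.h>

    #define MAXNAMES 8

    /* Split the name field on commas; "." is treated as an ordinary
       host-name character, so no special ".ARPA" handling is needed.
       names[0] is the official name, the rest are nicknames. */
    int parse_name_field(char *field, char *names[], int max)
    {
        int n = 0;
        for (char *tok = strtok(field, ", "); tok != NULL && n < max;
             tok = strtok(NULL, ", "))
            names[n++] = tok;
        return n;
    }

    int main(void)
    {
        char  field[] = "FOO.EDU, FOO-VAX.ARPA, FOO-VAX, FOO";  /* made up */
        char *names[MAXNAMES];
        int   n = parse_name_field(field, names, MAXNAMES);

        printf("official: %s\n", names[0]);
        for (int i = 1; i < n; i++)
            printf("nickname: %s\n", names[i]);
        return 0;
    }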

-----------[000141][next][prev][last][first]----------------------------------------------------
Date:      15 Jun 85 17:26 EDT
From:      Rudy.Nedved@CMU-CS-A.ARPA
To:        Mike Muuss <mike@BRL.ARPA>
Cc:        Mark Crispin <MRC@simtel20.ARPA>, tcp-ip@sri-nic.ARPA
Subject:   Re: Doing the Right Thing
Mike,

You would have had more problems at least with CMU sites if it was
not for the policy of being "liberal" with the garbage we get. Our
mail receivers "correct" or "improve" return paths. In the case of
BRL, we are always adding the canonical host name of the connecting
host to the return path under the assumption BRL is yet another mail
relay forgetting to add its name to the relayed mail it is
handling...

It should not be the case that nickname.ARPA is a valid name, and it is an
unintentional side effect that nickname.ARPA works under the domain
system...you will run into problems with places like CMU that will 1)
flush all existing nicknames, 2) have the NIC give current canonical
names like CMU-CS-A.ARPA CNAME records pointing to A.CS.CMU.EDU, and 3)
change our official names to A.CS.CMU.EDU.

For some reason, it really sounds like BRL went in and hacked their
host table so nickname.ARPA would exist because their software simply
appends .ARPA to the end of a user-supplied name before looking it
up. If this is the case and you are hoping that the domain system
will further this hack...you are going to be in a big mess
soon....look at Symbolics, CMU, Rochester, Berkeley, Rice,
Purdue, CSNET.....

-Rudy
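
A sketch of the kind of heuristic being criticized here (the stand-in
table and all names are invented): append ".ARPA" to whatever the user
typed before the lookup.  It works only while every host lives directly
under .ARPA; a name such as A.CS.CMU.EDU becomes A.CS.CMU.EDU.ARPA and
fails.

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a real host-table search. */
    static int lookup_host(const char *name)
    {
        return strcmp(name, "EXAMPLE-VAX.ARPA") == 0;
    }

    /* The kludge: blindly paste ".ARPA" onto the user's input. */
    static int kludge_lookup(const char *user_name)
    {
        char full[256];
        snprintf(full, sizeof full, "%s.ARPA", user_name);
        return lookup_host(full);
    }

    int main(void)
    {
        printf("EXAMPLE-VAX  -> %s\n",
               kludge_lookup("EXAMPLE-VAX") ? "found" : "not found");
        printf("A.CS.CMU.EDU -> %s\n",
               kludge_lookup("A.CS.CMU.EDU") ? "found" : "not found");
        return 0;
    }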
-----------[000145][next][prev][last][first]----------------------------------------------------
Date:      17 Jun 1985 10:00:23 EDT (Monday)
From:      T. Michael Louden (MS W422) <louden@mitre-gateway>
To:        tcp-ip@sri-nic
Cc:        louden@mitre-gateway
Subject:   TCP/IP on Hyperchannel
Can anyone give me some information on TCP/IP over a local Hyperchannel?
I would like to know what data rates a user file transfer could reasonably
expect to see.
Additional information on interfaces and configurations would also be
useful.

Thanks for any help!
Mike Louden
Louden@MITRE

-----------[000146][next][prev][last][first]----------------------------------------------------
Date:      17 Jun 1985 10:14:46 EDT
From:      MILLS@USC-ISID.ARPA
To:        robert@UCL-CS.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, MILLS@USC-ISID.ARPA
Subject:   Re:  Retransmission policies
In response to your message sent      Mon, 17 Jun 85 9:27:20 BST

Robert,

Yes, you do have a point. It's interesting that the issue of how big to
make a glob of data impacts performance so critically in both the
initial packetization (viz my earlier comments) and repacketization
policies. It would be instructive to test your TP-4 implementation with
respect to these policies, in view of the constraints of the protocol.

Dave
-------
-----------[000148][next][prev][last][first]----------------------------------------------------
Date:      Mon 17 Jun 85 11:30:25-EDT
From:      Vince.Fuller@CMU-CS-C.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   FTP protocol question
I recently received a request to modify the TOPS-20 FTP server to selectively
accept/reject the "SITE xxx" command based on the parameter, "xxx". In particular,
Unix systems are evidently sending "SITE UNIX" to do something special which the
TOPS-20's aren't doing. My question is: how should an FTP server handle the
"SITE" command? A quick read of RFC765 mentions SITE, but that's about it - it
doesn't say what the command should do.

	--Vince
-------
-----------[000149][next][prev][last][first]----------------------------------------------------
Date:      17 Jun 85 14:36 PDT
From:      Tom Perrine <tom@LOGICON.ARPA>
To:        inco@ucs-isid, tcp-ip@sri-nic
Cc:        Tom Perrine <tom@logicon>
Subject:   TCP/IP for System V (responses)
About a week ago I asked about a TCP/IP for System V being offered by a
company called Uniq. Here are the responses I received:

<<<<11111111111111111111111111111111111111111111111>>>>
Date: 11 Jun 1985 17:29:02 EDT
From: INCO@USC-ISID.ARPA
Subject: TCP/IP for Version 5
To:   tom@LOGICON.ARPA
     The only other thing I heard about that project was that it was
supposed to support the original version of DoD HFP created for the
WWMCCS project some time ago, and that it had a series of specialized
performance and timing tools, although no specifics were given to
me.  I would be interested in finding out any more that you
discover on it, since I am working on an R&D on that very subject.
Thanks.

Steve Sutkowski
Inco at Usc-Isid
------
<<<<222222222222222222222222222222222222222222222>>>>
Date: Wed, 12 Jun 85 21:06:30 edt
From: bellcore!sabre!martin@Berkeley (Martin J Levy)
To: tom@logicon.ARPA
Subject: re : passage

look at excelans  stuff, i think it's faster.
we have that up on 2 5.0 vaxen.

martin
------
<<<<33333333333333333333333333333333333333333333333333>>>>
Date:     Thu, 13 Jun 85 7:27:09 CDT
From:     Linda Crosby <lcrosby@ALMSA-1>
To:       Tom Perrine <tom@logicon.arpa>
cc:       JGregory@ALMSA-1
Subject:  Re:  Passage TCP/IP for UNIX Sys V

Tom,

1. We are in the process of procuring PASSAGE to be used on our VAXen
with AT&T UNIX System V.  I have passed a copy of your message to
our team member who has the most knowledge of PASSAGE, Jim Gregory.
He should be contacting you in a few days.

2. Also, you should contact Grace Avallone (GAVALLON@CECOM-2). The 
CECOM-2 machine is using System V with PASSAGE.

Linda J. Crosby
Technical Liaison
ALMSA
(LCROSBY@ALMSA-1)
-----
<<<<444444444444444444444444444444444444444444444444444>>>>
Date:     Fri, 14 Jun 85 10:09:07 CDT
From:     James Gregory <jgregory@ALMSA-1>
To:       tom@logicon.arpa
cc:       lcrosby@ALMSA-1
Subject:  Passage TCP/IP for UNIX Sys V

Tom,

We have been aware of the UNIQ effort for quite some time now.  Our contact
at the Army Communications Electronics Command, Grace Avallone (Avallone@
CECOM-1.ARPA), informed us about the middle of May that they are successfully
using PASSAGE on their system.  (I believe it's a VAX 11/780.)  The UNIQ
implementation, as I recall, was based on a version that was produced under
government contract.  UNIQ modified it to run on VAXen.  

UNIQ Corporation conducts business a little differently than I'm used to.  In
the fall of 1984, PASSAGE was a $3,000 package; in May, the cost was $15,000.
In the fall of 1984, maintenance was $30 a month per copy of PASSAGE; now it's
$5,400 for one year of maintenance for two copies.  Source is no longer
provided, thereby creating a dependency on a vendor that, if
my sources are correct, has a very volatile pricing strategy.  (It will be
interesting to see what their maintenance costs will be if and when we
renew it.)  At any rate, compared to the TCP/IP which is an integral part of
the BSD 4.2 system which is available for licensed users for a distribution
cost only (about $1000), UNIQ's software does not appear to be any kind of
bargain.  If you MUST run TCP/IP on a UNIX V VAX, however, it seems to
be the only act in town - at the moment.

I'm not sure how reliable this information is today; it's based on
information I received elsewhere within the Army within the last two weeks
(following their discussion with the vendor).  Suffice it to say that you
would be wise to consider the business ethics of the company before doing 
business with them.  (Incidentally, yes, it looks like we'll be buying in -
but, I'm not very happy about it.)

Jim Gregory
Project Leader
Data Communication Systems Division
Directorate for ADP Technology
US Army AMC ALMSA
<<<<<<<<<<<<<<<END OF INCLUDED MESSAGES>>>>>>>>>>>>>>>>>>>>>>>>>

Thanks to all who replied. If anyone has any other information, please
feel free to send it along.

Tom Perrine
Logicon - Operating Systems Division
San Diego, CA
(619) 455-1330 ext. 726

-----------[000151][next][prev][last][first]----------------------------------------------------
Date:      17 Jun 1985 12:58:45 EDT
From:      INCO@USC-ISID.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Cc:        protocols@RUTGERS.ARPA

     In order to propose the TAC as an approach to a project design
issue, I need to inquire whether anyone has (or knows who has) information
on TAC capacity (device interconnectivity), data rates, measurement/
timing data, performance, etc.  Any and all information would be
appreciated.  Thank you.

Steve Sutkowski
Inco at Usc-Isid
-------
-----------[000152][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Jun 85 13:32:01 EDT
From:      Mike Muuss <mike@BRL.ARPA>
To:        Rudy.Nedved@cmu-cs-a.ARPA
Cc:        Mike Muuss <mike@BRL.ARPA>, Mark Crispin <MRC@simtel20.ARPA>, tcp-ip@sri-nic.ARPA
Subject:   Re:  Doing the Right Thing
Actually, if you look carefully, you will notice that BRL is already
registered as a second-level domain (BRL.MIL), and we are hard at
work trying to do the *right thing* in the new context.  This does
not prevent us from having to continue to exist in the current environment
for a few more weeks until we transition to the new version of our
software.

The real problem was not that the MMDF software itself didn't handle
domains properly (it seems to), but instead that one lousy program
which converted the NIC table into some internal tables (used in
lieu of a domain server) was (and still is) somewhat demented.

Looking forward to newer forms of suffering (but at higher levels)
with domain servers...
	-Mike
-----------[000153][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Jun 85 14:08:19 EDT
From:      Mike Muuss <mike@BRL.ARPA>
To:        "T. Michael Louden" <louden@mitre-gateway.ARPA>
Cc:        tcp-ip@sri-nic.ARPA, louden@mitre-gateway.ARPA
Subject:   Re:  TCP/IP on Hyperchannel
Creon Levitt and Eugene Myia at NASA-AMES did a fairly complete set of
tests on the Hyperchannel;  they will probably send you a copy of it
if you like.

Locally, between a 780 and a 750, we see data rates on the order
of 80 Kbytes/sec of user->user data, which is similar to our other
interfaces (ethernet, etc).

Of course, for us the choice of Hyperchannel for that particular room
was necessitated by having to talk to a Cyber 750 running NOS2.
The fact that the VAXen can talk amongst themselves over the Hyperchannel
is incidental.

If you are looking for something REALLY FAST to interconnect just
minis and super-minis, try the 80 Mbit PRONET;  much cheaper than
Hyperchannel.

	Best,
	 -Mike
-----------[000154][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Jun 85 9:27:20 BST
From:      Robert Cole <robert@ucl-cs.arpa>
To:        MILLS@usc-isid.arpa
Cc:        tcp-ip@sri-nic.arpa
Subject:   Re:  Retransmission policies
Dave,
We do something similar here, except that when we re-transmit the head
of the queue (or any subsequent part) we make the packet size 200 bytes
of TCP data, since a large part of our troubles arise from missing IP
fragments.
The point to note is that each site may have to think about its own
situation and problems, then apply its own (unique) solution to this
problem.

Robert
(from across the pond).
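
A sketch of the retransmission slicing described above (invented
interfaces; not UCL's code): when a segment must be retransmitted, it is
sent again as a series of small segments so the IP layer need not
fragment them, and the loss of one fragment no longer costs the whole
original segment.

    #include <stddef.h>

    #define RETX_CHUNK 200          /* bytes of TCP data per retransmission */

    void retransmit_small(const unsigned char *data, size_t len,
                          unsigned long seq,
                          void (*send_segment)(unsigned long seq,
                                               const unsigned char *p,
                                               size_t n))
    {
        size_t off = 0;
        while (off < len) {
            size_t n = (len - off > RETX_CHUNK) ? RETX_CHUNK : len - off;
            send_segment(seq + off, data + off, n);
            off += n;
        }
    }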


-----------[000156][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Jun 85 17:20 EDT
From:      "John G. Ata" <Ata@RADC-MULTICS.ARPA>
To:        Vince.Fuller@CMU-CS-C.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: FTP protocol question
This command is supposed to send parameters that are system specific,
but necessary for the file transfer.  If you don't handle it, you
probably should send back a 202 or 502 (Command not implemented).

                    John G. Ata
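
A sketch of that behavior (not any particular server's code; the dispatch
is invented): a server with no system-specific SITE parameters can simply
answer 502 and carry on.

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>            /* strcasecmp (POSIX) */

    void handle_command(FILE *ctrl, const char *verb, const char *arg)
    {
        if (strcasecmp(verb, "SITE") == 0) {
            /* We implement no SITE parameters on this system. */
            fprintf(ctrl, "502 SITE %s not implemented.\r\n", arg ? arg : "");
        } else {
            /* ... dispatch the remaining FTP verbs here ... */
            fprintf(ctrl, "500 Unknown command.\r\n");
        }
    }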
-----------[000158][next][prev][last][first]----------------------------------------------------
Date:      17 Jun 1985 2108-PDT (Monday)
From:      fouts@AMES-NAS.ARPA (Marty)
To:        Ron Natalie <ron@BRL.ARPA>
Cc:        "T. Michael Louden" <louden@MITRE-GATEWAY.ARPA>, tcp-ip@SRI-NIC.ARPA, louden@MITRE-GATEWAY.ARPA, Mike Muuss <mike@BRL.ARPA>
Subject:   Re:  TCP/IP on Hyperchannel
     Actually, we have some more experience at NASA now, and aren't
completely convinced that the PI13(14)/VAX is the biggest bottleneck.

     I'm seeing some pretty horrible numbers when I make a Cray 2 pump
data onto the floor, and I'm pretty sure it's not the Cray, but I still
don't know what it is.

Marty

----------
-----------[000160][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Jun 85 19:27:51 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Mike Muuss <mike@BRL.ARPA>
Cc:        Rudy.Nedved@CMU-CS-A.ARPA, Mike Muuss <mike@BRL.ARPA>, Mark Crispin <MRC@SIMTEL20.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Doing the Right Thing
Jeez, end of subject please.  Yes, the program at BRL was doing the
wrong thing.  You get screwed when you make shortcuts like this.  Same
thing happened to nearly every 4.2 site when the host BRL-ZAP was
added.  They wrote grammar rules to parse the table that were wrong, and
the entry for that host was legal in the table spec, but caused the
program to blow up.  Guess who got all the complaints?   Right, BRL and
the NIC.  The code probably never did get fixed at most sites as BRL
modified their entry to comply with the 4.2 pinhead view on what the
table looked like.

Let's just move along to implementing the next specification (domains,
such as they are), regarding this as a lesson to be learned.

-Ron
-----------[000161][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17 Jun 85 19:30:28 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Mike Muuss <mike@BRL.ARPA>
Cc:        "T. Michael Louden" <louden@MITRE-GATEWAY.ARPA>, tcp-ip@SRI-NIC.ARPA, louden@MITRE-GATEWAY.ARPA
Subject:   Re:  TCP/IP on Hyperchannel
I should point out that it is NASA's conviction that the speed limitations
on the numbers they came up with are a result of the PI-13 interface to the
PDP-11.  I don't know if this is true, but it wouldn't surprise me.  The
interface is a pain to deal with and the whole hyperchannel system is amazingly
temperamental considering the small size of the system here and the high price
we paid for it.

-Ron
-----------[000166][next][prev][last][first]----------------------------------------------------
Date:      Mon, 17-Jun-85 23:41:32 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: rfc948

From: ihnp4!houxm!hounx!bear@BERKELEY

-----------[000168][next][prev][last][first]----------------------------------------------------
Date:      18 Jun 1985 04:22-EDT
From:      CERF@USC-ISI.ARPA
To:        INCO@USC-ISID.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, protocols@RUTGERS.ARPA
Steve,

BBN will certainly be glad to supply you with information about
the TAC.  The usual C/30 configuration can support up to 63
terminals and individual data rates up to 19.2 kb/s - any one
TAC, however, could not support all 63 devices operating at that
rate.

I would also suggest that you investigate BBN's newer line of
equipment based on the MC68000; this line is called the C/10.  I
do NOT know whether versions are available to operate as a DDN
TAC (versus X.25 PAD).

Vint Cerf
-----------[000169][next][prev][last][first]----------------------------------------------------
Date:      18 Jun 85 09:51:25 CDT (Tue)
From:      ihnp4!houxm!hrpd3!burns@Berkeley
To:        houxm!ihnp4!ucbvax!tcp-ip@Berkeley
Subject:   Re: TCP/IP on Hyperchannel
Could you please send me a copy of your replies.

Derrick Burns



-----------[000170][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Jun 85 11:19 EDT
From:      "J. Spencer Love" <JSLove@MIT-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP/IP on Hyperchannel
There is an implementation of TCP/IP via the Hyperchannel for Multics,
which is used as an in-machine-room local area network between 4 Multics
systems in the Pentagon.  By setting the window size to 50000 and the
packet size to 5000, we were able to get FTP rates on a single
connection as high as 275,000 bits per second.  These large buffers and
packet sizes are not a problem for Multics, but we had to special case
the window size for our multi-homed test site, since many implementations
on the ARPAnet do strange and bizarre things when given huge windows.
(Reply to me if you want a somewhat more detailed description).

Spitting data straight through the network we were only able to unload
800,000 bits per second (with no protocol).  This is a tiny fraction of
the raw bandwidth of the network.  We blame the problem on the hardware
interface design, which is amazingly brain damaged.  Given this
performance, I would recommend practically any other vendor who has an
appropriate interface card for your machine; you'll spend a whole lot
less money and get much better service.
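
For comparison, a present-day Berkeley-sockets analogue of asking for a
large window (this is not the Multics code being described): on
4.2BSD-style systems the advertised TCP window is derived from the socket
receive buffer, so requesting roomy buffers is the nearest equivalent of
the 50000-octet window mentioned above.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int make_big_buffered_socket(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int bufsize = 50000;        /* matches the window size quoted above */

        if (s < 0)
            return -1;
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof bufsize) < 0 ||
            setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof bufsize) < 0)
            perror("setsockopt");   /* not fatal: fall back to the defaults */
        return s;
    }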
-----------[000171][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Jun 85 16:20:27 pdt
From:      sun!sunshine!dbercel@Berkeley (Danielle Bercel)
To:        tcp-ip@sri-nic.ARPA
Subject:   Mailing list
Please remove my name from the tcp-ip mailing list

danielle bercel
-----------[000172][next][prev][last][first]----------------------------------------------------
Date:      18 Jun 85 16:54 PDT
From:      Tom Perrine <tom@LOGICON.ARPA>
To:        BostonU SysMgr <root%bostonu.csnet@csnet-relay>, tcp-ip@sri-nic
Cc:        Tom Perrine <tom@logicon>
Subject:   TCP/IP for Sys V (new responses)
Since my first posting of responses concerning TCP/IP for System V, I have
received the following additional information:

<<<<11111111111111111111111111>>>>
Date: 17 Jun 1985 17:43:56 PDT
From: POSTEL@USC-ISIF.ARPA
Subject: re: TCP and System V Unix
To:   tom@LOGICON.ARPA
cc:   postel@USC-ISIF.ARPA

Tom:

Try calling Bob Epley at ATT Info Systems 312-979-7587.

--jon.
<<<<2222222222222222222222222>>>>
Date: 18 Jun 1985 0921-EDT (Tuesday)
From: jas@proteon.arpa
Subject: Sys V TCP/IP
To: tom@logicon.ARPA

Network Research has several TCP/IP's, sold under the
Fusion product name. One of them is for System V,
including release 2. They have a VAX UNIX version on their price list
at $3000. They support several flavors of Ethernet boards, and would
probably support proNET (10 & 80) if you prodded them hard enough.
They use the 4.2bsd programming interface, but NO 4.2 code. They
appear to be proud of their TCP. The primary user utilities are
FTP and TELNET. They are missing all the little goodies like
ping, tftp, finger, nicname, etc., but you could always pirate
the 4.2 code in the interim.

They also have XNS versions, for those of you with independent tastes.
Their phone number is 805-885-2700.

I have no hands-on experience with this code yet.
John Shriver, @proteon
<<<<<333333333333333333333333>>>>>
I also received the following info from Marian Jacob at AT&T (by phone):
There is a TCP/IP available from the Wollongong Group (or however you spell
it) for System V. AT&T is planning to provide this (or maybe some other
vendor's system) for System V "Real Soon Now". AT&T has this code running
somewhere at the "Labs". It will be available for the 3B series.
<<<<<<<<<<<<<<<END>>>>>>>>>>>>>

Thanks to all respondents. I now know more  about TCP/IP for System V
than I ever wanted to know :-).

Thanks again,
Tom Perrine
Logicon - OSD
San Diego CA
(619) 455-1330 ext. 726

-----------[000173][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Jun 85 14:27 EDT
From:      "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   hyperchannel
I have run extensive performance analysis on Hyperchannel connected to
Honeywell equipment.  It goes real fast as a point-to-point stream, and
blows its brains out as soon as there is significant volume of packet
traffic.  I had to add a home-grown flow-control/retry protocol under IP
to get anything close to rated speed even from IP, due to collisions of
acks and data.  I strongly dis-recommend it.
-----------[000175][next][prev][last][first]----------------------------------------------------
Date:      18 Jun 1985 1753-PDT (Tuesday)
From:      fouts@AMES-NAS.ARPA (Marty)
To:        "J. Spencer Love" <JSLove@MIT-MULTICS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP/IP on Hyperchannel
     I have also seen a maximum of 800,000 bits per second, in this case
transferring data from a Cray 2 onto the floor.

----------
-----------[000177][next][prev][last][first]----------------------------------------------------
Date:      18 Jun 1985 16:58-CDT
From:      REINING@USC-ISIE.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Cc:        Reining@USC-ISIE.ARPA
Subject:   Mailing List
Please remove me from the tcp-ip mailing list

Rodney J Reining
Reining@isie

-----------[000180][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Jun 85 20:17:27 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        control@bbna.ARPA, lmaybaum@ddn1.ARPA
Cc:        tcp-ip@sri-nic.ARPA
Subject:   Broken BBN Gateway
Can something be done to repair the BBN-MINET-A-GWY?  For months now
it has been generating packets with bad checksums.  I informed
BBN of this two months ago, but so far the problem has not been
resolved.  This puts BRL in a bad place since defects in the gateway
protocols for the BBN gateways cause every packet for our networks that
comes through the mail bridges to be routed through this gateway.

-Ron
-----------[000184][next][prev][last][first]----------------------------------------------------
Date:      Tue, 18 Jun 85 22:36:40 edt
From:      cperry@mitre (Chris Perry)
To:        mills@dcn6.ARPA, tcp-ip@sri-nic.ARPA
Subject:   Re:  The night the clocks stopped
Dave,
"The night the clocks stopped" is obviously going to be a chapter
in your Book.  Better tell Padlipsky he's got competition.
Just reading your description of BBN-META's mischief makes
my Vulcan blood boil.
Chris
-----------[000187][next][prev][last][first]----------------------------------------------------
Date:      19 Jun 85 08:29:43 EDT (Wed)
From:      ulysses!bentley!mmj@Berkeley (M Jacob)
To:        ulysses!ucbvax!tcp-ip@Berkeley
Subject:   Re: TCP/IP for System V (responses)
The Wollongong Group's TCP/IP works with System V Release 2 or later.
-----------[000189][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Jun 85 10:01:00 edt
From:      ukma!david@anl-mcs (David Herron, NPR Lover)
To:        anlams!tcp-ip@sri-nic.ARPA (tcp-ip@sri-nic.arpa)
Subject:   Re: TCP/IP on Sys V (new responses)
The TCP/IP code that AT&T runs internally is a port of 3Com's
UNET code that was done by Steve Bellovin (smb@ulysses).  But
the only license they have is for internal use.  (They're supposedly
negotiating for a license to distribute the code ...)

It runs on the 3B machines also ....

It's nice to know that people are making TCP/IP packages for System V 
on a Vax, but that doesn't help us.   We run 4.2 on our Vax, and will
be running that on our uVaxIIen when they arrive.  So we've already
got that stuff ... it's on the 3B machines, the 2 Intel 310's, and
the s100 based 68000 box we're building .... they will all be on
the ethernet too and need to have compatible software.

And 3Bnet just don't cut it!

	David Herron
	cbosgd!ukma!david
	ukma!david@anl-mcs.arpa


-----------[000190][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Jun 85 10:04:21 EDT
From:      Bob Clements <clements@BBNCCQ.ARPA>
To:        CERF@usc-isi.arpa
Cc:        INCO@usc-isid.arpa, tcp-ip@sri-nic.arpa, protocols@rutgers.arpa, clements@BBNCCQ.ARPA
Vint,

My understanding (NOT a BBNCC official statement!) is that the C/10
does not today support TCP/IP/TELNET, only X.25.  I think the reason
is simply that there hasn't been customer demand for it. If there were
such a demand, a TCP product could probably be whipped up.

/Rcc

-----------[000191][next][prev][last][first]----------------------------------------------------
Date:      19 Jun 1985 10:35:16 EDT
From:      PADLIPSKY@USC-ISI.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Flow Control Madness
   The msg from JSLove alluding to misbehavior in the face of large
windows reminds me of the time during NCP days when those of us then
at Multics discovered we could crash the TIPs by the simple
expedient of sending maximum ALLocates.  (Something to do with their
assuming nobody would send the whole field's worth of 1-bits and only
allowing about half that many bits for storing the ALL--hence overwriting
some table entry with the other half, as I recall.)  (Being good sports,
we made our NCP halve its ALLocations.)
   If something similar is happening in present, allegedly
enlightened times, I'd like to see it documented for the whole list
to see (but Brother Love can feel free to send it to me and leave
it to my alleged discretion as to whether to pass it on to everybody
if he prefers).
   Reminiscent cheers, map
-------
-----------[000192][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Jun 85 11:23:17 edt
From:      mark%cbosgd.ATT.UUCP@Berkeley (Mark Horton)
To:        tcp-ip@Berkeley
Cc:        bellcore!sabre!martin@Berkeley
Subject:   Re: TCP/IP for System V (responses)
It is certainly not true that UNIQ is the only game in town for TCP/IP
for a System V VAX.  Here are some others:

Excelan: we tried their product and gave up in disgust.  The guy who
wrote it tells me it will only run on a VAX 750; I can vouch that
it does NOT work adequately on a 785.  It came up far enough to show
that the list of obvious bugs that should have been caught immediately
is quite long.  Cost: $5000 for the software, plus you have to
buy their board.  (TCP/IP runs on an 80186 in the board, which offloads
the host but creates a bottleneck communicating with the host via the
Unibus, and makes a gateway impossible.)  The code is based on Berkeley
4.1aBSD, which is not very close to 4.2.

The Wollongong Group: we wound up being an unintentional beta test site.
Their stuff works (although installation is nontrivial) and has a few
rough edges.  They tell me the final product (which we don't have yet) 
cleans up these problems.  Cost: an outrageous $15,000 for the first machine
and $6,000 for each additional machine at the same geographic site.
(This is probably why Uniq raised their prices suddenly.)  The code
is based on Berkeley 4.2 and seems pretty compatible.  rwho even works,
complete with a load average!

CMC: I don't know much about them, as I have never met a customer.
Their boards are like Excelan's but use a 68K on the board.  Price
is about $4,000 for hardware and software combined.  Excelan sounded
this good a year ago, however, so caution is advised.

3Com: we have their UNET code ported to System V in AT&T; it runs on
Vaxen as well as various 3B's.  3Com has discontinued this product,
and the AT&T port is not available on the outside (and probably never
will be; it works, but Berkeley's is much better), so you can't get it.
-----------[000194][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19 Jun 85 18:00 EST
From:      Chris Johnson <johnson%northeastern.csnet@csnet-relay.arpa>
To:        tcp-ip@sri-nic.ARPA
Cc:        johnson%northeastern.csnet@csnet-relay.arpa
Subject:   native vax tcp-ip
      Has anyone heard of, or does anyone know where I might get information
about, any implementations of TCP/IP for VAX/VMS that AREN'T in C --
say VAX native macro, or even FORTRAN if necessary?  Although
I like C, I don't have C and am not likely to get C until well beyond
something freezes and thaws again several times (probably the budget).

                 *help*
-----------[000195][next][prev][last][first]----------------------------------------------------
Date:      Wed, 19-Jun-85 20:09:23 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: TCP/IP on Sys V (new responses)

From: ukma!david@anl-mcs (David Herron, NPR Lover)

The TCP/IP code that AT&T runs internally is a port of 3Com's
UNET code that was done by Steve Bellovin (smb@ulysses).  But
the only license they have is for internal use.  (They're supposedly
negotiating for a license to distribute the code ...)

It runs on the 3B machines also ....

It's nice to know that people are making TCP/IP packages for System V 
on a Vax, but that doesn't help us.   We run 4.2 on our Vax, and will
be running that on our uVaxIIen when they arrive.  So we've already
got that stuff ... it's on the 3B machines, the 2 Intel 310's, and
the s100 based 68000 box we're building .... they will all be on
the ethernet too and need to have compatible software.

And 3Bnet just don't cut it!

	David Herron
	cbosgd!ukma!david
	ukma!david@anl-mcs.arpa

-----------[000196][next][prev][last][first]----------------------------------------------------
Date:      19 Jun 1985 2011-EDT (Wednesday)
From:      jas@proteon.arpa
To:        tcp-ip@sri-nic.arpa
Subject:   Network Research
Oops, I blew that phone number for Network Research. In mercy to the
wrong number, their correct number is 213-394-2700.
-------

-----------[000197][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20-Jun-85 02:12:16 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Network Research

From: jas@proteon.arpa

Oops, I blew that phone number for Network Research. In mercy to the
wrong number, their correct number is 213-394-2700.
-------

-----------[000198][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20-Jun-85 03:30:18 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   native vax tcp-ip

From: Chris Johnson <johnson%northeastern.csnet@csnet-relay.arpa>

      Has anyone heard of, or does anyone know where I might get information
about, any implementations of TCP/IP for VAX/VMS that AREN'T in C --
say VAX native macro, or even FORTRAN if necessary?  Although
I like C, I don't have C and am not likely to get C until well beyond
something freezes and thaws again several times (probably the budget).

                 *help*

-----------[000199][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20 Jun 85 18:11:21 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Chris Johnson <johnson%northeastern.csnet@CSNET-RELAY.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, johnson%northeastern.csnet@CSNET-RELAY.ARPA
Subject:   Re:  native vax tcp-ip
That's OK, you need not get the source.

-Ron
-----------[000200][next][prev][last][first]----------------------------------------------------
Date:      Thu, 20-Jun-85 22:21:35 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  native vax tcp-ip

From: Ron Natalie <ron@BRL.ARPA>

That's OK, you need not get the source.

-Ron

-----------[000201][next][prev][last][first]----------------------------------------------------
Date:      21 Jun 1985 05:51-EDT
From:      CERF@USC-ISI.ARPA
To:        JSLove@MIT-MULTICS.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TCP/IP on Hyperchannel

My glancing exposure to Hyperchannel some years ago left me
with the impression that the 50 Mbit channel had some built-in
bus contention and handshaking logic which made its maximum
data rate a function of the physical length of the channel
(handshaking delays limit access frequency, etc.).

This style of operation can, indeed, leave one with much
less effective bandwidth from any one source than one would
be led to expect from the burst rate of the channel.

Vint Cerf
-----------[000202][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 06:26:16 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: TCP/IP on Hyperchannel

From: CERF@USC-ISI.ARPA


My glancing exposure to Hyperchannel some years ago left me
with the impression that the 50 Mbit channel had some built-in
bus contention and handshaking logic which made its maximum
data rate a function of the physical length of the channel
(handshaking delays limit access frequency, etc.).

This style of operation can, indeed, leave one with much
less effective bandwidth from any one source than one would
be led to expect from the burst rate of the channel.

Vint Cerf

-----------[000203][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Jun 85 10:18:27 CDT
From:      Jerry Morence <morence@Almsa-2>
To:        tcp-ip@Sri-Nic.arpa
Cc:        cerf@Usc-Isi.arpa, louden@Mitre-Gateway.arpa, jslove@Mit-Multics.arpa, mike@Brl.arpa, ron@Brl.arpa, fouts@Ames-Nas.arpa
Subject:   tcp/ip on hyperchannel
Mike:

We have had the hyperchannel in production since 1983, connecting four IBM 
4341/4381 S/370 MVS systems in a private local network.  This configuration
is installed at a total of six sites using our (ALMSA, St. Louis, Mo.) own
developed software.  We do not use TCP/IP.

The speed of data interchange among all these super hosts has been limited
only by the channel speed of the slowest host.  We are averaging at each
location approximately 1.5 megabytes of data per second across multiple hosts.
We have had two pairs of hosts carrying on data interchange concurrently with
each pair averaging the same high 1.5 megabytes per second equating to 3.0
megabytes (30 megabits) per second across the hyperchannel.

We are so satisfied with the performance of the hyperchannel and our software,
that we are investigating expansion of the local network and linking our six
sites together (possibly using Hyperlink).

Regards,
Jerry
-----------[000204][next][prev][last][first]----------------------------------------------------
Date:      21 Jun 1985 10:28:07 EDT
From:      PADLIPSKY@USC-ISI.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   New Slogan
   I thought of a new Slogan yesterday that might be of some use
to some readers of this list in some contexts:

It's really dumb to pay full price
For a partially filled bottle of snake oil

   (Has anybody out there come across a French expression
something like "l'esprit d'escalier"?  I think it's literally
"the wit of the staircase" and is used for situations where
you think of a nifty line belatedly....)

  cheers, map
-------
-----------[000205][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 11:07:04 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   New Slogan

From: PADLIPSKY@USC-ISI.ARPA

   I thought of a new Slogan yesterday that might be of some use
to some readers of this list in some contexts:

It's really dumb to pay full price
For a partially filled bottle of snake oil

   (Has anybody out there come across a French expression
something like "l'esprit d'escalier"?  I think it's literally
"the wit of the staircase" and is used for situations where
you think of a nifty line belatedly....)

  cheers, map
-------

-----------[000206][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 12:16:42 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   tcp/ip on hyperchannel

From: Jerry Morence <morence@Almsa-2>

Mike:

We have had the hyperchannel in production since 1983, connecting four IBM 
4341/4381 S/370 MVS systems in a private local network.  This configuration
is installed at a total of six sites using our (ALMSA, St. Louis, Mo.) own
developed software.  We do not use TCP/IP.

The speed of data interchange among all these super hosts has been limited
only by the channel speed of the slowest host.  We are averaging at each
location approximately 1.5 megabytes of data per second across multiple hosts.
We have had two pairs of hosts carrying on data interchange concurrently with
each pair averaging the same high 1.5 megabytes per second equating to 3.0
megabytes (30 megabits) per second across the hyperchannel.

We are so satisfied with the performance of the hyperchannel and our software,
that we are investigating expansion of the local network and linking our six
sites together (possibly using Hyperlink).

Regards,
Jerry

-----------[000207][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Jun 85 13:08:34 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Jerry Morence <morence@ALMSA-1.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, cerf@USC-ISI.ARPA, louden@MITRE-GATEWAY.ARPA, jslove@MIT-MULTICS.ARPA, mike@BRL.ARPA, ron@BRL.ARPA, fouts@AMES-NAS.ARPA
Subject:   Re:  tcp/ip on hyperchannel
I heard a rumor that TCP/IP runs faster over Hyperchannel than NETIX does.
Does someone else who has hyperchannel know how to deal with the Adapters
floating away, other than resetting them by hand?

-Ron
-----------[000208][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Jun 85 13:40:41 edt
From:      cperry@mitre (Chris Perry)
To:        PADLIPSKY@USC-ISI.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  New Slogan
someone in w-74 posted the second page of a recent iso draft std. on naming
and addressing; said page contained a lengthy quote from dr. seuss.  the
quote in effect paraphrased something from "through the looking glass" my
beginning fortran teacher used on me 17 years ago.  plus ca change...
but dr. seuss???
jcp
-----------[000209][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 14:10:18 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  tcp/ip on hyperchannel

From: Ron Natalie <ron@BRL.ARPA>

I heard a rumor that TCP/IP runs faster over Hyperchannel than NETIX does.
Does someone else who has hyperchannel know how to deal with the Adapters
floating away, other than resetting them by hand?

-Ron

-----------[000210][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21 Jun 85 14:22:59 EDT
From:      Michael Sueltenfuss <msuelten@bbn-noc3>
To:        ron@brl.arpa, tcp-ip@nic, lmaybaum@ddn1
Cc:        jburke@bbn-noc3, romash@bbn-noc3, hinden@bbn-noc3, mimno@bbn-noc3, tmallory@bbn-noc3, brescia@bbn-noc3, msuelten@bbn-noc3
Subject:   BBN-MINET-A-GWY
The BBN-MINET-A-GWY problem was fixed at 2100 on June 20.  The problem was
not with the gateway, but with the IMP the gateway is attached to.

We are sorry if the problem caused you any inconvenience and we apologize
for the length of time it took to resolve.            

If you have any problems or questions in the future, please don't hesitate
to call or send me a message.  I can be reached at (617) 497-3439 or
(617) 661-0100.

Sincerely,
Mike Sueltenfuss
-----------[000211][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 15:07:05 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  New Slogan

From: cperry@mitre (Chris Perry)

someone in w-74 posted the second page of a recent iso draft std. on naming
and addressing; said page contained a lengthy quote from dr. seuss.  the
quote in effect paraphrased something from "through the looking glass" my
beginning fortran teacher used on me 17 years ago.  plus ca change...
but dr. seuss???
jcp

-----------[000212][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 16:20:07 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   BBN-MINET-A-GWY

From: Michael Sueltenfuss <msuelten@bbn-noc3>

The BBN-MINET-A-GWY problem was fixed at 2100 on June 20.  The problem was
not with the gateway, but with the IMP the gateway is attached to.

We are sorry if the problem caused you any inconvenience and we apologize
for the length of time it took to resolve.            

If you have any problems or questions in the future, please don't hesitate
to call or send me a message.  I can be reached at (617) 497-3439 or
(617) 661-0100.

Sincerely,
Mike Sueltenfuss

-----------[000213][next][prev][last][first]----------------------------------------------------
Date:      21 Jun 1985 20:52-EDT
From:      CERF@USC-ISI.ARPA
To:        morence@ALMSA-1.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, louden@MITRE-GATEWAY.ARPA, jslove@MIT-MULTICS.ARPA, mike@BRL.ARPA, ron@BRL.ARPA, fouts@AMES-NAS.ARPA
Subject:   Re:  tcp/ip on hyperchannel
Jerry,

thanks for the report on hyperchannel - have things changed in the
last couple of years? Do you have a short bus (literally, how
many feet of backplane or whatever is used to implement the
channel)?

Is my perception of the handshaking delay being a function of
distance incorrect? I would like to clear up any misconception
I have or may have propagated.

thanks,

Vint
-----------[000214][next][prev][last][first]----------------------------------------------------
Date:      Fri, 21-Jun-85 21:34:45 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  tcp/ip on hyperchannel

From: CERF@USC-ISI.ARPA

Jerry,

thanks for the report on hyperchannel - have things changed in the
last couple of years? Do you have a short bus (literally, how
many feet of backplane or whatever is used to implement the
channel)?

Is my perception of the handshaking delay being a function of
distance incorrect? I would like to clear up any misconception
I have or may have propagated.

thanks,

Vint

-----------[000215][next][prev][last][first]----------------------------------------------------
Date:      Sat, 22-Jun-85 01:03:06 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: TCP/IP for System V (responses)

From: mark@cbosgd.ATT.UUCP (Mark Horton)

It is certainly not true that UNIQ is the only game in town for TCP/IP
for a System V VAX.  Here are some others:

Excelan: we tried their product and gave up in disgust.  The guy who
wrote it tells me it will only run on a VAX 750; I can vouch that
it does NOT work adequately on a 785.  It came up far enough to show
that the list of obvious bugs that should have been caught immediately
is quite long.  Cost: $5000 for the software, plus you have to
buy their board.  (TCP/IP runs on an 80186 in the board, which offloads
the host but creates a bottleneck communicating with the host via the
Unibus, and makes a gateway impossible.)  The code is based on Berkeley
4.1aBSD, which is not very close to 4.2.

The Wollongong Group: we wound up being an unintentional beta test site.
Their stuff works (although installation is nontrivial) and has a few
rough edges.  They tell me the final product (which we don't have yet) 
cleans up these problems.  Cost: an outrageous $15,000 for the first machine
and $6,000 for each additional machine at the same geographic site.
(This is probably why Uniq raised their prices suddenly.)  The code
is based on Berkeley 4.2 and seems pretty compatible.  rwho even works,
complete with a load average!

CMC: I don't know much about them, as I have never met a customer.
Their boards are like Excelan's but use a 68K on the board.  Price
is about $4,000 for hardware and software combined.  Excelan sounded
this good a year ago, however, so caution is advised.

3Com: we have their UNET code ported to System V within AT&T; it runs on
Vaxen as well as various 3B's.  3Com has discontinued this product,
and the AT&T port is not available on the outside (and probably never will
be; it works, but Berkeley's is much better), so you can't get it.

-----------[000216][next][prev][last][first]----------------------------------------------------
Date:      Sat, 22 Jun 85 05:18:29 pdt
From:      Brian Thomson <utcs!thomson@uthub>
To:        tcp-ip@utcs
Subject: Problems with 4.2 BSD TCP

I am having problems with half-open connections remaining after
one of two connected hosts crashes.  When that host reboots, it
attempts to reopen the connection that the other end thinks is
still in ESTABLISHED state.

Now, my copy of RFC 793 (of Sept. 81) argues convincingly that
the two TCPs should be able to sort this out, but between a pair
of 4.2BSD TCPs this doesn't happen.  Instead, the rebooted host
eventually times out, and the half-open host remains in
ESTABLISHED state.

It appears to me that the problem may lie in the file netinet/tcp_input.c,
where we find the code sequence:

>dropafterack:
>	/*
>	 * Generate an ACK dropping incoming segment if it occupies
>	 * sequence space, where the ACK reflects our state.
>	 */
>	if ((tiflags&TH_RST) ||
>	    tlen == 0 && (tiflags&(TH_SYN|TH_FIN)) == 0)
>		goto drop;

This is branched to under several conditions, the most interesting
being when an established connection receives a segment (eg. a
connection request) that is entirely outside its window.
The effect of the tests is that an ACK packet will be returned to
the originator of the funny segment, UNLESS that funny segment
contained a reset (TH_RST) OR contained no data and no SYN or FIN flags
(which do "occupy sequence space").
These ACKs have to be sent to recover from half-open connections,
but they aren't.  What seems to happen here is the connection request
contains no data (so tlen == 0) but does, ORIGINALLY, contain a TH_SYN.
The receiving TCP trims the segment to its receive window, discarding
the SYN, realizes there's nothing left, and branches to this point
to send an ACK.  Which doesn't get sent, because the modified segment
does not "occupy sequence space".

It appears that the "occupy sequence space" requirement for ACK generation
is contributing to, if not causing, the problem here.  What I don't
understand is where this requirement came from.  The RFC says

	If an incoming segment is not acceptable, an acknowledgment
	should be sent in reply (unless the RST bit is set, if so drop
	the segment and return)

So the first test is legit, but there is no requirement about it
occupying sequence space.  Can anyone justify the extra test before
I try removing it?
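
To make the tradeoff concrete, here is a minimal, self-contained sketch (the
helper names are invented for illustration; this is not the 4.2BSD kernel
code itself) contrasting the test in the excerpt above with the RFC 793 rule
quoted from the spec, for the case of a reconnecting SYN that has been
trimmed to nothing:

#include <stdio.h>

#define TH_FIN 0x01
#define TH_SYN 0x02
#define TH_RST 0x04

/* The 4.2BSD test at dropafterack: suppress the ACK when the segment
 * carried a reset, OR when (after window trimming) it has no data and no
 * SYN/FIN, i.e. it no longer "occupies sequence space". */
static int bsd42_sends_ack(int tiflags, int tlen)
{
	if ((tiflags & TH_RST) ||
	    (tlen == 0 && (tiflags & (TH_SYN | TH_FIN)) == 0))
		return 0;		/* dropped silently */
	return 1;			/* ACK is generated */
}

/* The behavior RFC 793 asks for: ACK every unacceptable segment unless
 * it carried a reset. */
static int rfc793_sends_ack(int tiflags, int tlen)
{
	(void)tlen;
	return (tiflags & TH_RST) ? 0 : 1;
}

int main(void)
{
	/* A rebooted host's SYN after the receiver has trimmed it to its
	 * old ESTABLISHED window: no data left, SYN bit already stripped. */
	int tiflags = 0, tlen = 0;

	printf("4.2BSD sends ACK: %d\n", bsd42_sends_ack(tiflags, tlen));
	printf("RFC 793 sends ACK: %d\n", rfc793_sends_ack(tiflags, tlen));
	return 0;
}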
-----------[000217][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24 Jun 85 16:37:07 EDT
From:      Marianne Gardner <mgardner@BBNCCY.ARPA>
To:        CERF@usc-isi.arpa
Cc:        mgardner@BBNCCY.ARPA, .../.msg.profile@usc-isi.arpa, tcp-ip@sri-nic.arpa
Subject:   Re: ARPANET/MILNET performance statistics
Vint,

Sorry to take so long to answer your message.  My question was not with the
interpretation of Dave's data, but with the fact that you only saw one
week's data.  I have been looking at throughput data for the mailbridges
every day for almost a year.  I saw a different picture.  In fact, the week
covered by Dave's data did show a rise in the proportion of traffic
going to MILISI.  Before that week MILARP, MILBBN, MILDCEC, and MILISI
all received about the same amount of traffic; MILLBL, MILSAC, and MILSRI
received less.  We saw MILISI receiving more than its share of traffic all
month, but last week the traffic distribution again looked even.  Such
fluctuations in traffic are common.  They are worth attention only when
they persist and cause problems.  

In any case, your memory increased the disparity in the traffic
distribution.  The weekly averages are given below.  The drop rates varied.
Sometimes they were as high as 4%, sometimes they were low.  We, at BBN,
are looking into this problem.

          DATAGRAMS RECEIVED PER SECOND, averaged over one week
                 
		6/2      6/8      6/16     6/23
    MILARP     7.21     6.59      5.95     6.17   
    MILBBN     9.09     9.82      8.71     9.02   
    MILDCE     7.62     8.24      7.12     8.79   
    MILISI     9.31    12.08     10.13     9.68   
    MILLBL     6.06     6.25      5.41     5.26   
    MILSAC     4.92     6.24      5.76     4.48   
    MILSRI     3.88     4.31      3.67     3.26   

Perhaps, the people at ISI, who were so good about admitting to their
penchant for cross-network ftping, will have an explanation for the extra
traffic last month.  Actually, the answer is more likely to come from
across the network, since the increase in MILISI's traffic was accompanied
by a drop in everyone else's traffic.

Marianne

-----------[000218][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24-Jun-85 18:15:16 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: ARPANET/MILNET performance statistics

From: Marianne Gardner <mgardner@BBNCCY.ARPA>

Vint,

Sorry to take so long to answer your message.  My question was not with the
interpretation of Dave's data, but with the fact that you only saw one
week's data.  I have been looking at throughput data for the mailbridges
every day for almost a year.  I saw a different picture.  In fact, the week
covered by Dave's data did show a rise in the proportion of traffic
going to MILISI.  Before that week MILARP, MILBBN, MILDCEC, and MILISI
all received about the same amount of traffic; MILLBL, MILSAC, and MILSRI
received less.  We saw MILISI receiving more than its share of traffic all
month, but last week the traffic distribution again looked even.  Such
fluctuations in traffic are common.  They are worth attention only when
they persist and cause problems.  

In any case, your memory increased the disparity in the traffic
distribution.  The weekly averages are given below.  The drop rates varied.
Sometimes they were as high as 4%, sometimes they were low.  We, at BBN,
are looking into this problem.

          DATAGRAMS RECEIVED PER SECOND, averaged over one week
                 
		6/2      6/8      6/16     6/23
    MILARP     7.21     6.59      5.95     6.17   
    MILBBN     9.09     9.82      8.71     9.02   
    MILDCE     7.62     8.24      7.12     8.79   
    MILISI     9.31    12.08     10.13     9.68   
    MILLBL     6.06     6.25      5.41     5.26   
    MILSAC     4.92     6.24      5.76     4.48   
    MILSRI     3.88     4.31      3.67     3.26   

Perhaps, the people at ISI, who were so good about admitting to their
penchant for cross-network ftping, will have an explanation for the extra
traffic last month.  Actually, the answer is more likely to come from
across the network, since the increase in MILISI's traffic was accompanied
by a drop in everyone else's traffic.

Marianne

-----------[000219][next][prev][last][first]----------------------------------------------------
Date:      Mon 24 Jun 85 18:27:35-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Apparent problem with ICMP Redirects on 4.2 systems
	We have recently noticed a problem with some 4.2 UNIX systems
when a new gateway (which was a better route to large sections of the
Internet) was installed on a network here. For most destinations, the
old gateways sent ICMP Redirects to the new one. This was fine, except
that apparently the 4.2 systems had their routing tables fill up with
the Redirect information as they tried to contact new sites.
Apparently, when the tables filled up, they were unable to accept new
entries, because the machines became unreachable from certain (random)
destinations. I'm not sure why this happened, since the traffic should
still have flowed (albeit generating floods of Redirects by taking a
non-optimal path).
	Does this scenario make sense to any 4.2 network wizards?

	Certainly, it was something to do with routing, because when
we went into the rc.local file and changed the 'default' route (i.e.
in '/etc/route add') to be through the new gateway, and rebooted the
machines, things started working. I'm pretty annoyed that all the 4.2
systems had to be hand tweaked when a new gateway started up.

	I guess this points up a general problem with IP layers,
which is unfortunately not mentioned in Clark's 'IP Implementation
guide'. You should time out old entries in the routing cache, and
if you have a fixed size table and it fills up, you should be prepared
to evict someone.

		Noel
-------
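
For what it's worth, here is a rough sketch of the sort of bounded, aging
route cache Noel is describing.  The table size, timeout value and names
are invented for illustration; this is not taken from any particular 4.2 fix.

#include <time.h>

#define RT_CACHE_SIZE 32		/* hypothetical fixed table size */
#define RT_TTL        (10 * 60)		/* expire entries after 10 minutes */

struct rt_entry {
	unsigned long dest;		/* destination network/host address */
	unsigned long gateway;		/* first-hop gateway address */
	time_t        last_used;	/* refreshed on every hit */
	int           in_use;
};

static struct rt_entry rt_cache[RT_CACHE_SIZE];

/* Install (or refresh) a route learned, e.g., from an ICMP Redirect.
 * When the table is full, evict the entry that has gone unused longest
 * rather than refusing the new route. */
void rt_install(unsigned long dest, unsigned long gateway)
{
	time_t now = time(NULL);
	struct rt_entry *victim = NULL;
	int i;

	/* Refresh an existing entry for this destination, if any. */
	for (i = 0; i < RT_CACHE_SIZE; i++)
		if (rt_cache[i].in_use && rt_cache[i].dest == dest) {
			rt_cache[i].gateway = gateway;
			rt_cache[i].last_used = now;
			return;
		}

	/* Otherwise pick a slot: free, expired, or least recently used. */
	for (i = 0; i < RT_CACHE_SIZE; i++) {
		struct rt_entry *e = &rt_cache[i];
		if (!e->in_use || now - e->last_used > RT_TTL) {
			victim = e;
			break;
		}
		if (victim == NULL || e->last_used < victim->last_used)
			victim = e;
	}

	victim->dest = dest;
	victim->gateway = gateway;
	victim->last_used = now;
	victim->in_use = 1;
}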
-----------[000220][next][prev][last][first]----------------------------------------------------
Date:      Mon 24 Jun 85 18:29:00-EDT
From:      "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Common ARP implementation bug
	I would like to draw people's attention to a little-noticed
part of the ARP spec (RFC826) that, when ignored, leads to buggy
implementations.
	The problem is that if you have a mapping in your cache for
a given (protocol address, hardware address) pair then if the
destination changes its hardware address (quite common if, for
example, your Ethernet board breaks and you install a new one),
you may not notice that his hardware address has changed and
then (apparently inexplicably) lose contact with the destination.
	This bug exists in several implementations of ARP, including
at least one commercial one (which I won't name unless they don't
fix it). What is annoying is that if people *followed the spec* this
wouldn't happen.

	This problem is explicitly addressed in the spec. In the
section 'Packet Reception', the correct algorithm to prevent this
is given, along with a note (at the end of the paragraph after the
algorithm). In the section 'Related Issues', paragraph three
elaborates.
	Note that this approach does not help if the data in the
ARP packets is corrupted. However, we have not noticed this
kind of fault here at MIT. We have noticed the 'moving host' one
with some regularity. Timeouts would be necessary to fix the
'corrupted data' problem, since the protocol provides no checksum.

	Noel
-------
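
For reference, here is a bare-bones sketch of the 'merge' step from the
RFC 826 packet-reception algorithm Noel points to.  The table layout and
names are invented; a real implementation folds this into its ARP input
routine.

#include <string.h>

#define ARP_TABLE_SIZE 64		/* hypothetical cache size */
#define HW_ADDR_LEN    6		/* Ethernet address length */

struct arp_entry {
	unsigned long proto_addr;		/* protocol (IP) address */
	unsigned char hw_addr[HW_ADDR_LEN];	/* hardware (Ethernet) address */
	int           in_use;
};

static struct arp_entry arp_table[ARP_TABLE_SIZE];

/* On every ARP packet received: if the sender's protocol address is
 * already in the table, overwrite the stored hardware address with the
 * one in the packet.  This is what keeps the cache current when a host's
 * interface board (and hence its Ethernet address) is replaced. */
int arp_merge(unsigned long sender_proto, const unsigned char *sender_hw)
{
	int i;

	for (i = 0; i < ARP_TABLE_SIZE; i++)
		if (arp_table[i].in_use && arp_table[i].proto_addr == sender_proto) {
			memcpy(arp_table[i].hw_addr, sender_hw, HW_ADDR_LEN);
			return 1;	/* merge_flag := true */
		}
	return 0;			/* not found; the caller adds a new entry
					 * only if the packet was addressed to us */
}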
-----------[000221][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24-Jun-85 19:15:40 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Apparent problem with ICMP Redirects on 4.2 systems

From: "J. Noel Chiappa" <JNC@MIT-XX.ARPA>

	We have recently noticed a problem with some 4.2 UNIX systems
when a new gateway (which was a better route to large sections of the
Internet) was installed on a network here. For most destinations, the
old gateways sent ICMP Redirects to the new one. This was fine, except
that apparently the 4.2 systems had their routing tables fill up with
the Redirect information as they tried to contact new sites.
Apparently, when the tables filled up, they were unable to accept new
entries, because the machines became unreachable from certain (random)
destinations. I'm not sure why this happened, since the traffic should
still have flowed (albeit generating floods of Redirects by taking a
non-optimal path).
	Does this scenario make sense to any 4.2 network wizards?

	Certainly, it was something to do with routing, because when
we went into the rc.local file and changed the 'default' route (i.e.
in '/etc/route add') to be through the new gateway, and rebooted the
machines, things started working. I'm pretty annoyed that all the 4.2
systems had to be hand tweaked when a new gateway started up.

	I guess this points up a general problem with IP layers,
which is unfortunately not mentioned in Clark's 'IP Implementation
guide'. You should time out old entries in the routing cache, and
if you have a fixed size table and it fills up, you should be prepared
to evict someone.

		Noel
-------

-----------[000222][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24-Jun-85 20:22:59 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Common ARP implementation bug

From: "J. Noel Chiappa" <JNC@MIT-XX.ARPA>

	I would like to draw people's attention to a little-noticed
part of the ARP spec (RFC826) that, when ignored, leads to buggy
implementations.
	The problem is that if you have a mapping in your cache for
a given (protocol address, hardware address) pair then if the
destination changes its hardware address (quite common if, for
example, your Ethernet board breaks and you install a new one),
you may not notice that his hardware address has changed and
then (apparently inexplicably) lose contact with the destination.
	This bug exists in several implementations of ARP, including
at least one commercial one (which I won't name unless they don't
fix it). What is annoying is that if people *followed the spec* this
wouldn't happen.

	This problem is explicitly addressed in the spec. In the
section 'Packet Reception', the correct algorithm to prevent this
is given, along with a note (at the end of the paragraph after the
algorithm). In the section 'Related Issues', paragraph three
elaborates.
	Note that this approach does not help if the data in the
ARP packets is corrupted. However, we have not noticed this
kind of fault here at MIT. We have noticed the 'moving host' one
with some regularity. Timeouts would be necessary to fix the
'corrupted data' problem, since the protocol provides no checksum.

	Noel
-------

-----------[000223][next][prev][last][first]----------------------------------------------------
Date:      Mon, 24 Jun 85 22:26:44 edt
From:      ulysses!smb@Berkeley (Steven Bellovin)
To:        tcp-ip@Berkeley
Subject:   time servers
I'm implementing some network-based time stuff, and I find I need more
precision (in two senses of the word) than RFC868 provides.

First, what do folks think of allowing an (optional) second "word", giving
the time in microseconds?  (Yes, that's what Berkeley UNIX gives; no, that's
not why I'm using those units.)  As long as clients check the received
length on a message, the current behavior would still work.

Second, given the current standard, how should a system with a more precise
idea of the time round its response?  Truncate?  Round?  The current RFC
is silent.


		--Steve Bellovin
		AT&T Bell Laboratories
		ulysses!smb@berkeley.arpa
		smb.ulysses.btl@csnet-relay
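
For concreteness, a sketch of what a UDP client of such an extended service
might look like, under the assumption (Steve's proposal, nothing standardized)
that a server may append a second 32-bit word of microseconds; a plain 4-byte
RFC 868 reply is handled exactly as today.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#define TIME_PORT 37	/* RFC 868 Time Protocol */

/* Query an RFC 868 UDP time server; returns 0 on success, -1 on error. */
int query_time(const char *server_ip)
{
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in sin;
	unsigned char buf[8];
	int n;

	if (s < 0)
		return -1;
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(TIME_PORT);
	sin.sin_addr.s_addr = inet_addr(server_ip);

	/* An empty datagram solicits the time. */
	if (sendto(s, "", 0, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
	    (n = recv(s, (char *)buf, sizeof(buf), 0)) < 4) {
		close(s);
		return -1;
	}

	{
		unsigned long secs = ((unsigned long)buf[0] << 24) |
		                     ((unsigned long)buf[1] << 16) |
		                     ((unsigned long)buf[2] << 8)  |
		                      (unsigned long)buf[3];
		unsigned long usecs = 0;

		if (n >= 8)		/* the proposed optional second word */
			usecs = ((unsigned long)buf[4] << 24) |
			        ((unsigned long)buf[5] << 16) |
			        ((unsigned long)buf[6] << 8)  |
			         (unsigned long)buf[7];
		printf("seconds since 1900: %lu  microseconds: %lu\n",
		       secs, usecs);
	}
	close(s);
	return 0;
}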
-----------[000224][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25-Jun-85 01:07:55 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   time servers

From: ulysses!smb@BERKELEY (Steven Bellovin)

I'm implementing some network-based time stuff, and I find I need more
precision (in two senses of the word) than RFC868 provides.

First, what do folks think of allowing an (optional) second "word", giving
the time in microseconds?  (Yes, that's what Berkeley UNIX gives; no, that's
not why I'm using those units.)  As long as clients check the received
length on a message, the current behavior would still work.

Second, given the current standard, how should a system with a more precise
idea of the time round its response?  Truncate?  Round?  The current RFC
is silent.


		--Steve Bellovin
		AT&T Bell Laboratories
		ulysses!smb@berkeley.arpa
		smb.ulysses.btl@csnet-relay

-----------[000225][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25-Jun-85 01:57:36 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   The night the clocks stopped (again)

From: mills@dcn6.arpa

Folks,

A violent electrical storm wandered by our offices late this afternoon about
21Z and killed several traffic lights, an Ethernet board, two radio clocks and
an unknown number of erroneous timestamps hiding all over the Internet. Our
WWV secondary clock began ticking again after the static crashes died down
several hours later about 02Z, but our old faithful WWVB primary clock didn't
tick until late evening after 03Z. Even now the GOES tertiary clock in our
next-door neighbor net remains unreachable (due to a dead Ethernet board). It was a
bad day for clockwatching.

Once again, my apologies to all our ICMP, UDP and TCP clockwatchers. Turns out
the only UPS in our building runs the cypherlocks and security monitoring
system. We are considering pilfering a few watts from it to run at least the
primary WWVB radio reference. The irony of such a heist from such a source is
too yummy to resist. Pun intentional.

From the DCN-GATEWAY log it is apparent that a good clock service is
moderately important to this community. However, our recent experience with
primary-power disruptions suggests other sites may wish to share their clocks
with the rest of us. All it takes is a WWV receiver (Heath GC-1000 - about
$300), a dipole slung over the nearest tree (out of the near-field radio hash
from the computers), a spare serial port and a cup of software. Season with
UDP and serve on your nearest gateway.

Dave
-------

-----------[000226][next][prev][last][first]----------------------------------------------------
Date:      25 Jun 1985 10:37:23 PDT
From:      POSTEL@USC-ISIF.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   re: time servers

Steve:

I agree with Dave Mills: the RFC-868 Time Protocol is not expected
to have accuracy better than one second.  I'd be very surprised if
it was ever that good.  To find out more about the procedures Dave
uses, see RFCs 891 and 778.

--jon.
-------
-----------[000227][next][prev][last][first]----------------------------------------------------
Date:      25-Jun-85 03:42:23-UT
From:      mills@dcn6.arpa
To:        tcp-ip@sri-nic.arpa
Subject:   The night the clocks stopped (again)
Folks,

A violent electrical storm wandered by our offices late this afternoon about
21Z and killed several traffic lights, an Ethernet board, two radio clocks and
an unknown number of erroneous timestamps hiding all over the Internet. Our
WWV secondary clock began ticking again after the static crashes died down
several hours later about 02Z, but our old faithful WWVB primary clock didn't
tick until late evening after 03Z. Even now the GOES tertiary clock in our
next-door neighbor net remains unreachable (due to a dead Ethernet board). It was a
bad day for clockwatching.

Once again, my apologies to all our ICMP, UDP and TCP clockwatchers. Turns out
the only UPS in our building runs the cypherlocks and security monitoring
system. We are considering pilfering a few watts from it to run at least the
primary WWVB radio reference. The irony of such a heist from such a source is
too yummy to resist. Pun intentional.

From the DCN-GATEWAY log it is apparent that a good clock service is
moderately important to this community. However, our recent experience with
primary-power disruptions suggests other sites may wish to share their clocks
with the rest of us. All it takes is a WWV receiver (Heath GC-1000 - about
$300), a dipole slung over the nearest tree (out of the near-field radio hash
from the computers), a spare serial port and a cup of software. Season with
UDP and serve on your nearest gateway.

Dave
-------
-----------[000228][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25 Jun 85 9:03:27 EDT
From:      Bob Walsh <walsh@BBN-LABS-B.ARPA>
To:        "J. Noel Chiappa" <JNC@MIT-XX.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, JNC@MIT-XX.ARPA
Subject:   Re:  Apparent problem with ICMP Redirects on 4.2 systems

The Berkeley 4.2 TCP does not reroute current connections when it receives
an ICMP redirect.  As a result, such a machine can receive a lot of
redirects for the duration of those current connections.  The UNIX host
does not have a fixed size routing table; the table (at least in the kernel,
I'm not sure about the routing demon) grows as needed and is implemented
by an open chained hash table.  This, and some other problems with the 
distributed 4.2 TCP are described in a paper I gave at the Salt Lake City
USENIX in '84.

bob walsh
-----------[000229][next][prev][last][first]----------------------------------------------------
Date:      25 Jun 1985 10:23:39 EDT
From:      MILLS@USC-ISID.ARPA
To:        ulysses!smb@Berkeley, tcp-ip@Berkeley
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: time servers
In response to the message sent  Mon, 24 Jun 85 22:26:44 edt from ulysses!smb@Berkeley 

Steve,

I think you might be heading down the wrong road. We had quite a long discussion
on these points some years back when the present protocols were being designed.

1. TCP-derived timestamps can never achieve precisions much better than a
   few seconds, due dispersions in transmission and service times on typical
   hosts. I try to discourage anyone from using TCP for this in the first
   place.

2. UDP-derived timestamps can be expected to achieve precisions in the order
   of a second on most hosts and operating systems, but the TOPS-20 is
   not one of them. The problem is queueing delays at several points in the
   service process - time-slicing, interprocess message passing, paging and
   the like. We decided 32 bits in precision was justified for UDP.

3. ICMP-derived timestamps are the best we can do. In most systems the
   IP/ICMP layer is as close to the hardware driver as we can get, so the
   protocol delays at higher levels can be avoided. The residual errors are
   due to frame encapsulation, possible link-level retransmissions and so
   forth. We decided 32 bits of milliseconds was the most appropriate unit.

4. For anything more precise than milliseconds, you need to be very careful
   about your technique. Absolute timetelling to this precision requires
   carefully calibrated radio clocks or atomic standards. Relative delays
   between mutually synchronized clocks are easier, but precisions better than
   a millisecond require carefully controlled link delays and constant-drift
   intrinsic oscillators. This is what the fuzzballs strive to do. They have to
   work so hard at it that the intrinsic drift of the ovenless crystal
   oscillators can be measured individually via the network.

5. The usefulness of any timestamp is relevant only to the extent the
   application program can operate with it. It doesn't make sense to deliver
   a super-accurate timestamp to a user program trying to control a real-time
   process when its control mechanism has inherent random delays in the order
   of disk-seek latencies. This comment does not apply if you are measuring
   differences in timestamps (delays), of course.

6. With respect to Unix timestamps. Our Sun workstation has a hard time
   maintaining clockwatch to within several seconds (sic), much less to within
   an order of milliseconds. It is not clear whether this is due to oscillator
   instability or simply sloppy implementation. The apparent time drifts wildly
   relative to our rather precise network clock as measured by our local-net
   clock-synchronization algorithms.

7. In my experience the most pressing need for additional protocol development
   is a mechanism to determine the order of precision of a delivered timestamp.
   For instance, in the recent episodes when we told our clockwatching friends
   gross lies in timestamps due to local power disruptions, we should have been
   able to indicate relative faith, perhaps as a field in the header (assuming
   ICMP for record). We should also be able to convey whether the stamp
   was derived from primary, secondary or other standards and whether it
   was determined by a third party.

The bottom line is to suggest that you use ICMP as the primary source of precise
milliseconds and resolve high-order ambiguity with UDP and/or TCP only as
necessary.

Dave
-------
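
For readers who have not looked at it recently, the ICMP Timestamp message
Dave recommends (RFC 792, types 13 and 14) carries three 32-bit fields of
milliseconds since midnight UT.  A minimal sketch of the layout and of
generating that unit on a Unix host follows; the names are invented, and the
checksumming and raw-socket plumbing are omitted.

#include <sys/time.h>

#define ICMP_TSTAMP      13	/* Timestamp request */
#define ICMP_TSTAMPREPLY 14	/* Timestamp reply   */

/* Wire layout of the ICMP Timestamp message (fields in network byte order,
 * covered by the usual ICMP checksum). */
struct icmp_timestamp {
	unsigned char  type;		/* ICMP_TSTAMP or ICMP_TSTAMPREPLY */
	unsigned char  code;		/* always 0 */
	unsigned short checksum;
	unsigned short id;
	unsigned short seq;
	unsigned int   originate;	/* filled in by the requester */
	unsigned int   receive;		/* filled in by the responder on arrival */
	unsigned int   transmit;	/* filled in by the responder on departure */
};

/* Milliseconds since midnight UT, the "32 bits of milliseconds" unit
 * referred to above; assumes the host clock is kept in UT. */
unsigned int icmp_time_now(void)
{
	struct timeval tv;

	gettimeofday(&tv, 0);
	return (unsigned int)((tv.tv_sec % 86400) * 1000 + tv.tv_usec / 1000);
}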
-----------[000230][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25-Jun-85 11:35:04 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re: time servers

From: MILLS@USC-ISID.ARPA

In response to the message sent  Mon, 24 Jun 85 22:26:44 edt from ulysses!smb@Berkeley 

Steve,

I think you might be heading down the wrong road. We had quite a long discussion
on these points some years back when the present protocols were being designed.

1. TCP-derived timestamps can never achieve precisions much better than a
   few seconds, due to dispersions in transmission and service times on typical
   hosts. I try to discourage anyone from using TCP for this in the first
   place.

2. UDP-derived timestamps can be expected to achieve precisions in the order
   of a second on most hosts and operating systems, but the TOPS-20 is
   not one of them. The problem is queueing delays at several points in the
   service process - time-slicing, interprocess message passing, paging and
   the like. We decided 32 bits in precision was justified for UDP.

3. ICMP-derived timestamps are the best we can do. In most systems the
   IP/ICMP layer is as close to the hardware driver as we can get, so the
   protocol delays at higher levels can be avoided. The residual errors are
   due to frame encapsulation, possible link-level retransmissions and so
   forth. We decided 32 bits of milliseconds was the most appropriate unit.

4. For anything more precise than milliseconds, you need to be very careful
   about your technique. Absolute timetelling to this precision requires
   carefully calibrated radio clocks or atomic standards. Relative delays
   between mutually synchronized clocks are easier, but precisions better than
   a millisecond require carefully controlled link delays and constant-drift
   intrinsic oscillators. This is what the fuzzballs strive to do. They have to
   work so hard at it that the intrinsic drift of the ovenless crystal
   oscillators can be measured individually via the network.

5. The usefulness of any timestamp is relevant only to the extent the
   application program can operate with it. It doesn't make sense to deliver
   a super-accurate timestamp to a user program trying to control a real-time
   process when its control mechanism has inherent random delays in the order
   of disk-seek latencies. This comment does not apply if you are measuring
   differences in timestamps (delays), of course.

6. With respect to Unix timestamps. Our Sun workstation has a hard time
   maintaining clockwatch to within several seconds (sic), much less to within
   an order of milliseconds. It is not clear whether this is due to oscillator
   instability or simply sloppy implementation. The apparent time drifts wildly
   relative to our rather precise network clock as measured by our local-net
   clock-synchronization algorithms.

7. In my experience the most pressing need for additional protocol development
   is a mechanism to determine the order of precision of a delivered timestamp.
   For instance, in the recent episodes when we told our clockwatching friends
   gross lies in timestamps due to local power disruptions, we should have been
   able to indicate relative faith, perhaps as a field in the header (assuming
   ICMP for record). We should also be able to convey whether the stamp
   was derived from primary, secondary or other standards and whether it
   was determined by a third party.

The bottom line is to suggest that you use ICMP as the primary source of precise
milliseconds and resolve high-order ambiguity with UDP and/or TCP only as
necessary.

Dave
-------

-----------[000231][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25-Jun-85 12:49:33 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  Apparent problem with ICMP Redirects on 4.2 systems

From: Bob Walsh <walsh@BBN-LABS-B.ARPA>


The Berkeley 4.2 TCP does not reroute current connections when it receives
an ICMP redirect.  As a result, such a machine can receive a lot of
redirects for the duration of those current connections.  The UNIX host
does not have a fixed size routing table; the table (at least in the kernel,
I'm not sure about the routing demon) grows as needed and is implemented
by an open chained hash table.  This, and some other problems with the 
distributed 4.2 TCP are described in a paper I gave at the Salt Lake City
USENIX in '84.

bob walsh

-----------[000232][next][prev][last][first]----------------------------------------------------
Date:      Tue, 25-Jun-85 14:33:12 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   re: time servers

From: POSTEL@USC-ISIF.ARPA


Steve:

I agree with Dave Mills: the RFC-868 Time Protocol is not expected
to have accuracy better than one second.  I'd be very surprised if
it was ever that good.  To find out more about the procedures Dave
uses, see RFCs 891 and 778.

--jon.
-------

-----------[000233][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26 Jun 85 02:34:37 PDT
From:      decwrl!sun!guy@Berkeley (Guy Harris)
To:        tcp-ip@Berkeley
Subject:   FTP and "SITE UNIX"
I don't know if anybody's answered this, but...

The 4.2BSD "ftp" doesn't send a "SITE UNIX" command, but the 3Com UNET
version (and CCI's 4.2BSD version, for historical reasons you don't really
want to know about) does.  If I remember correctly, it sends out a "SITE
UNIX" command and, if the response is a positive completion (first digit 2),
it sends out a "TYPE I" command to put the other side into image mode.  The
intent behind this, I assume, was to permit naive users to use FTP to move
files between UNIX machines without having to remember to go into image mode
if they were transferring binary files.  Unfortunately, this wreaks havoc if
you're transferring ASCII files from machines other than 8-bit-byte ASCII
machines.

The best thing to do is reject the command; then those FTPs won't turn on
image mode.

	Guy Harris
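
A sketch of the server-side choice Guy recommends (a hypothetical routine,
not the 4.2BSD ftpd source): answer SITE UNIX with a negative completion so
that a UNET-derived client never sees the 2xx reply that triggers its
automatic TYPE I.

#include <stdio.h>
#include <string.h>

/* Send one reply line on the FTP control connection.  Any reply whose
 * first digit is not '2' keeps such a client from switching itself into
 * image mode. */
static void reply(FILE *ctrl, const char *text)
{
	fprintf(ctrl, "%s\r\n", text);
	fflush(ctrl);
}

/* Handler for "SITE <arg>"; assumes the command parser has already
 * upper-cased the argument. */
void do_site(const char *arg, FILE *ctrl)
{
	if (strcmp(arg, "UNIX") == 0)
		reply(ctrl, "504 SITE UNIX not supported; set TYPE explicitly.");
	else
		reply(ctrl, "504 Unknown SITE parameter.");
}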
-----------[000234][next][prev][last][first]----------------------------------------------------
From:      schoff%rpi.csnet@csnet-relay.arpa
To:        tcp-ip@sri-nic.ARPA
Cc:        schoff%rpi.csnet@csnet-relay.arpa
Subject:   4.2UNIX on VAX as a gateway
Has anyone done any measurements of throughput with a VAX750 or VAX780 running
4.2bsd UNIX acting as a gateway between two local nets?  I am interested
in knowing what kind of performance can be had in either packets/sec or
(bits,bytes)/second.  Operationally I would be using a dedicated MicroVAX II
for this service (the rationale is that they are cheap and can get reasonable
service in the field).  I would like to know what the duration of the test
was, what the media and interfaces of the local nets were, etc.


marty schoffstall
schoff%rpi@csnet-relay
-----------[000235][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26-Jun-85 08:11:55 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   FTP and "SITE UNIX"

From: decwrl!sun!guy@BERKELEY (Guy Harris)

I don't know if anybody's answered this, but...

The 4.2BSD "ftp" doesn't send a "SITE UNIX" command, but the 3Com UNET
version (and CCI's 4.2BSD version, for historical reasons you don't really
want to know about) does.  If I remember correctly, it sends out a "SITE
UNIX" command and, if the response is a positive completion (first digit 2),
it sends out a "TYPE I" command to put the other side into image mode.  The
intent behind this, I assume, was to permit naive users to use FTP to move
files between UNIX machines without having to remember to go into image mode
if they were transferring binary files.  Unfortunately, this wreaks havoc if
you're transferring ASCII files from machines other than 8-bit-byte ASCII
machines.

The best thing to do is reject the command; then those FTPs won't turn on
image mode.

	Guy Harris

-----------[000236][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26-Jun-85 11:58:37 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   4.2UNIX on VAX as a gateway

From: schoff%rpi.csnet@csnet-relay.arpa

Has anyone done any measurements of throughput with a VAX750 or VAX780 running
4.2bsd UNIX acting as a gateway between two local nets?  I am interested
in knowing what kind of performance can be had in either packets/sec or
(bits,bytes)/second.  Operationally I would be using a dedicated MicroVAX II
for this service (the rationale is that they are cheap and can get reasonable
service in the field).  I would like to know what the duration of the test
was, what the media and interfaces of the local nets were, etc.


marty schoffstall
schoff%rpi@csnet-relay

-----------[000237][next][prev][last][first]----------------------------------------------------
Date:      26 Jun 85 16:57:56 EST (Wed)
From:      Christopher A Kent <cak@Purdue.EDU>
To:        tcp-ip@sri-nic.arpa
Subject:   Yet another date hack
I've changed my 4.2 date getter once again. Now it tries to estimate
the round-trip time and adds half to the returned value before setting
the date.

It also fixes a long-standing bug that caused it not to properly record
the old time in /usr/adm/wtmp before setting the time; this is now
properly done.

The new file is where the old one was: in the file pub/dated.flar on
merlin.purdue.edu, available via anonymous FTP.

chris
----------
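
The adjustment Chris describes amounts to adding half the measured round
trip to the value the server returned before using it.  A tiny sketch
(hypothetical helper, assuming the server's value has already been converted
to seconds):

#include <sys/time.h>

/* Estimate the server's clock at the moment the reply arrives by assuming
 * it was sampled halfway through the exchange, i.e. add RTT/2, rounded to
 * the nearest whole second. */
unsigned long adjust_for_rtt(unsigned long server_secs,
                             struct timeval *sent, struct timeval *received)
{
	long rtt_usec = (received->tv_sec - sent->tv_sec) * 1000000L +
	                (received->tv_usec - sent->tv_usec);

	return server_secs + (unsigned long)((rtt_usec / 2 + 500000L) / 1000000L);
}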
-----------[000238][next][prev][last][first]----------------------------------------------------
Date:      Wed, 26-Jun-85 18:42:16 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Yet another date hack

From: Christopher A Kent <cak@Purdue.EDU>

I've changed my 4.2 date getter once again. Now it tries to estimate
the round-trip time and adds half to the returned value before setting
the date.

It also fixes a long-standing bug that caused it not to properly record
the old time in /usr/adm/wtmp before setting the time; this is now
properly done.

The new file is where the old one was: in the file pub/dated.flar on
merlin.purdue.edu, available via anonymous FTP.

chris
----------

-----------[000239][next][prev][last][first]----------------------------------------------------
Date:      27 Jun 85 07:57:25 PDT
From:      Murray.pa@Xerox.ARPA
To:        TCP-IP@SRI-NIC.ARPA, Namedroppers@SRI-NIC.ARPA
Cc:        Murray.pa@Xerox.ARPA
Subject:   Retransmission timeouts
A few days ago, Andrew McDowell @ucl-cs sent a msg to namedroppers
reporting round trip times to the states of "about 2.5-3 seconds". That
seemed too good to be true, but when I tried poking his server, that's
what I observed.

Well, a while ago, I was testing a new ICMP Echo tool, and looking for
strange cases, I fished around in my old mail file for Andrew's message
and tried one of his hosts: 128.16.5.2.

I've seen round trip times as long as 34 seconds! I'm using 20-byte
packets. In the last hour or so, I've rarely seen anything as low as 3
seconds.

From a recent run:
	5 out of 35 packets were lost. (That's 14%.)
	23% of the packets took between 7 and 10 seconds.
	20% of the packets took between 10 and 14 seconds.
	40% of the packets took between 20 and 28 seconds.
	The min time was 3.2 seconds.
	The max time was 28 seconds.

Things just started working again. A test that just finished had a
strong peak between 2 and 2.8 seconds.

Oops. I guess I spoke too soon. It's back to the slow mode now. 4 out of
14 took over 28 seconds.

-----------[000240][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27 Jun 85 12:07:57 EDT
From:      Ron Natalie <ron@BRL.ARPA>
To:        Guy Harris <decwrl!sun!guy@Berkeley>
Cc:        tcp-ip@Berkeley
Subject:   Re:  FTP and "SITE UNIX"
I'll settle for a single BSD FTP that works at all.  3COM's FTP server
violates the spec by sending spurious reply messages and the BSD version
gets hung up when you're on a fast net (I guess they used RCP to talk on
the ethernet) because sometimes you can get two replies on a single read.

-Ron
-----------[000241][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27-Jun-85 12:08:13 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Retransmission timeouts

From: Murray.pa@Xerox.ARPA

A few days ago, Andrew McDowell @ucl-cs sent a msg to namedroppers
reporting round trip times to the states of "about 2.5-3 seconds". That
seemed too good to be true, but when I tried poking his server, that's
what I observed.

Well, a while ago, I was testing a new ICMP Echo tool, and looking for
strange cases, I fished around in my old mail file for Andrew's message
and tried one of his hosts: 128.16.5.2.

I've seen round trip times as long as 34 seconds! I'm using 20-byte
packets. In the last hour or so, I've rarely seen anything as low as 3
seconds.

From a recent run:
	5 out of 35 packets were lost. (That's 14%.)
	23% of the packets took between 7 and 10 seconds.
	20% of the packets took between 10 and 14 seconds.
	40% of the packets took between 20 and 28 seconds.
	The min time was 3.2 seconds.
	The max time was 28 seconds.

Things just started working again. A test that just finished had a
strong peak between 2 and 2.8 seconds.

Oops. I guess I spoke too soon. It's back to the slow mode now. 4 out of
14 took over 28 seconds.

-----------[000242][next][prev][last][first]----------------------------------------------------
Date:      27 Jun 1985 13:11:03 EDT
From:      MILLS@USC-ISID.ARPA
To:        Murray.pa@XEROX.ARPA, TCP-IP@SRI-NIC.ARPA, Namedroppers@SRI-NIC.ARPA
Cc:        MILLS@USC-ISID.ARPA
Subject:   Re: Retransmission timeouts
In response to the message sent  27 Jun 85 07:57:25 PDT from Murray.pa@Xerox.ARPA

Murray,

See RFC-889 for additional statistics. The SATNET path uses a reservation-type
algorithm which can be operated in several different modes. The one most
often used results in a low-traffic, one-way mean delay of about 1.1 seconds
at the earth stations. Delays up to 15 seconds have been observed here on
the ARPANET, while delays of 20 seconds or more are not uncommon on SATNET.
There are lots of conjectures as to why such long tails in the distribution are
observed, but no single dominant contributor has been identified.

Dave
-------
-----------[000243][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27-Jun-85 13:40:34 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Re:  FTP and "SITE UNIX"

From: Ron Natalie <ron@BRL.ARPA>

I'll settle for a single BSD FTP that works at all.  3COM's FTP server
violates the spec by sending spurious reply messages and the BSD version
gets hung up when you're on a fast net (I guess they used RCP to talk on
the ethernet) because sometimes you can get two replies on a single read.

-Ron

-----------[000244][next][prev][last][first]----------------------------------------------------
Date:      Thu, 27-Jun-85 17:24:02 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Namesolving via SATNET

From: mills@dcn6.arpa

Folks,

A few minutes ago (about 18Z on Thursday afternoon), I collected the following
delay data on the path between DCN-GATEWAY (10.0.0.111) and UCL-GATEWAY
(4.0.0.60), each on opposite sides of SATNET, using ICMP Echo/Echo Reply
messages. Of the 808 ICMP Echo messages sent, all but 66 had come back as ICMP
Echo Reply messages within ten seconds. In my simple experiment it was not
possible to determine if any of these 66 messages did in fact wander back
after more than ten seconds. Of those that came back, the minimum delay was
1539 milliseconds and the mean 2175 milliseconds.

For calibration purposes, the delay on the path between DCN-GATEWAY and
the nearest SATNET gateway ranges between about 78 and 379 milliseconds with a
mean of 91 milliseconds. The delays between the namesolver host, name-server
host and their respective gateways are not included, nor are the delays
intrinsic to the host operating systems. The total of these delays could
easily contribute another few hundred milliseconds to the total.

Following is a crude histogram showing the (normalized) delay distribution.

Value	Count
----------------+
1200	0	|
1400	4	|****
1600	39	|***************************************
1800	95	|************************************************************
2000	67	|************************************************************
2200	18	|******************
2400	5	|*****
2600	1	|*
2800	0	|
3000	1	|*
3200	0	|
3400	0	|
3600	2	|**
3800	0	|
4000	0	|
4200	0	|
4400	0	|
4600	0	|
4800	0	|
5000	0	|
5200	1	|*
5400	0	|

The performance shown above is typical of SATNET under light loading
conditions. The mean roundtrip delay, somewhat less than I am used to from
other measurements, suggests that, at the moment, one of the non-reservation
modes is in effect, possibly FTDMA.

These data should be instructive to those designing namesolver retransmission
strategies, not to mention those configuring TCPs to work over such paths.
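
By way of illustration only, a namesolver retransmission schedule that
respects a path like this one might look as follows; the mean round trip is
about 2.2 seconds and the tail runs well past 5, so the first timeout should
sit comfortably above the mean and back off from there (the numbers are
illustrative):

    import socket

    # Send one UDP query, retransmitting on an escalating timeout schedule.
    # Returns the raw response, or None if every attempt timed out.
    def query_with_backoff(sock, server, packet, timeouts=(5.0, 10.0, 20.0)):
        for t in timeouts:
            sock.settimeout(t)
            sock.sendto(packet, server)
            try:
                return sock.recvfrom(512)[0]
            except socket.timeout:
                continue
        return None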

Dave
-------

-----------[000246][next][prev][last][first]----------------------------------------------------
Date:      28 Jun 1985 at 0813-EDT
From:      hsw at TYCHO.ARPA  (Howard Weiss)
To:        tcp-ip at sri-nic
Subject:   Gateway re-directs.
I have recently seen some strange behavior from the core gateways that
I cannot understand or resolve.

While receiving mail from BRL-TGR (and acking that mail), my host (TYCHO -
26.0.0.57) did not have the BRL gateway defined in its table.  This meant
that all packets for BRL-TGR were sent, by default, to the "smart" gateway
otherwise known as the MIL-ARPA mail bridge gateway (26.0.0.106).
Immediately, MIL-ARPA sent redirects back.  One would normally think that
the gateway would redirect to the BRL gateway (26.3.0.29), since that is the
shortest path, but that is NOT what happened.

Instead, the MIL-ARPA gateway sent redirects to the MINET gateway
(26.1.0.40).  The MINET gateway, in turn, then sent back redirects to the
BRL gateway.

Does anyone have any good words or reasoning for this behavior?  It would
seem to me that MIL-ARPA should have redirected to the BRL gateway
immediately.

Howard Weiss
-------

-----------[000248][next][prev][last][first]----------------------------------------------------
Date:      28 Jun 85 11:27:44 CDT (Fri)
From:      ihnp4!ihu1e!jee@Berkeley
To:        ihnp4!ucbvax!tcp-ip@Berkeley
Subject:   Re:  tcp/ip on hyperchannel
The protocol on the Hyperchannel is best described as CSMA/CP, where CP is
collision prevention. It roughly equates to a p-persistent CSMA, except that
it is prioritized.

What all this means is that an adapter behaves as CSMA when it is planning
to transmit.  All adapters recognize when a transmission is happening.  If
they have to transmit, each waits a different, preselected time (the backoff
algorithm, which encodes the priority).  It is true that prior to
transmission of the actual data there is a control-information exchange with
the destination adapter.  It is a simple way of making sure the channel is
clear before transmission begins (i.e., of preventing a collision once
transmission has started), and the cost is only one round-trip time.  This
control exchange also allows them to transmit very large packets (much more
than 4 kbytes).
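
A minimal sketch of the deferral rule just described (names and numbers are
illustrative): every adapter that wants the channel waits its own preselected
delay once the channel goes idle, so the shortest delay wins and the rest
simply keep deferring:

    # Adapters wanting to transmit, keyed to their preselected delays
    # (smaller delay = higher priority).  Collisions are prevented, not
    # detected: only the shortest waiter ever starts transmitting.
    waiting = {"adapter-A": 3, "adapter-B": 1, "adapter-C": 2}

    order = []
    while waiting:
        winner = min(waiting, key=waiting.get)   # first to time out wins
        order.append(winner)
        del waiting[winner]                      # the others defer again
    print(order)                 # ['adapter-B', 'adapter-C', 'adapter-A']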

In fact, their protocol is similar to the proposed ANS X3T9.5 standard for
high-speed local networks.

I would suggest you contact Network
Systems Corporation in Minneapolis, Minnesota directly for some
introductory information which goes into much more detail.

-----------[000249][next][prev][last][first]----------------------------------------------------
Date:      28 Jun 85 10:31:55 EDT (Fri)
From:      Mike Brescia <brescia@bbnccv>
To:        hsw@tycho.ARPA (Howard Weiss)
Cc:        tcp-ip@sri-nic, brescia@bbnccv
Subject:   Re: Gateway re-directs.

	(... 26.0.0.57 sending packets to BRL-TGR starting with 26.0.0.106)
	the MIL-ARPA gateway sent redirects to the MINET gateway (26.1.0.40).
	The MINET gateway, in turn, then sent back redirects to the BRL gateway.

The reason for this is, while MILARPA and BRL are both gateways on the MILNET,
MILARPA does not know about BRL, because BRL uses exterior gateway protocol
(EGP) to pass around routing information, but the only gateways on MILNET
which have EGP are MINET and AERO (26.8.0.65).  The protocol used between
'core' gateways (e.g. MILARPA, MINET, AERO) does not carry the neighbor
information about BRL from MINET to MILARPA, only the fact that the net is
reachable.

Current plans include putting EGP in some of the MILNET-ARPANET gateways so
that routes from the ARPANET can reach local area nets off the MILNET without
taking an extra hop through MINET or AERO.

You should not have too much to worry about in your case, because when the
redirects settle down, you do have the shortest route to BRL-TGR.  It just
takes 1 or 2 extra packets to get the (TCP) connection established.  If your
host were to save this information between connections, you could eliminate
even the first redirect in most exchanges.
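
A minimal sketch of what saving that information amounts to (the addresses
are the ones from Howard's example; the cache itself is illustrative):

    # Per-destination first-hop cache: start everything at the default
    # ("smart") gateway and overwrite the entry whenever an ICMP redirect
    # names a better first hop.
    DEFAULT_GATEWAY = "26.0.0.106"              # MILARPA mail bridge

    route_cache = {}                            # destination -> first hop

    def first_hop(dest):
        return route_cache.get(dest, DEFAULT_GATEWAY)

    def handle_redirect(dest, new_gateway):
        route_cache[dest] = new_gateway

    print(first_hop("BRL-TGR"))                 # 26.0.0.106, the default
    handle_redirect("BRL-TGR", "26.1.0.40")     # MILARPA redirects to MINET
    handle_redirect("BRL-TGR", "26.3.0.29")     # MINET redirects to BRL
    print(first_hop("BRL-TGR"))                 # 26.3.0.29 from here on

Kept across connections, the second connection goes straight to the BRL
gateway and even the first redirect disappears.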

	Mike Brescia
-----------[000250][next][prev][last][first]----------------------------------------------------
Date:      28 Jun 1985 at 1108-EDT
From:      hsw at TYCHO.ARPA  (Howard Weiss)
To:        tcp-ip at sri-nic
Subject:   re: Gateway re-directs

Mike,

Thanks for the explanation of what is going on.  Your answer was the only
one that I could think of, but I thought that the core-gateways did have
EGP.  The problem on my host has now been fixed: the BRL gateway is now
defined, and packets to the BRL net do NOT first get sent to MILARPA.

thanks,

Howard
-------

-----------[000253][next][prev][last][first]----------------------------------------------------
Date:      28 Jun 85 13:16 EDT
From:      nsi @ DDN1.ARPA
To:        tcp-ip @ sri-nic.arpa
Subject:   mailing list
Please remove us from the TCP/IP mailing list.
 Thank you.

-----------[000254][next][prev][last][first]----------------------------------------------------
Date:      28 Jun 85 13:58 EDT
From:      jhodges @ DDN2.ARPA
To:        tcp-ip @ sri-nic.arpa
Cc:        jhodges @ DDN2.ARPA
Subject:   TCP/IP mailing list
Please add my name to the TCP/IP mailing list.

Thank you.
Jim Hodges
jhodges at DDN2

-----------[000255][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28-Jun-85 14:20:42 EDT
From:      tcp-ip@ucbvax.ARPA
To:        fa.tcp-ip
Subject:   Time flies when you're having fun

From: mills@dcn6.arpa

Folks,

The professional clockwatchers have decided to insert a leap-second in the
broadcast time standards at 23:59:59 UT on 30 June. This happens just
about every six months and would naturally be expected to jog our klutzy
collection of radio clocks and synchronization protocols. I have declared
an Internet Clockwatch Weekend, during which I plan to deploy spy-daemons
to watch the radio and power-line clocks closely. First, I would like
to see what our radio clocks make of the transition, second what the
power grid does and third what the synchronization protocols do.

During some part of the ICW our fuzzball clocks will be unstrapped from
their respective synchronization reference and will be allowed to free-run.
Since the accuracy of the el-cheapo crystals used in the interfaces is
only about 10 ppm, this may result in systematic offsets up to a second
in timestamps returned from our swamp rats.
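
For scale, a 10-ppm crystal accumulates error at roughly 0.86 seconds per
day of free-run:

    # Back-of-envelope: offset accumulated by a 10 ppm oscillator error.
    PPM = 10e-6
    for hours in (6, 12, 24, 48):
        print("%2d h free-run -> %.2f s offset" % (hours, PPM * hours * 3600))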

Dave
-------

-----------[000259][next][prev][last][first]----------------------------------------------------
Date:      Fri, 28 Jun 85 23:43:59 EDT
From:      Mike Muuss <mike@BRL.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Backup Root Domain Servers?
In the past few days, it seems that both SRI-NIC and ISIB have had
periods where neither was answering UDP nameserver queries.
This leads me to believe that perhaps there should be more than
two root domain servers, and that perhaps at least one should be
on the East Coast.

If this suggestion seems sound, but an appropriate site is needed,
BRL would be happy to volunteer.  Why?  Because we plan to transition
a dozen VAXen to using domain-server based naming in a few weeks,
and not having the root domain accessible could have some very bad
effects on the network usability of our systems.
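
The client side of the argument, sketched minimally (the addresses here are
illustrative; the point is that the resolver is only as robust as this list
is long):

    import socket

    ROOT_SERVERS = ["10.0.0.51", "10.3.0.52"]   # illustrative; add a third here

    # Walk the list of root servers until one answers the UDP query.
    def ask_roots(packet, timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            for server in ROOT_SERVERS:
                try:
                    s.sendto(packet, (server, 53))
                    return s.recvfrom(512)[0]
                except socket.timeout:
                    continue                    # this root is down; try the next
            return None                         # every root was unreachable
        finally:
            s.close()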

Comments?
  -Mike
-----------[000261][next][prev][last][first]----------------------------------------------------
Date:      30 Jun 1985 00:02-EDT
From:      CERF@USC-ISI.ARPA
To:        walsh@BBN-LABS-B.ARPA
Cc:        JNC@MIT-XX.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Apparent problem with ICMP Redirects on 4.2 systems
Bob W -

Are the problems acknowledged to be problems by our Berkeley friends and
are they being resolved?

I was in England last week at ONLINE 85 and found a surprising number of
booths in the exhibition advertising LANs of all types and TCP/IP; mostly
based on the BSD 4.2 version.  Since it is that version which is getting
the most play and exposure in Europe, it will prove important for us to
clean up any serious bugs remaining.

Vint
-----------[000264][next][prev][last][first]----------------------------------------------------
Date:      Sun, 30 Jun 85 22:18 EDT
From:      "Richard Kovalcik, Jr." <Kovalcik@MIT-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: tcp/ip on hyperchannel
Unfortunately, the hyperchannel collision protection is worthless.  The
adapters are protected against transmitting against each other, but for
all real messages (> about 32 bytes) each adapter only has one receive
buffer.  If you transmit a second packet to another node before it has
read the first out, or worse if two different nodes transmit a packet
each to a third node in a small interval, all hell breaks loose.  All
the adapters forget all the messages (including the one already in the
buffer) and you have to issue reset commands to them all.
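
One defensive discipline (sketched here purely as an illustration, not as
anything the vendor or the BSD driver actually does) is to keep at most one
unacknowledged message outstanding to any given adapter and queue the rest
locally until the previous one is known to have been read out:

    from collections import defaultdict, deque

    pending = {}                     # destination adapter -> message in flight
    backlog = defaultdict(deque)     # destination adapter -> locally queued

    def send(dest, msg, transmit):
        if dest in pending:
            backlog[dest].append(msg)            # its one buffer may be full
        else:
            pending[dest] = msg
            transmit(dest, msg)

    def acknowledged(dest, transmit):
        # The destination has read its buffer; release the next message.
        pending.pop(dest, None)
        if backlog[dest]:
            send(dest, backlog[dest].popleft(), transmit)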

END OF DOCUMENT