The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1983)
DOCUMENT: TCP-IP Distribution List for December 1983 (42 messages, 25875 bytes)
NOTICE: This archive recognises the rights of all third-party works.


Date:      Thursday,  1 Dec 1983 09:17-PST
From:      imagen!geof@Shasta
To:        shasta!tcp-ip@sri-nic
Cc:        geof@Shasta
Subject:   Keep-alives & TCP

I have pondered TCP's philosophy w.r.t. keep-alives in the past (a
keep-alive is an N-level message which serves no purpose other than to
ascertain the continued existence of its peer and the communications path to
its peer). I believe that the designers of TCP were justified in their
decision to avoid a keep-alive mechanism.  On the other hand, this is not to
say that keep-alives are unnecessary in a layered architecture; they simply
belong somewhere other than TCP.  Allow me to sermonise...


There are two reasons that I can think of for sending keep-alives:
	1. To avoid wasting resources by early detection of a connection
	   where one side has died.
	2. To allow necessary delays of arbitrary length to be accommodated
	   without the sender timing out.

The first case is the one that is normally referred to by users of TCP.  The
second case is somewhat more subtle.  I believe that most (though not all)
situations of the first case are really only an indication of a situation of
the second case in some higher level of protocol.  I'll get to explaining
this, but first let me explain what I mean by #2, above.

Consider the case of a remote command execution protocol's client.  It opens
a TCP connection to a server, and sends a command to be invoked on the
foreign system.  Then it waits.  When the command completes, the server
sends a message indicating that the command is done.

When the command-execution client is waiting for a success/failure response
from the CE-server, the TCP connection is in a passive or ``happy'' state
(neither side is waiting for the other to send anything). But there is a bug
in the CE protocol.  What if the server dies?  Since the TCP software on the
client machine is ``happy'' it will never alert the CE-client.  Since the
CE-client is waiting for incoming data, it will never wake up.  A one-sided
deadlock results.
The typical solution, which doesn't work, is for the client to set a time
limit for the command to complete.  The value for this timeout is some
function of the availability of resources on the client, and the worst-case
time for the command to complete.  The nature of this solution is that it
causes the client to work some of the time and appear to arbitrarily
determine a failure some of the time (for example, any command that
takes more than ten seconds fails, or any command will fail with 50%
probability during the server's peak hours).

The more sophisticated solution (which, as Murphy would have it, doesn't
work, but fails less) is for the CE-client to set TWO timeouts: one based on
the availability of network resources and another based on the expected time
for the command to complete.  The network resources timer is typically set
to some short interval (for example, the ignominious 10 seconds), but the
command execution timer can be set for some ``worst case'' value (say 10
minutes).  The protocol operates as before, except that when the client's
network resource timer expires, it sends an ``AreYouThere'' query to the
server.  The server must respond with a ``Yes'' without further processing
(specifically, since the intent of the AYT was to determine whether the
Server is running, not the command, the server should NOT translate the AYT
into some wakeup of the command that is running, but rather process it
itself).  The client can base its timeout for the AYT response on the
network delay, which is a value which can be measured with some success.
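In modern terms the two-timer scheme might be sketched like this (a sketch only; the RUN/AYT/YES/DONE message names are invented for illustration and belong to no real command-execution protocol):

```python
import time

class CEClient:
    """Sketch of the two-timer remote-command client described above."""
    NET_TIMEOUT = 10    # seconds: network-resource timer (the ignominious 10 seconds)
    CMD_TIMEOUT = 600   # seconds: worst-case command-completion timer

    def __init__(self, transport):
        # transport: any object with send(msg) and recv(timeout) -> msg or None
        self.transport = transport

    def run(self, command):
        self.transport.send(("RUN", command))
        deadline = time.monotonic() + self.CMD_TIMEOUT
        while time.monotonic() < deadline:
            msg = self.transport.recv(timeout=self.NET_TIMEOUT)
            if msg is None:
                # Network timer expired: probe the server itself, not the command.
                self.transport.send(("AYT",))
                reply = self.transport.recv(timeout=self.NET_TIMEOUT)
                if reply != ("YES",):
                    return ("FAILED", "server unreachable")
                continue  # server alive; keep waiting for the command
            if msg[0] == "DONE":
                return ("OK", msg[1])
        return ("FAILED", "command timer expired")
```

The essential point is that the AYT timeout can be derived from measurable network delay, while the command timer remains a worst-case guess.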

Of course, the above doesn't work, since there is still a fixed timer that
will cause a successfully running remote job to be aborted.  On the other
hand, this situation is entirely analogous to the common hack where an
operating system sets a CPU-time limit on all jobs to prevent an infinite
loop from using up too much time.  Whereas the keep-alive scheme doesn't
always work, it works about as well as local operating systems work, which
is about as well as one can expect.

Note that the TCP software which plays a part in the above scenario has
nothing to say about the keep-alive negotiations.  TCP neither cares about
nor anticipates the need for client-level keep-alives. And the continued
existence of the TCP connection says nothing about the success or failure of
the command-execution transaction.

If you don't believe the above, look at SMTP -- it exhibits this problem
between the time a message has been marked by the client for ``send'' and
the time that the foreign SMTP acks that the message has been sent.  Think
about what happens when the SMTP server must dump the message onto a model
33 teletype before acking it....


Lest you believe that I have forgotten what I'm talking about, let me return
to TCP's keep-alives.  TCP has no situation analogous to that of the
remote-execution protocol client.  A keep-alive scheme in TCP would exist
only to allow network resources to be released faster in some cases.
In most cases, TCP's resource requirements can best be met by relying on a
higher level client which must perform keep alives for the ``other'' reason,
discussed above.  It is always possible to modify higher level protocols to
at least time out idle connections after some arbitrary amount of time (just
as some operating systems time out users who are idle too long -- hey, that
works for telnet too, doesn't it...!).  Still, TCP could help them do some
of the work.

There are situations in which keep-alives cause more harm than good:
	A. when a connection is idle more than active (e.g. telnet)
	B. when the network is tariffed by packet (don't forget that
	   some Internet traffic falls into this category!).
The meaning of (A) is that there is always a per-packet cost in a network,
even if it is not monetary, as in (B).  How much of a burden is it to a
large mainframe to have to wake up every network process every five seconds,
so that each can send a keep-alive response?  In at least some operating
systems (Multics was metered at MIT, for example) this cost is far more
damaging than the long-term existence of dead TCP connections.  Furthermore,
keep alives burden the network as well as the host, and the extra capacity
to deal with them costs (remember Tops-20's and ICMP-pinging?).

If keep-alives sometimes hurt more than help, they should not be used unless
needed.  Since higher level protocols can always accommodate TCP's occasional
need of keep-alives, and since ONLY higher-level protocols can accommodate
their own, different, need of keep-alives, it is prudent to avoid keep
alives as a requirement of TCP.  It would be nice to see them as an option,
although some work would have to be done to make sure that older
implementations don't have to implement the option to work.

I hope that this was all clear, sorry for the length.  The subject actually
can go on for many more pages than this; I finished a Master's thesis on it last
May (``An Argument for Soft Layering of Protocols'' by Geoffrey H. Cooper,
SM Thesis, MIT, available from MIT Lab. for Computer Science, 545 Tech. Sq.,
Cambridge, MA 02139 -- to put in a plug).

- Geof Cooper
  Imagen Corporation
Date:      Thursday,  1 Dec 1983 12:30-PST
From:      imagen!geof@Shasta
To:        shasta!DCP%SCRC-TENEX@MIT-MC, shasta!dcp@mit-mc
Cc:        shasta!tcp-ip@SRI-NIC
Subject:   Re: Philosophy, Consistency question. -- URG & 0 window


Isn't it the case that data can indeed be pushed through during the zero
window state?  Since urgent pointers are a rare item, I had assumed that the
next byte of the data stream would be ``pushed through'' the zero window in
order to get the URG through (there is guaranteed to be data in the pipe,
since you have to send some byte to have it marked urgent).

Maybe someone out in TCP-IP land can clarify this for me.

- Geof Cooper

PS: excuse 2 copies if you get 2 -- I'm not sure that my mailer knows about
    %'d hosts. - geof
Date:      Thu 1 Dec 83 12:42:04-PST
From:      Ken Harrenstien <KLH@SRI-NIC>
To:        don.provan@CMU-CS-A, tcp-ip@SRI-NIC
Cc:        KLH@SRI-NIC
Subject:   Re: retransmitting lossage
Perhaps I am missing something, but I don't understand your example.
You seem to be assuming that the ACKs for A1 and B1 will not be seen,
because they carry a sequence number which hasn't yet been reached
and thus will be dropped on the floor before the ACK field is examined.

This isn't true.  The sequence-number test checks to see if the incoming
segment has a sequence number WITHIN THE RECEIVE WINDOW.  If the segment
isn't in the window it should never have been sent in the first place.
Thus, A2 and B2 are presumably in the window, their ACKs will be processed,
and the respective TCPs will correctly determine that A1 and B1 have been
ACKed and can be removed from the retransmit queue.
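The acceptability test Ken refers to is the one on page 69 of RFC 793. In outline, using plain integers and ignoring 32-bit sequence-number wraparound (a simplification), it is:

```python
def segment_acceptable(seg_seq, seg_len, rcv_nxt, rcv_wnd):
    """RFC 793 page-69 test: is any part of the segment within the
    receive window [rcv_nxt, rcv_nxt + rcv_wnd)?  Wraparound omitted."""
    if seg_len == 0 and rcv_wnd == 0:
        return seg_seq == rcv_nxt
    if seg_len == 0:
        return rcv_nxt <= seg_seq < rcv_nxt + rcv_wnd
    if rcv_wnd == 0:
        return False   # though valid ACKs, URGs, and RSTs still merit attention
    first_ok = rcv_nxt <= seg_seq < rcv_nxt + rcv_wnd
    last_ok = rcv_nxt <= seg_seq + seg_len - 1 < rcv_nxt + rcv_wnd
    return first_ok or last_ok
```

A segment that merely acknowledges new data but starts inside the window (like A2 and B2 above) passes this test, so its ACK field gets processed.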

There is a funny little phrase on page 69 to the effect that "If the
RCV.WND is zero, no segments will be acceptable, but special allowance
should be made to accept valid ACKs, URGs, and RSTs."  Perhaps you
ran afoul of this.  Also, the phrase on page 70, "held for later
processing", can be misleading; the ACK fields of queued segments should
definitely be "processed" immediately!

Date:      1 Dec 83 1301 EST (Thursday)
From:      don.provan@CMU-CS-A
To:        tcp-ip@SRI-NIC
Cc:        klh@mit-mc
Subject:   retransmitting lossage
reading klh;.tcp qs reminded me of trouble i had when i retransmitted
only the first element in the retransmit queue.  the fact is that this
can lead to the tcp connection being entirely wedged.  the scenario is
as follows: there's a tcp connection with data flowing in both directions.
(a telnet connection with the remote site echoing has the timing just
right.)  both sides, call them A and B, send two packets each, call
them A1 A2 and B1 B2.  BOTH A2 and B2 are lost in transit.  (yes, Virginia,
some arpa connections ARE that bad.  i had to debug TCP on one of them.)
now A1 and B1 arrive.  A and B both send back ACKs with the correct
sequence number, which is the number AFTER A2 and B2.  both ACKs
are ignored (either discarded or queued) because that sequence
number hasn't been reached yet (both A and B are still waiting for
A2 and B2 to appear).  A and B can send A1 and B1 to each other
all day long, but unless one of them retransmits the second segment,
the connection is wedged.

there are a few solutions to this problem.  one is to always
retransmit each packet as an individual unit, rather than
treating the entire queue as a single unit.  another is to
retransmit bytes rather than packets.  in this case, the first
piece of data (A1, for example, which B has already received)
is trimmed off and the needed data (A2) is received.  there's
no particular reason to think this scenario can't happen with
packets of the maximum size, however.  i think a third way is
for TCP to require immediate processing of zero length packets even
if the sequence number is in the future.  i don't think this causes
any trouble, since there's no data in the packet that can get
out of order.  unless i'm missing something, the sequence number
in a zero length packet is useless anyway, particularly in light
of the recent discussion of the illegality of zero length packets
with push or urg set.  perhaps the sequence number
should always be ignored in a zero length packet.  there's probably
a reason that this isn't a good idea, but i can't think of it.
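The second solution, retransmitting bytes rather than packets, can be sketched as follows (a simplification with hypothetical names; a real TCP would work from SND.UNA and its send buffer):

```python
def retransmit_from(una, buffer_start, buffer, send, mss=512):
    """Byte-oriented retransmission: trim already-ACKed bytes, resend the rest.

    una is the oldest unacknowledged sequence number; buffer holds the bytes
    beginning at sequence number buffer_start.  Trimming the acknowledged
    prefix means a peer that already has A1 still gets A2 in the resent data,
    so the crossed-loss scenario above cannot wedge the connection.
    """
    data = buffer[una - buffer_start:]       # drop the acknowledged prefix
    segments = []
    for off in range(0, len(data), mss):
        seg = (una + off, data[off:off + mss])
        segments.append(seg)
        send(seg)
    return segments
```

As don notes, this only helps if the repacketized segments are not themselves at the maximum size already.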

Date:      1 Dec 83 1908 EST (Thursday)
From:      don.provan@CMU-CS-A
To:        Ken Harrenstien <KLH@SRI-NIC>
Cc:        tcp-ip@SRI-NIC
Subject:   Re: retransmitting lossage
do i have an old RFC793 or something?  on page 69 i have "segments
are processed in sequence."  it doesn't say anything about "except
the ACK part, which should be processed immediately."  on page 70 i
have "In the following it is assumed that the segment is the
idealized segment that begins at RCV.NXT....Segments with higher
begining [sic] sequence numbers may be held for later processing."
that's at the very end of "first check sequence number", so "fifth
check the ACK field" on page 72 seems to clearly be "In the

i agree that it appears to be a good idea to always process ACKs even
in future messages, but that's not what the bible tells me to do.  i
wasn't in on all the development of TCP, so for all i know, there's
an excellent reason not to do so.


my confusion here seems to come from the use of "segment" as
"packet" (as in "segment arrives") and as "segment text" (in the two
cases i just cited?).  i guess divorcing the data from the ACK and
URG data makes sense.  on the other hand, divorcing it from the PUSH
bit (which, just like the others, does not take up space in the data
stream) makes no sense.  i claim that the spec is confusing at least,
if not ambiguous.   what do the gods say?
Date:      Thu 1 Dec 83 23:06:55-PST
From:      Mathis@SRI-KL.ARPA
To:        DCP@MIT-MC.ARPA
Cc:        don.provan@CMU-CS-A.ARPA, KLH@SRI-NIC.ARPA, tcp-ip@SRI-NIC.ARPA, Mathis@SRI-KL.ARPA
Subject:   Re: retransmitting lossage

When to process window information is also well-specified (or it was 
in the n-m th version of the spec).  Basically you want to always have
the "most recent" window information where most recent is from the point
of view of the sender.  Thus you use the sequence number and ACK number
to "ratchet" the window information; you don't want to take window
information from an old, long-delayed packet.
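The rule Mathis describes appears in RFC 793 as the SND.WL1/SND.WL2 test: take window information only from a segment at least as recent (from the sender's point of view) as the last one used. In outline, with wraparound again ignored:

```python
class Sender:
    """Ratchet window updates so an old, delayed packet can't change SND.WND."""
    def __init__(self):
        self.snd_wnd = 0
        self.snd_wl1 = -1   # seg.seq of the last segment used to update the window
        self.snd_wl2 = -1   # seg.ack of the last segment used to update the window

    def maybe_update_window(self, seg_seq, seg_ack, seg_wnd):
        # RFC 793: update if SND.WL1 < SEG.SEQ, or
        # SND.WL1 = SEG.SEQ and SND.WL2 =< SEG.ACK (wraparound omitted here).
        if self.snd_wl1 < seg_seq or (self.snd_wl1 == seg_seq
                                      and self.snd_wl2 <= seg_ack):
            self.snd_wnd = seg_wnd
            self.snd_wl1 = seg_seq
            self.snd_wl2 = seg_ack
            return True
        return False
```
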
Date:      1 December 1983 21:25 EST
From:      David C. Plummer <DCP @ MIT-MC>
To:        don.provan @ CMU-CS-A
Cc:        KLH @ SRI-NIC, tcp-ip @ SRI-NIC
Subject:   Re: retransmitting lossage
    Received: from CMU-CS-A by SRI-NIC with TCP; Thu 1 Dec 83 16:15:21-PST
    Received: from [] by CMU-CS-PT with CMUFTP;  1 Dec 83 18:56:42 EST
    Date:  1 Dec 83 1908 EST (Thursday)
    From: don.provan@CMU-CS-A
    In-Reply-To: "Ken Harrenstien's message of 1 Dec 83 15:42-EST"

    do i have an old RFC793 or something?  on page 69 i have "segments
    are processed in sequence."  
I think that means "TCP presents the segments to the client in
order."  All segments contain useful information.  This useful
information should always be processed.  

Date:      1 Dec 83 2147 EST (Thursday)
From:      don.provan@CMU-CS-A
To:        David C. Plummer <DCP@MIT-MC>
Cc:        don.provan@CMU-CS-A, KLH@SRI-NIC, tcp-ip@SRI-NIC
Subject:   Re: retransmitting lossage
yes, but WHICH information?  ACKs, ok.  how about URGs?  and PUSHes?
obviously not FINs.  what about the window info?

i agree that's probably what it should say, but i don't think
there's anyway to read that from what's actually there.  after all, it
does use the word "processed."
Date:      1 December 1983 22:09 EST
From:      David C. Plummer <DCP @ MIT-MC>
To:        imagen!geof @ SU-SHASTA
Cc:        tcp-ip @ SRI-NIC, geof @ SU-SHASTA
Subject:   Keep-alives & TCP
    The meaning of (A) is that there is always a per-packet cost in a network,
    even if it is not monetary, as in (B).  How much of a burden is it to a
    large mainframe to have to wake up every network process every five seconds,

Now wait a minute.  I gave the 5 second number as an example in
the Chaosnet.  Normally it is once per minute because the SNS
usually gets through and the reply STS does also.  For TCP (if it
had such a thing) I would probably suggest something like an
initial delay of 200 to 500 round trip times with an interval of
20 to 50 round trip times.  If 5 in a row are ignored, abandon the
connection.
I guess I don't share several people's view of clients.  I do not
see keep-alives as part of a client's contract.  Clients have a
job to do, no matter how long it takes them.  I don't like the
idea of forcing a timer-based interrupt mechanism for doing
keep-alives as being part of a client's duty.  TCP and most other
transport layers that have any form of timed retransmission have
provisions for timer processing.  (And yes, I know this loses if
the client dies irrecoverably.  But what if it dies but somebody
can make it recoverable, as I said in my initial query?)

Date:      1 December 1983 22:15 EST
From:      David C. Plummer <DCP @ MIT-MC>
To:        don.provan @ CMU-CS-A
Cc:        DCP @ MIT-MC, KLH @ SRI-NIC, tcp-ip @ SRI-NIC
Subject:   Re: retransmitting lossage
    Received: from CMU-CS-A by SRI-NIC with TCP; Thu 1 Dec 83 18:53:00-PST
    Received: from [] by CMU-CS-PT with CMUFTP;  1 Dec 83 21:37:00 EST
    Date:  1 Dec 83 2147 EST (Thursday)
    From: don.provan@CMU-CS-A
    In-Reply-To: "David C. Plummer's message of 1 Dec 83 21:25-EST"

    yes, but WHICH information?  ACKs, ok.  how about URGs?  and PUSHes?
    obviously not FINs.  what about the window info?

Everything except FINs, because everything is valid (including
FINs).  The difference with FIN is that it causes a state
transition which might interfere with out of order segments.
Therefore, I think the *correct* place to "test the FIN bit" is
when the segment with the FIN gets added to the TCBs in-order
queue.  If the FIN is out of order, it will either get dropped or
put on an out-of-order queue, depending on implementation.  When
the gaps are filled in, it is moved to in-order and the state
transition happens then.
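DCP's placement of the FIN test can be sketched with a simplified reassembly structure (hypothetical names; a real TCP also counts the FIN itself as one sequence number, which is omitted here for brevity):

```python
class Reassembler:
    """Deliver segments in order; act on FIN only once its segment is in order."""
    def __init__(self, rcv_nxt=0):
        self.rcv_nxt = rcv_nxt
        self.out_of_order = {}   # seq -> (data, fin)
        self.delivered = b""
        self.fin_seen = False    # the state transition happens only here

    def receive(self, seq, data, fin=False):
        self.out_of_order[seq] = (data, fin)
        # Move everything that is now contiguous onto the in-order stream.
        while self.rcv_nxt in self.out_of_order:
            data, fin = self.out_of_order.pop(self.rcv_nxt)
            self.delivered += data
            self.rcv_nxt += len(data)
            if fin:
                # Test the FIN bit here, when the segment goes in-order,
                # not when it first arrives out of order.
                self.fin_seen = True
```

An out-of-order FIN thus sits harmlessly in the queue until the gaps are filled.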

Date:      4 Dec 1983 11:32-PST
From:      CERF@USC-ISI
Cc:        TCP-IP@SRI-NIC
Subject:   Re: Netmail Spreads Common Cold


Date:      4 Dec 1983 12:05-PST
From:      CERF@USC-ISI
To:        Mathis@SRI-KL
Cc:        DCP@MIT-MC, don.provan@CMU-CS-A, KLH@SRI-NIC, tcp-ip@SRI-NIC
Subject:   Re: retransmitting lossage



Date:      7 Dec 1983  9:44:22 EST (Wednesday)
From:      Bob Hinden <hinden@BBN-UNIX>
To:        gateway-info@BBN-UNIX, tcp-ip@nic, tcp-ip@brl
Subject:   Gateway SIG Meeting

	   Internet Gateway Special Interest Group Meeting

		      February 28 and 29, 1984

			       held at

		    Information Sciences Institute
		     Marina Del Rey, California

  Announcing the first annual  Internet  Gateway  Special  Interest
  Group  meeting  to  be  held at the Information Sciences Institute,
  Marina Del Rey, California.  The purpose of the SIG  is  to  have
  gateway  designers  and implementers meet to describe the gateway
  they are developing and to discuss new ideas and common problems.
  Potential topics for discussion include:

		     Internet Routing
		     Exterior Gateway Protocol
		     Gateway Flow Control
		     High Speed Gateways
		     Translation Gateways
		     Smart versus Dumb Gateways

  If you are interested in attending the meeting please  send  your
  request to:

		      Robert Hinden
		      BBN Communications Corp.
		      50 Moulton Street
		      Cambridge, MA 02238


  Your request should describe who you are,  the  gateway  you  are
  working  on, and what you plan to talk about at the meeting.  All
  attendees will be required to give a talk on the gateway they are
  developing or another relevant topic.

  In order to keep the meeting to a workable size, attendance  will
  be limited to people working directly with gateways.

Date:      7 Dec 83 13:55 EST
From:      Bob Archer <archer@nswc-wo>
To:        tcp-ip@sri-nic
Cc:        archer@nswc-wo
Subject:   Mailing List

Please take me off the TCP-IP mailing list.


Bob Archer
Date:      Fri, 9 Dec 83 11:38:52 CST
From:      Don Johnson <dhj@RICE>
To:        tcp-ip@sri-nic
Please take me off the TCP-IP mailing list

Don Johnson
Date:      Fri, 9 Dec 83 18:39 EST
From:      "David C. Plummer" <DCP%SCRC-TENEX@MIT-MC.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP/IP in Berkeley Unix 4.2
To what person, persons or mailing list should I send comments about the
4.2BSD TCP/IP implementation?  For example, I think I am seeing 2
uncalled-for zero length segments for every 3 sent on the Ethernet.

Date:      12 Dec 1983 13:27-PST
From:      Bob Gilligan <Gilligan at ISID>
Cc:        tcp-ip@SRI-NIC
Subject:   Re: TCP/IP in Berkeley Unix 4.2
	4.2 BSD includes a "sendbug" program which provides you with
a bug report form to fill out in vi, then mails your bug report off
to 4bsd-bugs@berkeley.
Date:      12 December 1983 17:13 EST
From:      dca-pgs @ DDN1
To:        tcp-ip @ brl, tcp-ip @ sri-nic
Cc:        dca-pgs @ DDN1
Subject:   VAX Internet Interface; TTY Load Problems

Date: December 12, 1983
Re: Compion (now Gould) Access-T Arpanet/Milnet Interface

We here at the DDN/PMO have been getting reports to the effect
that the Compion Access-T interface package for VAX can only
support 10 network tty connections max. This is a serious
problem; for the typical military VAX user, it renders Access-T
all but useless as a network interface.

Wanted to find out who among the readership was running Access-T,
and what your experience with it has been. We are going to try
to set up a meeting with Compion on the subject, and intend to
invite all the Access-T users, so they will have a collective
forum in which to make their feelings known.

Pat Sullivan
Defense Communications Agency
Defense Data Network Program Management Office
Subscriber Integration Branch, Code B615
Washington, DC 20305

Date:      13 Dec 1983 0530-PST
From:      Howard S. Weiss <dedwards@USC-ISI>
To:        Michael John Muuss <mike@BRL-VGR>
Cc:        tcp-ip@SRI-NIC, mike@BRL, Postmaster@WASHINGTON, Feedback@OFFICE-3
Subject:   Re: Interesting PING results
I believe that I read in some message to tcp-ip that Tymshare still
had some unresolved bugs in their tcp/ip that were still being
tracked down.  Don't remember exactly what their problems were, but 
it would somewhat explain the service outage to the Office-3 host.
Date:      Tue, 13 Dec 83 6:52:35 EST
From:      Michael John Muuss <mike@brl-vgr>
To:        tcp-ip@sri-nic
Cc:        mike@brl, Postmaster@washington, Feedback@office-3
Subject:   Interesting PING results
This evening, after fixing IPPROTO_ICMP service in our 4.2 BSD
UNIX system (with all due thanks to Robert Scheifler at MIT-BOLD
for the majority of the code), I settled down to write a PING
program for UNIX (based loosely on the one that Robert provided).

(Code available to anybody for the asking.)

Having created such a toy, I decided to run some tests against hosts
with which we have, at one time or another, experienced difficulties.

Briefly, here is our configuration:

BRL-VGR ..... BRL-GATEWAY ..... IMP .... IMP ..... MILDCEC ..... IMP ..... XXX
                                #29      #104                 ARPA #20

All datagrams used for these tests were 64 bytes long, including IP header,
and a new datagram was transmitted each second to each destination.
The ID field was used to distinguish the replies, and the SEQUENCE field
was given an ascending number.  The first 8 bytes of the data portion of
the datagram contained a UNIX timeval struct, giving the time of day
in microseconds (alas, derived from a 16.6 ms period clock [60hz]).
All datagrams sent were ICMP/ECHO requests.
BRL-VGR was lightly loaded (load average of ~0.3), and traffic on
the BRL-GATEWAY was light.
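Datagrams of the kind described, ICMP echo requests with the ID and SEQUENCE fields used to match replies and a timestamp in the first 8 data bytes, can be assembled along these lines. This is a sketch following RFC 792's field layout, not Mike's actual program:

```python
import struct
import time

def icmp_checksum(data):
    """Internet checksum: one's-complement sum of 16-bit words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    s = (s >> 16) + (s & 0xFFFF)
    s += s >> 16
    return ~s & 0xFFFF

def build_echo_request(ident, seq, now=None):
    """ICMP type 8 (echo request) carrying a seconds/microseconds timestamp,
    much like the UNIX timeval struct in the first 8 data bytes."""
    now = time.time() if now is None else now
    payload = struct.pack("!II", int(now), int(now % 1 * 1_000_000))
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field 0 for now
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

Recomputing the checksum over the finished packet yields zero, which is the usual way a receiver validates it.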

The first test worthy of note concerned ARPANET host WASHINGTON.
To permit me to measure packet loss and response at various points in the
InterNet, I ran 4 sets of PING programs **at the same time**.

BRL-VGR to  (ARPA side of MILDCEC)
BRL-VGR to  (ARPA interface of UCB-VAX)
BRL-VGR to  (ARPA interface of WASHINGTON)

(My most recent network map shows BERKELEY and WASHINGTON as having
their IMPs directly connected;  I don't know if this condition still
existed at the time of my test [12-13-83 at 0530].  Hence the choice
of BERKELEY for a "control" machine on the West Coast.  BRL is in Maryland.)

Here are my results:

Dest		% packet loss	MinTim	AvgTim	MaxTim
----------	-------------	------	------	------
		  19%		 180	 213	  790
		  20%		 150	 214	  800
BERKELEY	  20%		 310	 597	  730
WASHINGTON	  65%		 310	3253	15890

Times are in milliseconds.

Additional data:  The following round-trip delays were noticed
for WASHINGTON, immediately followed by about 20 seconds of
dropped packets:  15870, 14910, 12330, 11530, 15890, 15000, 15730 ms.

This behavior strikes me as being caused by some queue not being serviced
for LONG periods of time;  the software is probably clever enough to
start discarding packets once the queue reaches a preset maximum length
(or a certain number of packet buffers are occupied).

In any case, both the BRL-GATEWAY and MILNET/ARPA gateway and even the
ARPANET cross-country trunking can be ruled out as the source of this
phenomenon, due to the quite good results achieved to BERKELEY.

- - - - - - - - - - - - - -

The second test I ran was with host OFFICE-3, one of the Tymshare machines.
Performance of the control machines was nearly identical:

Dest		% packet loss	MinTim	AvgTim	MaxTim
----------	-------------	------	------	------
		  20%		 180	 207	  790
		  20%		 150	 210	  800
BERKELEY	  20%		 310	 600	  730
OFFICE-3	  19%		 360	 405	 1040

Not bad performance, looked at with averages.  However, watching the
individual packets returning, I noticed the following interesting fact:
every 15 seconds (+/- 3 seconds), there would be 4 seconds of packet
loss.  The pattern persisted for the 5 minutes or so that I watched.

Oddly, this correlates surprisingly well with the observations
of various users, who have commented that TAC-access to Office-3
suffers from occasional brief "pauses" fairly regularly.

- - - - - - - - - - - - - -

I have no real idea what the source of either of these behaviors might
be, but decided to share these results with everybody, rather than
ignore them.
			 -Mike Muuss
Date:      Tue, 13 Dec 83  8:45:34 EST
From:      Andrew Malis <malis@BBN-UNIX>
To:        Michael John Muuss <mike@brl-vgr>
Cc:        tcp-ip@sri-nic, mike@brl, Postmaster@washington, Feedback@office-3, malis@BBN-UNIX
Subject:   Re: Interesting PING results

Just as a point of information, the Berkeley and Washington IMPs
are still neighbors, so Berkeley was a good "control" machine to use.


Date:      Tue 13 Dec 83 15:41:56-PST
From:      Bob Albrightson <BOB@WASHINGTON.ARPA>
To:        mike@BRL-VGR.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, feedback@OFFICE-3.ARPA, postmaster@WASHINGTON.ARPA
Subject:   PING
You might want to try your PING test again to Washington.  We have for
the last week or so had a flakey connection on one of our lines going
to uwisc (which now goes to utah).  It seems to be working reliably,
for the short period of time it has been up (about 1 hour).

Date:      Tue, 13 Dec 83 15:18 EST
From:      Dennis Rockwell <drockwel@BBN-VAX>
To:        Michael John Muuss <mike@brl-vgr>, tcp-ip@sri-nic
Cc:        Postmaster@washington, Feedback@office-3
Subject:   Re: Interesting PING results
This 15-second burst of lost packets sounds a *lot* like OFFICE-3 is
still pinging some largish set of gateways, all the time.  The IMP
immediately runs out of connection blocks and all traffic is blocked.
This, of course, would also cause the bursty TAC behavior.

Good experiment, Mike!

Date:      Wed, 14 Dec 83 7:19:14 EST
From:      Michael John Muuss <mike@brl-vgr>
To:        tcp-ip@sri-nic
Cc:        Gurus@brl
Subject:   More PING Results
Prompted by a flurry of messages concerning the PING results I posted
yesterday, I decided to do a little more investigation.

First, Jon Postel and others had commented to me that the pretty
constant 20% packet loss was a sure sign of a bad problem somewhere.
So, I did some digging, and found that the "drop 4 packets every 15
seconds" problem didn't occur only with office-3, but also with
lots of other hosts, including my own interface (!!).

Please accept my humble apology for misleading everybody;  that interesting
result "popped up" just before I was finished testing.  Obviously, it
had been present in all the data, but I was not looking so hard there.

The difficulty turned out to be a small defect in the 4.2 BSD kernel
(netinet/ip_icmp.c);  I have posted a bug fix to the Unix-Wizards list
which cures the problem.  (Turns out that a few lines of ICMP_REDIRECT
code were looking at the sequence numbers of ICMP_ECHO packets, and
occasionally snatching a few.  Sigh).

Fixing the bug prompted me to re-run all my pinging tests, plus a few
more which were requested by various people, plus a test of SATNET
just for kicks.

The tests were run from BRL-TGR, a VAX 11/780 running 4.2 BSD,
with all datagrams 64 bytes long (including IP header).  All packets were
ICMP_ECHO (request) messages.  The tests took place on 12/14/83, from
0530 to 0630.  The hardware configuration was:

BRL-TGR .... BRL-GATEWAY .... IMP .... IMP .... MILDCEC .... IMP .... XXX
                              #29     #104                ARPA #20
			     BRL-TAC

Here is the data, in reasonable raw form (times in ms):
Separate test runs are separated by a blank line.

Target			%lost	min	avg	max
=========		=====	===	===	===
brl-tgr			0	10	10	10
brl-gateway		0	10	10	20
brl-tac			0	50	89	930

mildcec (net 26)	1%	180	185	200
mildcec (net 10)	1%	340	365	880

mildcec (net 26)	1%	180	195	360
mildcec (net 10)	1%	340	361	680
berkeley		1%	580	1030	15690	<--
washington		6%	580	2034	27420	<-- see notes below

mildcec (net 26)	0%	180	239	600
mildcec (net 10)	1%	350	410	670
berkeley		1%	590	652	970
rice 			5%	1540	1791	2510	Via VAN-GW & TELENET

mildcec (net 26)	0%	180	201	370
mildcec (net 10)	1%	340	367	540
berkeley		29%	590	614	900	29% ?
lll-mfe			0%	320	354	670

mildcec (net 26)	0%	180	 196	  340
mildcec (net 10)	0%	340	 366	  910
berkeley		4%	570	1625	15700	15 seconds??
office-3		0%	360	 403	 1020	All OK here

dcec-gw (	0%	 530	 648	1270
ucl-gw (	2%	1980	2532	4230	England via SATNET
ntare-gw (	2%	2290	2721	3330	Norway via SATNET

van-gw (	1%	460	506	620	(run at 0715)
van-gw (	0%	480	520	660


1)  The test involving host WASHINGTON.  I got a message from somebody out
there who said that one of their IMP trunks kept flickering in and out
last night while I was testing, and asked me to test again.
BBN also informed me that BERKELEY and WASHINGTON are IMP-neighbors.
The results I present here are better, but still show some remarkably
odd behavior.

While watching the ECHO_RESPONSEs coming back, there was one occasion
where a period of 27 seconds elapsed without ANY responses coming
back from WASHINGTON, then 14 returned all at once, and the 13 after that
were totally lost.  On another such occasion, 5 seconds elapsed, and
then all 5 popped out all at once.

This same behavior happened once to the pings to BERKELEY, with a
15.6 second delay before all the packets came pouring in.

I wonder what happens to IMP throughput when the state of one of its
trunks changes?

2)  The folks at RICE wrote me, and asked for some tests to be run
on their machine, which they said is reached via the VAN-Gateway at BBN,
then via TELENET, and finally via a 9600 baud (X.25) access line to their
facility.  The round trip times (1540/1791/2510) are pretty lengthy
(approaching satellite delays rather than terrestrial circuit delays),
especially when you consider the timings to the VAN-GW itself (max 660 ms).

3)  OFFICE-3's behavior was completely smooth this time.  I'll try to
run some tests on them during the day, knowing that they are in fine
shape in the middle of the night.

4)  The SATNET timings are for curiosity;  I have no idea what the loading
on SATNET was, but I was wondering how it would come out.  DFVLR-GW,
FUCINO-IG, and CLARKSBURG-IG were not answering.


For those of you who requested copies of the PING program, I will begin
mailing them out tomorrow, now that I feel that the program and the kernel
(after fixing) provide reasonable, believable results.

Date:      Wednesday, 14 Dec 1983 10:49-PST
From:      geof@Shasta
To:        shasta!BOB@WASHINGTON
Cc:        shasta!tcp-ip@SRI-NIC.ARPA, shasta!berlin@mit-xx, geof@Shasta
Subject:   Re: PING performance with WASHINGTON

While Dave Reed and I were both still at MIT, I remember that he found a bug
in the Tops-20 IP code.  I don't remember the exact bug (although it is
doubtless documented in the sources for Tops-20 at MIT, and someone there
can probably find it for you), but I think it could have caused the strange
performance with Washington that Mike Muuss mentioned in his recent message
on ICMP Pinging.

The bug was in a kernel process' command loop.  As I remember it, the loop
went something like (translated to PDP-10 assembly language...):
	until <I am awoken by the packet enqueuer> sleep
	while <there are more packets>
	    dequeue packet
	    loop to while
	loop to forever.
The bug was that the ``loop to forever'' instruction was interchanged with
the ``loop to while'' instruction.  The result was that the Internet code
worked properly most of the time, unless two packets got put on the queue
before this process noticed the first packet.  In this case it would take
off only the first packet, and leave the other(s) hanging for some time.  I
think that there was some other kind of signal that allowed the packets to
eventually trickle through, but clumps of packets tended to bunch up in the
process and get stuck there for a long time (once I got a UDP echo-response
returned to me after over 5 minutes!).
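The loop-nesting bug described above can be sketched in modern code (a reconstruction for illustration only, not the actual Tops-20 source; `drain_buggy` models the swapped jump):

```python
from collections import deque

def drain_correct(queue):
    """Intended loop: on each wakeup, dequeue until the queue is
    empty ("while <there are more packets>"), then go back to sleep."""
    handled = []
    while queue:
        handled.append(queue.popleft())
    return handled

def drain_buggy(queue):
    """Swapped jump: after dequeuing ONE packet, control returns to
    the sleep, stranding the rest until another signal arrives."""
    if queue:
        return [queue.popleft()]
    return []

q1, q2 = deque([1, 2, 3]), deque([1, 2, 3])
print(drain_correct(q1), list(q1))  # -> [1, 2, 3] []
print(drain_buggy(q2), list(q2))    # -> [1] [2, 3]
```

With the buggy ordering, any packets enqueued before the process wakes pile up behind the first one, which matches the stop-then-burst behavior reported against WASHINGTON.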

I bring this all up because it is possible that this bug fix never made it
out to WASHINGTON.  If so, it could possibly explain the performance seen,
whereby all packet traffic stopped for a long while, and then 14 packets
came back all at once.

- Geof Cooper

Date:      Wed, 14 Dec 83  9:19:13 EST
From:      Andrew Malis <malis@BBN-UNIX>
To:        Michael John Muuss <mike@brl-vgr>
Cc:        tcp-ip@sri-nic, Gurus@brl, malis@BBN-UNIX
Subject:   Re: More PING Results

Your results at Berkeley and Washington this morning, as reported
in your message, sounded like they could have been caused by IMP
blockage, so I checked the ARPANET/MILNET log to see if there was
anything "interesting" going on this morning during the period of
your tests.  Unfortunately, I don't have much to report.

Whenever the IMPs block a host for 15 seconds, they send a "trap"
to the NOC, which is then printed out on the network log.  They
also trap if a host is held up for more than three seconds while
the IMP is waiting for some resource that is needed to send a
particular message.  This morning, between 5:30 and 6:30 EST, the
only traps of interest were being generated by the BERKELEY IMP
with regards to their UCB-VAX host (10.2..78).  For some reason,
it seemed to be sending a lot of multi-packet traffic both to
itself and to other hosts around the net.  Berkeley has a small
(32K) IMP, with the result that during the hour, it was blocked
for 15 seconds while trying to reserve destination multi-packet
buffering at its own IMP for a total of 19 times (it also blocked
on traffic heading elsewhere as well, but that is less relevant).

However, while this shows that there was a lot of traffic going
through the BERKELEY IMP during this period, it doesn't otherwise
apply to your results.  You were sending single-packet traffic,
which we optimize to "scoot" through even when multi-packet
traffic may be blocked, and neither the Berkeley nor BRL-Gateway
hosts were blocked at any time during your tests.  The Washington
host was also not blocked at any time during the hour.

The only other interesting thing going on in the network was that
one line, TEXAS-GUNTER, was bouncing up and down during the hour.
However, it is really doubtful that any of your traffic travelled
along that path - the northern route via WISCONSIN is by far the
shortest between DCEC (IMP 20) and WASHINGTON & BERKELEY.  Also,
bad things really don't happen when lines go down, unless an IMP
becomes isolated as a result, or it happens to be a cross-country
(or otherwise heavily loaded) trunk that died.  Nothing is lost -
the IMPs simply re-route any store-and-forward traffic to another
live trunk.  However, a major trunk going down means that other
trunks now become more heavily loaded, and some traffic that was
queued to go over that line just before it died may have to
backtrack quite a bit to eventually get to its destination.

So, I'm afraid I can't help you much in explaining your results
this morning at Washington and Berkeley from the network's point
of view.


Date:      Wed 14 Dec 83 15:14:47-PST
From:      Richard Furuta <Furuta@WASHINGTON.ARPA>
To:        malis@BBN-UNIX.ARPA, mike@BRL-VGR.ARPA
Subject:   Re: More PING Results
My impression was that we no longer have a WISCONSIN/WASHINGTON line
into our imp.  I believe that this connection has been replaced with a
UTAH/WASHINGTON line.  The UTAH/WASHINGTON link is the one that was
causing us difficulty in the past, as noted in Bob's previous message.
Is it possible that the TEXAS/GUNTER difficulties you mentioned could
be related?

Date:      14 Dec 83 12:18:51 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        mike@BRL-VGR.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, Gurus@BRL.ARPA
Subject:   Re: More PING Results
I think Washington is the only Tops-20 host used in your tests.
Tops-20 is still pinging gateways on a regular basis.  The pings
could possibly cause the sort of periodic lossage you are seeing.
It might be interesting to try some other Tops-20 hosts.  We
PING only 7 gateways.  I don't know what other Tops-20 sites
are doing these days.

Date:      Thu, 15 Dec 83  6:45:31 EST
From:      Andrew Malis <malis@BBN-UNIX>
To:        Richard Furuta <Furuta@WASHINGTON.ARPA>
Subject:   Re: More PING Results

Not really.  Even with that line now connected to UTAH, the
northern route is by far the best for Mike's pings to take.  And
even if the packets were to go via the southern route, there are
enough alternate paths to take in the TEXAS and GUNTER area that
the one flakey line shouldn't have either caused the reported
delays or burstiness.

Date:      15-Dec-83 19:33 PST
From:      Robert N. Lieberman  <RLL.TYM@OFFICE-2>
To:        Michael John Muuss <mike@brl-vgr>
Cc:        tcp-ip@sri-nic, mike@brl, Postmaster@washington, Feedback@office-3
Subject:   Re: Interesting PING results
Thanks  Mike.  Yes, we still have problems which no one can find.  BBN now has 
some sort of trace program in the IMP and we are hoping that a closer look at 
the packets between IMP and Host will tell us something.  Your 15 second outage 
is a new fact that we will certainly look into (wonder if there is some 15 sec 
timeout somewhere or just a mean time between evoking a bug)

I will treat anyone who provides a clue to the basic problem to a dinner for two
at the best restaurant in your own city.  Also, if anyone wants to solve the 
problem, contact me for a lucrative consulting arrangement.


Date:      15-Dec-83 19:36 PST
From:      Robert N. Lieberman  <RLL.TYM@OFFICE-2>
To:        Dennis Rockwell <drockwel@BBN-VAX>
Cc:        Michael John Muuss <mike@brl-vgr>, tcp-ip@sri-nic, Postmaster@washington, Feedback@office-3
Subject:   Re: Interesting PING results
We have totally disabled all pinging of gateways for many many months.  Maybe 
someone else is pinging us (for reasons totally unknown).  Maybe someone is trying to
send us mail every 15 sec???  We are checking all this out... thanks for the
response.  Robert

Date:      20 Dec 1983 0034 PST
From:      Eric P. Scott <EPS@JPL-VAX>
To:        TCP-IP@SRI-NIC
Cc:        Solomon@CSnet-SH,Postel@USC-ISIF
Subject:   (Negative) feedback on RFC 884

I'm terribly disappointed to see Telnet Terminal Type Option
circulated as a standard in the form presented in RFC 884.  While
I feel some sympathy for the desire to communicate (propagate)
terminal type information across a telnet connection, attempting
to do this in an "out of band" fashion seems a little misguided;
attempting to set one standard for the entire Internet borders on
the unreasonable!

Who gets to play God?

You specify that "the current list of valid terminal types will
be found in the latest `assigned numbers' RFC."  I trust that
this really means "refer to REGISTERED-TERMINAL-TYPES.TXT at your
friendly neighborhood NIC."  Okay, who decides what the "proper"
name for a terminal is?  A Unix site is inevitably going to have
to translate RFC 884 names to termcap names; you could save an
enormous number of sites a lot of grief by simultaneously
adopting termcap as an Internet standard.  This is of course a poor
argument; Unix sites can smugly flush telnet in favor of rlogin
which dispenses with option negotiations and just happens to
propagate terminal type information.  I would be a lot happier if
RFC 884 read "the interpretation of terminal types is determined
by each individual server host."  But even if we all agree that
BRAND-X refers to the same manufacturer, model, and firmware
revision level, which mutually incompatible user-selectable modes
shall we declare "right?"  (I don't want to think about what to
do when Your Terminal isn't a terminal per se but a personal
computer running strange terminal emulator software.)
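The translation burden mentioned above might look something like this (a hypothetical sketch; the name mappings are illustrative and not drawn from any official registry):

```python
# Illustrative mapping of announced terminal-type names to local
# termcap entries; the table contents are invented for this sketch.
RFC884_TO_TERMCAP = {
    "DEC-VT100": "vt100",
    "DEC-VT52": "vt52",
    "HP-2621": "hp2621",
}

def termcap_name(announced, default="dumb"):
    """Map an announced terminal-type name to a local termcap entry,
    falling back to a lowest-common-denominator terminal when the
    name is unknown (or when modes/firmware differ)."""
    return RFC884_TO_TERMCAP.get(announced.upper(), default)

print(termcap_name("dec-vt100"))  # -> vt100
print(termcap_name("BRAND-X"))    # -> dumb
```

Every Unix server host would need to maintain such a table, which is exactly the duplicated grief the message complains about.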

It violates the spirit of telnet protocol.

While this option supposedly doesn't obligate the server to do
anything, you wouldn't be asking if you didn't want to know.
Telnet protocol is predicated on the notion of a Network
\Virtual/ Terminal, and is intended for uses besides remote
login.  When the NVT is extended, such enhancement should be in
terms of specific functionality; if you want a "my terminal is a
VT100" option, be prepared to tell me EXACTLY what this entails.
If the motivation is to allow you to use a feature that 300
manufacturers implement in 50 different ways, why not think in
terms of that feature?  What kind of responsibility do I have to
support YOUR terminal?  At what point does this become a "non-
option option" (as many sites currently treat Suppress Go-Ahead--
if you don't honor it you're a "loser")?  Stuff like "other
options (such as switching to BINARY mode) may be refused if a
valid terminal type name has not been specified" turns my
stomach.  Why don't you specify a negative acknowledgement?  How
about something like
which more closely resembles negotiation.  (Once upon a time
there was Old Telnet.  Old Telnet didn't believe in saying no.
Old Telnet went away.)
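For concreteness, the positive path of the exchange RFC 884 specifies can be sketched at the byte level (command and option codes are from RFC 854 and RFC 884; the terminal name is illustrative):

```python
# Telnet command and option codes (RFC 854 / RFC 884).
IAC, SB, SE, WILL, DO = 255, 250, 240, 251, 253
TERMINAL_TYPE, IS, SEND = 24, 0, 1

do_ttype   = bytes([IAC, DO, TERMINAL_TYPE])    # server asks
will_ttype = bytes([IAC, WILL, TERMINAL_TYPE])  # client agrees
send_req   = bytes([IAC, SB, TERMINAL_TYPE, SEND, IAC, SE])
is_reply   = (bytes([IAC, SB, TERMINAL_TYPE, IS])
              + b"DEC-VT100"                    # illustrative name
              + bytes([IAC, SE]))

print(send_req.hex(), is_reply.hex())
```

A WONT TERMINAL-TYPE from the client is the closest the option comes to a negative acknowledgement; there is no defined way for the server to refuse a particular name, which is part of the complaint above.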

I have a host that can be reached from dozens of networks, each
with many hosts, and even more kinds of terminals.  YOU have a
BRAND-X terminal.  There are many more interesting sites than
there are you.  Why don't YOU take responsibility for supporting
your white (Blue?) elephant and let the network standard be a
canonical representation of what you want?  Does this sound like
an NVT?  Data Entry Terminal Option?  SUPDUP protocol?
Interoperability of dissimilar equipment?  A real win?  Yes!

I can't make use of the information.

My telnet server is "magic"--it runs at interrupt level, without
the benefit of process context.  It permanently occupies valuable
physical memory.  It controls the "wrong" end of a pseudo-
terminal.  The effort required to support this option is extreme;
the benefit is questionable.

Better you should tell me...

My terminal doesn't overstrike.
My terminal is connected via a serial line limited to X cps.
My terminal conforms to ANSI X3.64-1979.
...Output Line Width, Output Page Size, oops, those fell out of
favor, didn't they?

It doesn't matter, no one's going to implement it anyway.

One of the traditionally nice things about telnet protocol has
been its simplicity and ability to do One Thing Well.  It's a
pity it has been plagued by the "Creeping Feature Monster" for so
long that it looks like a Christmas Tree weighted down by too
many ornaments (and half the bulbs burned out).

What ever became of draft standards?

					Eric P. Scott
			      Networked Computer Systems Group
			  Computer Science and Applications Section
				  Jet Propulsion Laboratory
Date:      21 December 1983 01:19 EST
From:      Christopher C. Stacy <CSTACY @ MIT-MC>
To:        EPS @ JPL-VAX
Cc:        TCP-IP @ SRI-NIC, Postel @ USC-ISIF, Solomon @ CSNET-SH
Subject:   (Negative) feedback on RFC 884

I believe the TCP/SUPDUP protocol has negotiation options which
exchange some packed information about terminal characteristics such
as the ones EPS wants his server to be told about.

Date:      21 Dec 1983 11:20:44 PST
Cc:        Solomon@CSNET-SH, Postel@USC-ISIF, BRADEN@USC-ISI
Subject:   Re: (Negative) feedback on RFC 884
In response to the message sent  20 Dec 1983 0034 PST from  EPS@JPL-VAX


Contrary to your suggestion, it is very likely that at least a subset of
the production hosts on MILNET will implement the proposed Terminal Type
Option, if it is adopted.  And it may get implemented even if it is not
adopted.  The impetus is likely to come from the non-ASCII world,
hitherto little-known on the ARPANET.

Bob Braden
Date:      21 December 1983 19:41 EST
From:      David C. Plummer <DCP @ MIT-MC>
To:        CSTACY @ MIT-MC
Cc:        TCP-IP @ SRI-NIC, Postel @ USC-ISIF, Solomon @ CSNET-SH, EPS @ JPL-VAX
Subject:   (Negative) feedback on RFC 884
    Received: from MIT-MC by SRI-NIC with TCP; Tue 20 Dec 83 22:18:29-PST
    Date: 21 December 1983 01:19 EST
    From: Christopher C. Stacy <CSTACY @ MIT-MC>
    In-reply-to: Msg of 20 Dec 1983 0034 PST from Eric P. Scott <EPS at JPL-VAX>

    I believe the TCP/SUPDUP protocol has negotiation options which
    exchange some packed information about terminal characteristics such
    as the ones EPS wants his server to be told about.

Just to avoid a little confusion, the SUPDUP negotiation is not
optional, it is mandatory (and in band).  I REALLY WISH PEOPLE
WOULD LET SUPDUP LIVE.  I'm not saying SUPDUP is the *right* thing, but it is a
hell of a lot better than TELNET and is in better accordance with
today's display-oriented, option-oriented terminals instead of
TELNET's bag of kludges necessary to make it work for anything
but a vanilla printing terminal.

(I hope that takes care of my TELNET flaming quota for the rest
of the year.)

There is an RFC describing SUPDUP (RFC734) and one that describes
a somewhat generic (usable, but not awesomely powerful) graphics
extension (RFC746).

Date:      29 December 1983 10:12 EST
From:      corrigan @ DDN1
To:        tcp-ip @ sri-nic
Cc:        heiden @ DDN1,jethomas @ DDN1,tharris @ DDN1,corrigan @ DDN1,gpark @ DDN1
Subject:   POLYMORPH

Date: December 13, 1983
Text: Comments on the SMTP POLYMORPH COMMAND

   There appears to be a significant amount of misunderstanding about the 
goals and the plans of the DDN with respect to the ARPANET/MILNET split.
I will try to clarify the goals, and to some extent the plans.

   Two basic goals motivated the split. First and foremost was a desire to
create an environment in which both network experimentation and operation
could proceed with a minimum of conflict.  As long as both network exper-
imenters and operational users shared the same backbone, conflict was in-
evitable, since experimenters needed to change both the networking software
and topology in order to perform experiments, with the potential for small
and large network disasters (some planned, most not), and operational users
needed a stable, consistent level of service.  This goal is reasonably well
accomplished by the separation of the backbone into two physically and 
logically separate sets of IMPs and trunks, connected by IP gateways, with
the operational users on one backbone and network experimenters on the other.
The second goal, which is clear as a goal, but much less clear in how it should
be accomplished, is to reduce the exposure of the operational users to the
potential for unauthorised access to their equipment, while retaining their
ability to operate with all users on the internet of which they are a part.

   At this point it is necessary to distinguish between capabilities and
intentions.  It appears to be prudent to possess the capability to restrict
access between the operational and experimental communities.  This is because
it appears that the threat posed by the users in the two communities is
quantitatively different.  Based on past experience in the ARPANET, it seems
clear that the vast majority of potentially malicious hackers are on hosts
associated with the experimental community.  Our intentions with respect to
the use of the current capability to control access, and future planned 
capabilities with finer levels of control, are not firm and depend on 
a number of factors, not the least of which is the ability of host admini-
strators to control the behavior of users on their hosts.  

   Some of the capabilities developed to permit this level of access control
depend on gateways "peeking" at such things as TCP port numbers.  These
numbers have well defined uses.  It is clear, as indicated in the referenced
message, that users can collude to circumvent such controls.  Such collusion,
or any other attempts to circumvent controls established by the network
administration, are grounds for immediate disconnection of a host from the
network, and are grounds for as serious action against the individual 
colluders as we can manage to instigate.
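The kind of port "peeking" described above amounts to a simple predicate at the mail bridge. A hypothetical sketch (the policy shown is illustrative, not DDN's actual rules):

```python
SMTP, TELNET = 25, 23  # well-known TCP ports

def bridge_permits(src_net, dst_net, dst_port,
                   allowed_ports=frozenset({SMTP})):
    """Pass intra-network traffic freely; across the MILNET/ARPANET
    boundary, pass only traffic addressed to an approved
    well-known destination port."""
    if src_net == dst_net:
        return True
    return dst_port in allowed_ports

print(bridge_permits("ARPANET", "MILNET", SMTP))    # -> True  (mail)
print(bridge_permits("ARPANET", "MILNET", TELNET))  # -> False
```

The collusion the message warns against is simply two hosts agreeing to run a non-mail service on an allowed port, which a check this shallow cannot detect.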

   We have found the problems of balancing the need of users to communicate
against the need of users to be protected against attacks to which they  
previously (before networking) were not vulnerable to be some of the most 
difficult we face.  We are open to any and all constructive suggestions on
how to deal with this issue.  The suggestions in the referenced message,       
unfortunately, do not fall in the category of constructive.

Date:      29 Dec 1983 1314 PST
From:      Eric P. Scott <EPS@JPL-VAX>
To:        TCP-IP@SRI-NIC
Cc:        corrigan@DDN1
Subject:   Re: POLYMORPH
A minor, but important technicality: a host which uses such a
technique does so only at its own risk; it in no way attains
undesired access to any other internet site as a consequence.
This is consistent with the design of enforcing access control at
the host level.  If there is a way for a Host Administrator to
have an IMP port known "public" or "wild" (i.e. the bridges will
pass any packets with that destination), then the equivalent
effect is accomplished "cleanly."  It's one thing to protect
hosts that can't protect themselves; it's another to censor
traffic to a host that doesn't need it and doesn't want it.  Any
host which elects this option would do so with the understanding
that it may be making itself more vulnerable to attack.

The malicious hacker you should be worrying about isn't a high
school kid with a home computer; he's just out to have a good
time and then move on.  An extremely sophisticated foreign
national (who won't be the least bit deterred by "Mickey Mouse"
mail bridges) is out to destroy The American Way of Life.
Stop HIM.

Dark Thought For The Day: I would not be at all surprised to find
as many or more malicious hackers on the operational side than on
the experimental side.  Go ahead and tell your sponsors whatever
they want to hear, but C.Y.A. if you value it.


P.S. I would sympathize with any host that resorted to
extraordinary measures in order to continue mission-critical
operations that would be jeopardized because the wheels of
network administration turned too slowly or in the wrong
direction (not to imply that this is intentional).

Date:      Thu 29 Dec 83 14:34:00-PST
From:      David Roode <ROODE@SRI-NIC>
Cc:        corrigan@DDN1
Subject:   Re: POLYMORPH
I think that the POLYMORPH joke-RFC suffers from being perhaps too
flippant.  Of course, it was presented like a joke.  As I think most
people are aware, the performance over TCP connections between MILNET
and ARPANET could degrade significantly and yet still be acceptable
for mail.  Trying to establish an interactive virtual terminal session
over an SMTP connection could well end up practically infeasible, let
alone illegal.  Access from TAC's is something which the POLYMORPH
"method" would not permit, because arbitrary TCP ports are not
permitted in an "@o" command.  Yet, large numbers of users depend on
TAC access.

I know of many MILNET/ARPANET communication problems being solved, so
I think the implication that EPS provides that everyone will be out in
the cold is especially inaccurate.  Also, many sites obviously had a
community of interest that lay entirely within one network of the
MILNET/ARPANET pair, so the split was not composed entirely of problem
cases.

The idea of having optional intra-network-only or inter-network-only
TCP ports for services such as FTP and TELNET is an intriguing one
that would have been a better thing to put forth than the POLYMORPH
proposal.

The MILNET/ARPANET split is a pioneering step.  I don't think anyone
disputes that internetworking is inevitable.  Naturally there are lots
of growing pains and problems to be worked out in the case of the
first two large operational networks to depend on internetworking
techniques to communicate between the sites on the two nets.
Similarly access control techniques are inevitable.  I don't think
that WHATEVER evolves in the next two years will be permanent, but I
do think that the experience thus gained will be very valuable for
future efforts to implement internetworking complete with billing,
access control, and the like.

Date:      Thu 29 Dec 83 16:13:48-PST
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
To:        corrigan@DDN1.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA, heiden@DDN1.ARPA, jethomas@DDN1.ARPA, tharris@DDN1.ARPA, gpark@DDN1.ARPA
Subject:   Re: POLYMORPH
     It is clear that the Polymorph suggestion was intended as a tongue-
in-cheek comment on what to all appearances is a poorly thought-out and
brute-force plan to protect Milnet from the various flavors of computer
vandals which have appeared from time to time.  The appearance of lack
of thought is at least partially due to the general handwaving made by
DDN management as to future plans.

     Without a well-thought-out and understood plan, the isolation of
Milnet can have potentially disastrous effects on those users who need
to cross the ARPANET/Milnet boundary as part of their everyday business.
It is clear that Milnet access granularity needs to be accomplished on a
per-user basis (as opposed to per-host), but it is not as clear how this
mechanism is going to work.

     I've expressed my concerns to DCA on several occasions, but they
come down to the following points:

. In addition to electronic mail, FTP connections should not be hampered.
  If this is unavoidable, then at the very least Milnet sites must be
  able to initiate connections to ARPANET.  It MUST be possible to
  accomplish a file transfer bidirectionally between ARPANET and Milnet.

. It is completely unacceptable to state that electronic mail suffices
  to accomplish file transfer.  There are valid administrative and
  technical objections towards the misuse of electronic mail for this
  purpose.

. Milnet sites must be able to initiate TELNET connections to ARPANET.

. Preferably, Milnet sites should be able to initiate connections on
  any service to ARPANET.

. A means must exist on a per-user basis for an ARPANET user to TELNET
  to a Milnet site.

. A means must exist on a per-user basis for an ARPANET user to FTP to
  a Milnet site.

. It does not suffice to have a bridge system, unless that bridge is
  prepared to handle all forms of file data.  For example, it is NOT
  acceptable to set up some Unix system as a bridge.  Consider the case
  of TOPS-20 "holey" files that need to be FTP'd from one system to
  another.

. The procedure should be as automatic as possible, and preferably fit
  within the current TELNET/FTP protocols.

. The authorization mechanism needs to be clarified.  The present NIC
  means of establishing Milnet TAC access with host administrators is
  a disaster.  In certain sites, it is impossible for the technical
  liaison (who formerly took care of these matters) to get the individual
  designated as administrative liaison to understand the necessity of
  taking care of such things.  The current procedures, while pretty to
  management, seem to be less than fully effective.

Date:      Thu 29 Dec 83 15:45:06-MST
From:      Randy Frank <FRANK@UTAH-20.ARPA>
Cc:        corrigan@DDN1.ARPA
Subject:   Re: POLYMORPH
One point which has never been clarified, and for which I have been trying to
get an answer for months: It has been stated that the "bridges" will have the
ability to allow services other than mail on a "host pair" authorization
basis.  However, I have not seen any statement on the mechanism for getting
these approved.  Will it require an elaborate approval process with pages of
justifications and months to process, or will it be a simple request?  I would
feel a lot better if some statement were made by the powers that be that all
that will be required is a simple request from the administrators of the
respective hosts.  I think that many of us envision the former.  It is also
critical that the time required to process such requests be minimal: often
a need will arise for unanticipated communication, and if the approval
cycle takes even days, much less weeks or months, this will pose a serious
impediment to getting work done in a timely fashion.

The fact that virtually nothing has been said by the network administration
on the administrative aspects of dealing with these bridges has most of us
assuming the worst.

Date:      29 December 1983 14:02 EST
From:      dca-pgs @ ddn1
To:        allusers,tcp-ip@brl,tcp-ip@sri-nic,leiner@usc-isi,postel@usc-isi
Cc:        dca-pgs @ ddn1
Subject:   New DDN Interface Effort

Date:  Thu, 29 Dec 83 12:08:56 EST
Tymnet, Inc. has made a corporate decision and commitment to
develop, market, and support a DDN interface, including TCP/IP,
as part of their product line.

This information was obtained by phonecon from Mr. Frank Tapsell
of Tymnet at 1200 today. I had hosted a meeting earlier this fall,
with Mr. Rod Richardson's participation and assistance, with 
Messrs. Tapsell and G. Edgerton of Tymnet for the purpose of 
exchanging technical and management information about the Tymnet
public data network, Tymnet interface devices, and DDN.
Messrs. Edgerton and Tapsell became convinced that a DDN interface
would be a good product venture for Tymnet, and successfully
advocated that idea to Tymnet corporate headquarters, resulting in
today's news.

Projected date of product release is Jan 1985. Tymnet will be 
issuing a written announcement to this effect.

Happy New Year!
Pat Sullivan

Date:      29 Dec 1983 1843 PST
From:      Eric P. Scott <EPS@JPL-VAX>
To:        TCP-IP@SRI-NIC
Cc:        MRC@SU-SCORE
Subject:   Why just FTP?
Why not DISCARD, ECHO, TIME, FINGER, domain name server (good
luck routing your precious mail in the future without that!)?
If the object is to restrict evil TELNET, then restrict evil
TELNET \instead of/ permit <long list of approved services>.