The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1984)
DOCUMENT: TCP-IP Distribution List for June 1984 (41 messages, 25931 bytes)
NOTICE: This archive recognises the rights of all third-party works.


Date:      1 Jun 84 11:03:52 EDT
From:      dca-pgs @ DDN1.ARPA
To:        navyusers2 @, ntec @, lenhart @ dtnsrdc-gw, navyusers @, hawgs @, navtasc @ dtnsrdc-gw
Cc:        allusers,allusers2,
Subject:   DDN Navy Team Personnel Change

From Pat Sullivan:

Friday, 8 June 1984, will be my last day at the DDN PMO.
I will be moving to another DCA element, the Defense Communications
Engineering Center (DCEC) in Reston, VA. I will be with Code R610,
Voice Networks Design Division, working with the Defense Switched
Network (DSN). I will retain my electronic mailbox <dca-pgs@ddn1>.
My phone number will be (703)437-2441, VON 364-xxxx. All are welcome
to contact me if there is a question that they feel I can answer.

I want to express my appreciation to all with whom I have worked.
I have particularly enjoyed working with Rod Richardson on the Navy
Subscriber Team, and with the Navy Subscribers themselves. It's been
a tremendously gratifying mixture of hard work and results; 
above all, it's been fun.

Best to all
Pat Sullivan

Date:      1 Jun 84 15:00:37 EDT
From:      dca-pgs @ DDN1.ARPA
To:        tcp-ip @, hawgs @
Subject:   New Unix Mini

I understand that Computer Consoles Inc. has come out with a
new 32-bit mini running Unix Sys V "with Berkeley enhancements",
so that it supports the Arpanet protocols, including an LH/DH-11 driver.
CCI claims that, in a drag-race load scenario, the machine is
eight times faster than a VAX 11/780. 

POC for this item is Mr. Jim Fisk, (301)585-9662.

Any comments appreciated.

Pat Sullivan

Date:      3 Jun 1984 12:43:24-EDT
From:      hisint@NADC
To:        tcp-ip@sri-nic
Subject:   name removal
Please remove my name from this distribution.  (REECE@NADC)

Date:      3 Jun 1984 20:37:34-PDT
Subject:   lost mail
[this message was lost, addressed to you]
>From root Thu May 31 16:54:43 1984
To: root
Subject: Unable to deliver mail

   ----- Transcript of session follows -----

   ----- Unsent message follows -----
Via: decvax!Nosc!ucbvax
Date: Thu May 31 16:55:01 1984
Received: by decvax.UUCP (4.12/1.0)
	id AA01808; Thu, 31 May 84 16:55:01 edt
Received: from SRI-NIC.ARPA (sri-nic.ARPA.ARPA) by UCB-VAX.ARPA (4.24/4.27)
	id AA29583; Thu, 31 May 84 13:45:03 pdt
Received: from nosc.ARPA by SRI-NIC.ARPA with TCP; Thu 31 May 84 12:42:41-PDT
Received: from cod.ARPA by nosc.ARPA (4.12/4.7)
	id AA06679; Thu, 31 May 84 12:39:35 pdt
Received: by cod.ARPA (4.12/4.7)
	id AA20014; Thu, 31 May 84 12:42:22 pdt
From: Thomas Hutton <decvax!ucbvax!sdcsvax!hutton@Nosc.ARPA>
Message-Id: <8405311829.AA28226@sdcsvax.UCSD>
Received: by sdcsvax.UCSD; Thu, 31 May 84 11:29:13 pdt
From-The-Terminal-Of: Thomas Hutton - sdcsvax (ttys7)
Phone-Number: (619) 487-2698
Posted-Date: Thu, 31 May 84 11:29:13 pdt
Weather: Sunny and Warm
Date: 31 May 1984 1129-PDT (Thursday)
To: tcp-ip@sri-nic.ARPA
Subject: Addition to mailing list

Could you please add my name to the distribution list for the 
mailing group tcp-ip.

				Tom Hutton
				hutton@nosc.ARPA	{internet}
				ucbvax!sdcsvax!hutton	{uucp}

Date:      Thu, 7 Jun 84 2:11:48 EDT
From:      Mike Muuss <mike@BRL-TGR.ARPA>
To:        tcp-ip@sri-nic.ARPA
Cc:        Gurus@BRL-TGR.ARPA
Subject:   Interesting Happening
Late last week and the beginning of this week, host BRL-TGR started
exhibiting severe difficulties in delivering mail.  Complaints
about delivery of truncated messages came pouring in from
dozens of sites.  Here is what happened:

BRL-TGR is connected to our local network via a DEC PCL-11B, which
ordinarily runs with an MTU of 1006.  The PCL is connected to
the MILNET by a gateway of our own design, presently operating
the 1822 interface with an MTU of 1006.

For purposes of stressing our gateway, a special PCL driver was
devised which operated with an Input MTU of 1006 but an output
MTU of 503 (exactly half).  It was never intended that this code
be installed on a production mail machine (TGR, in this case),
but Murphy helped out, and thus the story begins.

As the 4.2 BSD TCP on TGR sets a Max Seg Size of 1024, typical outbound
IP datagrams (to hosts giving MSS > 512) are ~1050 bytes.  To fit into
the PCL interface, this is sent as two IP datagrams (1004, ~50 bytes).
With our (ahem) "special" PCL driver, three IP datagrams were being
sent (502, 502, ~50) (MSS == 1024).  We immediately began to experience
firsthand the results of gateway congestion, which resulted in increased
packet loss through the gateway (typically 10%, sometimes higher), which
is worse than it seems because the rest of the fragments which get sent
are wasted bandwidth (the receiving TCP never sees them, as they die
a lingering death on the receiver's IP reassembly queues).

For hosts offering us a Max Seg Size of 512, the situation was not
so bad, as our outbound IP datagrams want to be ~540 bytes.  With
the "special" MTU of 503, this yields two fragments (502, ~40).
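
The fragment arithmetic above can be checked with a short sketch (Python used
for illustration; a 20-octet IP header is assumed, and fragment payloads are
rounded down to 8-octet multiples as IP requires):

```python
def ip_fragments(datagram_len, mtu, ip_header=20):
    """Sizes of the IP fragments produced when a datagram of
    datagram_len total octets crosses a link with the given MTU.
    Fragment offsets count 8-octet units, so every fragment but
    the last carries a multiple of 8 payload octets."""
    payload = datagram_len - ip_header
    per_frag = (mtu - ip_header) // 8 * 8   # payload octets per fragment
    frags = []
    while payload > 0:
        take = min(per_frag, payload)
        frags.append(take + ip_header)      # total fragment length
        payload -= take
    return frags

# An MSS of 1024 plus headers gives a ~1050-octet datagram:
print(ip_fragments(1050, 1006))   # two fragments over the normal PCL MTU
print(ip_fragments(1050, 503))    # three fragments over the "special" MTU
```

The exact sizes differ slightly from the (1004, ~50) and (502, 502, ~50)
figures quoted above because of header and rounding details, but the fragment
counts come out the same.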

All this caused predictable effects, namely, more CPU time spent trying to
deliver mail, and more elapsed time in the queues per message, plus longer
perceived round-trip delays for interactive connections.

However, we also got some unexpected effects.  In particular, there were
hosts (which previously we sent mail to all the time) which our TCP
connections would simply time out on.  More specifically, when SMTP began
pumping data into the connection after issuing the DATA command and
receiving the 354 "go ahead" from the other end, our SMTP process would show
a few Kbytes of data being dumped into the connection, and then nothing
further would happen until TCP timed-out the connection.

At first we thought that we were suffering from congestion collapse,
but this difficulty was destination host-specific, leading to further
speculations.  Last Thursday I stayed up all night building a nearly
marvelous TCP-level trace interpreter for 4.2 BSD.  Friday morning,
I started packet tracing an SMTP connection to a problem host,
and to my great dismay observed nothing but TCP retransmits
when the connection "hung".  When we finally timed the connection
out for lack of an ACK, our connection shutdown was properly
acknowledged by the other host!  Nothing wrong with TCP, but problems
further downstream (IP and below).  Sigh.  Well, I got a nice tool
out of it, anyway.

My theory is that there are hosts out there that simply do
not know how to (correctly) do IP reassembly.

We startled the poor folks at BNL, because their IP reassembly code crashed
their machine.  Seemingly, our 502 + ~40 byte fragment pair was the first IP
fragment their host had ever seen!  After all, with a recommended MTU of 576
on the IMPs, if a Max Seg Size of 512 is issued, who would ever expect to
see fragments?  Sadly, we crashed them every 15 minutes (our mail delivery
retry interval) until they were able to change their code enough to forestall
further crashes.  Sorry!

I'm happy to report that we experienced no difficulty with any TOPS-20,
TENEX, or 4.2 BSD UNIX system, in spite of our perverse behavior.
However, 4.1 BSD UNIX systems, C/70 systems, and "xerox" (a cedar dorado?)
all seemed to have difficulties.  Total, and repeatable difficulties.

Monday night, DPK & Kermit found the errant driver and replaced it, and
TGR returned to normal operation.  The immediate problem has been avoided,
but the question remaining in my mind is:

*)  How many TCP implementations out there can actually do IP Reassembly?

A more general implementation issue concerns relationships between the TCP
Max Seg Size and the IP device MTU.  Especially for "bulk" connections like
SMTP and FTP, there are strong performance advantages to "capping" the TCP
Max Seg Size to no more than the first-hop's interface MTU.  I know that
this is an unseemly transgression across protocol boundaries, but the
negative effects that result from dropping one fragment of an IP datagram
are startlingly severe; it seems wiser to avoid IP fragmentation whenever
possible and let TCP alone cope with any packet loss.  On the other hand, it
is desirable to keep the Max Seg Size as large as possible to make the best
use of LAN hardware for local connections.  I'm not pushing for a change,
mind you, just trying to provoke more thinking.


 -Mike Muuss
  U. S. Army BRL
Date:      Thu, 7 Jun 84 09:41 EDT
From:      Charles Hornig <Hornig@SCRC-STONY-BROOK.ARPA>
To:        Mike Muuss <mike@BRL-TGR.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Interesting Happening
    Date:     Thu, 7 Jun 84 2:11:48 EDT
    From:     Mike Muuss <mike@BRL-TGR.ARPA>
    A more general implementation issue concerns relationships between the TCP
    Max Seg Size and the IP device MTU.  Especially for "bulk" connections like
    SMTP and FTP, there are strong performance advantages to "capping" the TCP
    Max Seg Size to no more than the first-hop's interface MTU.  I know that
    this is an unseemly transgression across protocol boundaries, but the
    negative effects that result from dropping one fragment of an IP datagram
    are startlingly severe; it seems wiser to avoid IP fragmentation whenever
    possible and let TCP alone cope with any packet loss.  On the other hand, it
    is desirable to keep the Max Seg Size as large as possible to make the best
    use of LAN hardware for local connections.  I'm not pushing for a change,
    mind you, just trying to provoke more thinking.

I recommend to everyone that they carefully read RFC 879 on the
difficult subject of TCP segment sizes.  I interpreted it to mean that a
host should do source fragmentation only under unusual circumstances
(section 9), should advertise the MSS described in section 7, and
should consider the MTU of the local network when an MSS is received.

Following RFC 879 would make my system advertise a MSS of 1458 (it is on
an Ethernet) (don't ask about the other 2 octets).  If I am talking to a
host on the local Ethernet, things go fine.  If I am talking through a
gateway, I lose big.  One reason for this is that there is no way to
take into account the MTU's of intervening networks.  In this example,
there is a network with a MTU of 488 octets between my network
(MTU=1500) and the destination network (MTU=576).  The result?  I
advertise MSS=1458, the other side interprets this in light of its MTU of
576 and settles on MSS=536, and every segment gets fragmented.  This is
a hard problem.  I imagine EGP could propagate "effective MTU"
information and solve it.
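
The mismatch can be sketched numerically (a minimal sketch assuming 40 octets
of IP+TCP headers; the MTUs and MSS values are the ones from this message):

```python
HEADERS = 40   # 20-octet IP header + 20-octet TCP header, no options

def chosen_segment(peer_mss, local_mtu):
    """Sender-side rule per RFC 879: take the smaller of the peer's
    advertised MSS and what fits in the sender's own interface MTU."""
    return min(peer_mss, local_mtu - HEADERS)

# I advertise 1458; the other side, on a 576-octet network, settles on:
seg = chosen_segment(1458, 576)
print(seg)                        # 536
# Neither endpoint can see the 488-octet network in the middle, so:
print(seg + HEADERS > 488)        # True -- every segment gets fragmented
```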

The really unfortunate part is that Unix 4.2bsd (at least) doesn't even
consider the local network when it receives an MSS.  It would blithely
fragment its 1458-octet TCP segments into 3 576-octet fragments, which
would end up as 6 fragments by the time I got them.  The error rate of
the network, combined with Unix's adaptive retransmission algorithm,
ensures that no long transfer will ever succeed.

Date:      Thu 7 Jun 84 13:10:36-EDT
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        mike@BRL-TGR.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: Interesting Happening
	One thing that would help, but is probably contrary to the spirit
of some TCP's, is to keep the original segments that were sent
Date:      7 Jun 1984 1430-EST (Thursday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        Mike Muuss <mike@BRL-TGR.ARPA>
Cc:        Gurus@BRL-TGR.ARPA, tcp-ip@sri-nic.ARPA
Subject:   Re: Interesting Happening

In case you don't know of it, I have made the changes to the 4.2 bsd
TCP to tune the TCP max seg size to the MTU of the interface, since we
had similar problems through a gateway some time ago. The changes were
submitted to Berkeley, but I can dig out the changes if you can't get
them from there (or Unix-Wizards).

Are you willing to give out copies of your TCP trace tool?

Date:      Thu 7 Jun 84 13:42:51-EDT
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        mike@BRL-TGR.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: Interesting Happening
	(Sorry about that earlier fragment; something spazzed and turned on
parity detect on my line, with the result that a random 50% of my characters
were thrown away, causing much confusion.)

	The root problem here is that IP still doesn't have a decent network
level (as opposed to end-to-end) flow control system. Many people agree that
dropping packets should be absolutely avoided at all costs (and if it
does happen, a notification MUST be sent, ESPECIALLY if the effect is not
transient) if any sort of reasonable performance is to be maintained.
Dropping packets is much worse if you are dropping fragments because the
probability that you will lose a packet is multiplied by the number of
fragments (the mathematics is actually (1 - (1 - p)^n), I guess, where n
is the number of fragments). Needless to say, the host must listen to such
notifications and modify its behaviour in some as yet undetermined fashion.
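
The loss multiplication described above is easy to quantify: with per-packet
loss probability p, a datagram split into n fragments survives only if all n
fragments do, so the formula gives:

```python
def datagram_loss(p, n):
    """Probability that a datagram is lost when each of its n
    fragments is independently dropped with probability p."""
    return 1 - (1 - p) ** n

# At the ~10% gateway loss reported earlier in this thread:
print(round(datagram_loss(0.10, 1), 3))   # 0.1   unfragmented
print(round(datagram_loss(0.10, 3), 3))   # 0.271 split into three fragments
```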

	This is exacerbated by some silly algorithms in 4.2 TCP which don't
seem to have taken into account lossy networks (Berkeley Braindamage strikes
again.) MIT sees the same problem with connections running over some loaded
gateways (MINITS CHAOS boxes) that love to drop traffic. Every time a packet
is dropped, it backs off the retransmit time until eventually it's larger
than the connection timeout.  Bye-bye connection.

	Finally, there has never been large-scale usage of the IP fragmentation
algorithm to see how well it works in an operational environment.  Setting your
MSS to the min of all the MTU's on all the transit nets makes having
fragmentation in the first place somewhat pointless.  It's not actually a bad idea;
you use less resource along the path up to the point at which the packet has
to be fragmented, and the TCP's on each end certainly have a lot less work
to do.  The problem is that on a lossy link it is a complete loss.
	You can to some degree finesse this by the nasty kludge of remembering
your transmitted segments, and if you retransmit, use the exact same packet,
in the hope that you'll get one complete packet out of the bits of two. This
is actually not such a bad idea on connections that send a lot of data; on
slow connections you are better off sending all un-acked data every time
you send a new packet, since the extra data usually doesn't cost
much (compared to the overhead of packet processing and the headers and

	We still have a lot of work to do to make this work well.
Date:      7 Jun 1984 13:43:03 EDT
To:        mike@BRL-TGR, tcp-ip@SRI-NIC
Cc:        Gurus@BRL-TGR, MILLS@USC-ISID
Subject:   Re: Interesting Happening
In response to the message sent      Thu, 7 Jun 84 2:11:48 EDT from mike@BRL-TGR.ARPA


Isn't it wonderful down in the trenches with all that mud and gore on the
floor? The consequences of dropping fragments are indeed severe, since the
recipient may hold the remaining fragments for on the order of thirty
seconds. If your RTX timeout is less than this, you will quickly eat all his
buffers. The implications for the reassembly algorithm are obvious and not
widely appreciated by the implementers. I have seen your phenomenon - hung
data connections that close readily - often on SATNET paths when things go bump,
presumably due to gateway congestion. I have also seen the same thing on domestic
paths when a gateway in the middle of a path for some reason cannot handle the
jumbogram size agreed by the TCP endpoints. As long as your SMTP clients are
on the MILNET/ARPANET, this should not be a problem.

You are welcome to reach out and tickle a fuzzball, which is a good reassembly
soldier, having served time on both sides of the Atlantic in the SATNET trenches
with stutter-stream shrapnel flying all about. That, dear heart, really does
ventilate the reassembly algorithm!

Date:      7 Jun 1984 18:41:59 EDT
To:        Hornig@SCRC-STONY-BROOK, mike@BRL-TGR, tcp-ip@SRI-NIC
Subject:   Re: Interesting Happening
In response to the message sent  Thu, 7 Jun 84 09:41 EDT from Hornig@SCRC-STONY-BROOK.ARPA


Your example implies nothing wrong with the specifications and machinery
in place between the hosts. The fact that the gateways are so fragile
in the face of high packet fluxes is exacerbated by the fragmentation,
which is specified by the endpoints without knowledge of the path
characteristics between them. EGP could not of itself provide this knowledge,
since this would imply knowledge of the paths interior to the various
autonomous systems and may not remain fixed for the lifetime of the

In other words, the fact that it doesn't work is a consequence of the resources
available, not the architecture itself. We have found the safest thing to do in
our leaky corner of the Internet plumbing is to use 576 for just about
everything that escapes past a gateway (MSS 236). In addition, our hosts
negotiate down, as well as up, from the 576 default. That is important for our
SATNET clones so we can avoid fragmentation and the effects of stutter streams.
However, if this is done you have to be careful not to include data in the SYN
packet (we don't, anyway), since some bum could put 536 octets behind that.

I shudder to send this, since I know the mail-failure messages will rain on

Date:      8 June 1984 08:50-EDT
From:      David C. Plummer <DCP @ MIT-MC>
To:        MILLS @ USC-ISID, tcp-ip @ SRI-NIC, mike @ BRL-TGR
Cc:        Gurus @ BRL-TGR
Subject:   Re: Interesting Happening
Allow me to restate a claim I made a year ago to Dave Clark,
which I think I sent to Postel and possibly this mailing list
last November or December.

The two naive ways to implement the adaptive retransmission rate
as "suggested" in the TCP specification are unstable.  Either the
machine will start retransmitting packets as fast as it can or it
will lengthen the retransmission interval until it is bounded above by
some parameter.  In both cases, the farther you drift
from the REAL ROUND TRIP TIME the faster you will drift away.
The rate of drift is also proportional to some power of the
probability of lost packets.

From discussions I have seen, 4.2 exemplifies the latter case.
KLH's implementation for ITS does not attempt to do adaptive
retransmission, but instead has a static interval.  I think there
are other operating systems that also do this and therefore can't
gain from adaptive retransmission, but on the other hand, also
don't lose because of it.

Assumptions: To be able to compute round trip time one must keep
the transmit time associated with the packet.

Proof 1: The case that the transmit time is set for each
transmission:  If the network gets congested and slows down
replies (with ACKS), then an ACK can appear to arrive in a time
much sooner than the round trip time.  The reply is not for the
most recent packet, but for some packets sent in the past.  This
makes the round trip time artificially shorter.  If it gets too
short (e.g., half of the actual round trip time [Note: this could
happen with initial parameters on a fresh connection if the
actual round trip time is large]) then two packets will get sent
in the actual round trip time and the first ACK that can EVER
come back for the first packet comes back very soon after the
second packet, which makes the apparent round trip time be
epsilon.  I have seen this phenomenon when I was testing and
trying to understand the problem, but I have not seen it in

Proof 2: The case that the transmit time is set only for the
first transmission of a packet.  This case really loses, because
you diverge very quickly.  All it takes is one lost packet.  Even
if the retransmitted packet is ACKed immediately, the round trip
time reported for the packet is twice (most implementations seem
to wait for twice the apparent round trip time before
retransmitting) the actual round trip time.  So this gets
averaged in.  This might not be so bad for losing .1% of the
packets, but any significant amount of lossage will pump the
average up.  Suppose the apparent round trip time is twice the
actual (e.g., 2 seconds instead of 1, or something, which is
realistic) and that 5 packets are sent, one of which (say the
third) is lost.  2 packets get the actual round trip time
reported for them.  The last 3 don't get acked because packet 3 got
dropped.  A retransmission happens after 4 actual round trip
times (apparent = 2 * actual, retransmit after 2 * apparent)
and the ACKs come back immediately.  For the 5 packets 2 of them
came back with the actual round trip time and the other 3 came
back with 4 times the actual.  I think you will agree that
weighting 1,1,4,4,4 produces a number greater than 2, which is
the original factor between apparent and actual.
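
The arithmetic can be checked directly (a plain average is used for
simplicity; a smoothed weighted mean drifts in the same direction):

```python
# Five packets, the third lost.  Two are ACKed at the actual round trip
# time; the three behind the lost one are ACKed only after a retransmission
# at 4x the actual RTT (apparent = 2 * actual, retransmit at 2 * apparent).
samples = [1, 1, 4, 4, 4]          # in units of the actual round trip time
new_estimate = sum(samples) / len(samples)
print(new_estimate)                # 2.8 -- greater than the 2x we started at
```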

Note that it doesn't matter if you retransmit your entire
retransmission queue or just the first item.  Several
implementations retransmit only the head of the queue.  This can
actually make the problem worse, because if some other packet
also got dropped, it will eventually need retransmitting and it
will make matters worse because it will appear to have taken a
very long time to get around.

The only fix that I know of that has any hope (in other words, it
hasn't been proven or reliably demonstrated to me) is that if you
EVER retransmit a packet you NEVER count that packet in round
trip calculations.  This requires a flag or a counter associated
with the packet for knowing if or how many times it was
retransmitted.  Note carefully that retransmitting for a
connection 'virtually' retransmits all packets on the transmit
queue, even if in actuality you only retransmit the head of the

Conclusions: Until somebody publishes an RFC which describes
these problems (I certainly don't have the time, but someone
who does is welcome to use the ideas in this note) and proposes a
solution with actual in-the-field results of workability, I
suggest everybody dike out (or at least turn off) their adaptive
retransmission algorithm in favor of a static interval.  I hope
nobody says that is impractical, because the current losing
situation (especially with 4.2) is unbearable.

I'm not about to proof read this message at 1200 baud.  My
apologies for spelling and grammatical errors.  I hope I never
swapped 'actual' and 'apparent'!  Please seriously think about
this before proposing solutions.  If you have a scheme that
works, WITH PROOF (verbal and experimental), please do let us
know about it.
Date:      Friday,  8 Jun 1984 12:57-PDT
From:      imagen!
To:        shasta!mike@BRL-TGR
Cc:        shasta!tcp-ip@sri-nic.ARPA, shasta!Gurus@BRL-TGR.ARPA,
Subject:   Re: Interesting Happening with TCP-IP frag interactions

At IMAGEN we've had some similar problems with bad interactions between TCP
and IP/fragmentation.  A similar scenario occurred when our printer was
advertising a MaxSegSize of about 868 bytes (derived from the internal
buffer size) to a 4.2 vax.  The vax transmitted packets that were exactly
the right size, and every packet was fragmented by an intervening gateway.
Some more facts:
	- The gateway sent the packets faster than the printer could receive
	  them, such that it was very likely that the second fragment was lost
	- TCP reformats its packets on retransmission, so that each packet
	  has a unique IP-Identification field
	- The printer limits the number of packet buffers that can be held for
	  reassembly to 10, when there are 20 buffers in the system.

The effect was:
	After a while, data stops flowing across the connection.  Data sent
	to the printer is not acked, and the printer's trace facility
	indicates that no packets are being received at the TCP level.  
	Yet when the remote host gives up and resets the connection, the
	RST packet is received by the printer.

The cause of this behavior was that, while there were plenty of buffers free
in the printer, all of the 10 that could be used for reassembly were in use
being timed out (by counting down TTLs -- a packet's buffers are freed within 255 seconds).
Thus, large TCP packets were not getting through, because they were being
fragmented, but small TCP packets had no trouble, because they were allowed
to make use of the rest of the printer's buffers.  Further analysis
indicated that all the fragments being reassembled contained identical TCP

The General Problem

The general problem is that IP fragmentation only really works if we make
assumptions about the way higher level protocols operate.  This is stated
briefly in the IP spec (RFC791, probably a bit old, page 29):
	... TCP protocol modules may retransmit an identical
	TCP segment and the probability for correct reception would be
	enhanced if the retransmission carried the same identifier as
	the original transmission...
IP fragmentation is not robust by itself.  It gets by if the higher level
protocol retransmits packets with the same identification.  This is easy to
achieve in the IP interface:
	ip_Send(packet...) - choose new ID
	ip_Retransmit(packet...) - use ID in the packet
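
That interface split can be sketched as follows (ip_Send/ip_Retransmit are
the names above; the dictionary representation is purely illustrative):

```python
import itertools

_ident = itertools.count(1)        # IP Identification counter

def ip_send(packet):
    """First transmission: stamp a fresh Identification."""
    packet['id'] = next(_ident)
    return packet

def ip_retransmit(packet):
    """Retransmission: reuse the ID already in the packet, so the
    receiver can merge fragments of the old and new copies."""
    return packet

p = ip_send({'data': b'segment'})
first_id = p['id']
ip_retransmit(p)
print(p['id'] == first_id)         # True: same ID on the wire both times
```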

When I first read the IP protocol, I thought that this trick would work. I'm
sure that it works with protocols like TFTP, but it doesn't with TCP. The
reason is that an implementation of TCP should, when possible, retransmit
all the data that it has to send, rather than retransmitting the original
packets.  This is an especially Good Idea when the TCP is sending lots of
little packets (e.g., telnet remote echo).  

The easiest way to implement a TCP with this desired behavior is to have TCP
always reformat packets when it sends them, even if some or all of the data
is being retransmitted.  When the amount of data that has backed up is
greater than the MaxSegSize used, this technique does result in retransmitted
packets that are identical to the original packet.  But the normal TCP
implementation doesn't realize this, so it calls ip_Send not ip_Retransmit,
and IP allocates a new Identification number for each packet.  Thus, buffer
space at the receiver fills up with fragments, all of which are
retransmissions of the same packet.

So the General Problem:
	TCP and IP both perform fragmentation and reassembly.  IP reassembly
	can only be robust when its client is not also performing reassembly,
	because it relies on the client to resend identical packets to be
	robust, which the client will not do if it is dynamically
	fragmenting. When TCP and IP are used together, TCP fragmentation 
	should be used to the exclusion of IP fragmentation.

The above is obvious, but when I restate it, it comes out different from
what has been said before:

The Specific Problem:
	A major goal of the MaxSegSize option is to prevent IP fragmentation
	from occurring.  Another goal is to allow hosts with
	different sized buffers to be able to exchange packets without
	overflowing each other's limits.

The ordering of the goals sounds phoney, since the probability of failure if
the option is ignored is 1 in the second case but less than 1 in the first.  But the
goal of preventing IP fragmentation is the harder of the two to implement: a
host knows the largest buffer size that it can accept but can never know the
largest size that will not be fragmented on the way to it (since it can not
know what gateways are used for each packet sent to it).

[ Footnote: I am deliberately assuming that any sized packet can be      ]
[ transmitted over any gateway path.  This is easy to ensure by having   ]
[ every gateway follow two conventions: 1) every gateway can receive the ]
[ largest packet transmitted on each connected network, and 2) every     ]
[ gateway can fragment outgoing packets.                                 ]

The Specific Solution:
	The TCP MaxSegSize option should be enhanced by a negotiation that
	persists over the entire course of the connection.  A negotiation
	is needed since only the receiving TCP can know that a packet
	transmitted to it was fragmented on the way.  The negotiation must
	persist during the entire connection because the route used over
	the connection can change with time.

I have thought somewhat about how this might work, but haven't gotten all
the bugs out of it; maybe someone out there can think up some ideas.

The TCP MaxSegSize Negotiation (in addition to the Option, which
works as now):
	At any time during the course of a TCP connection a receiver can
	be informed by the local IP layer that a received packet had been
	reassembled.  The TCP layer then sends a packet (which may have
	data in it, if there is data to send) with a TCP MaxSegSize
	Negotiation in its option field that decreases the offered
	MaxSegSize.  The IP layer may be able to suggest a MaxSegSize to
	use, otherwise the TCP layer must continue to decrease the
	maxSegSize until packets are no longer fragmented.

	Periodically, each receiver can test enlarging its offered maxSegSize
	until it is up to the limit of its buffer space.  This allows the
	receiver to eventually detect a change in the network path between
	itself and the sender.
	The initial maxSegSize offered by a host should be the minimum of
	the host's buffering limitations and the maximum transmissible size
	of the network to which the host is connected.
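
The receiver side of the proposed negotiation might look like this (a sketch
only -- the proposal is unimplemented, and the step-down policy and floor
value are invented for illustration):

```python
def next_offered_mss(current_mss, was_fragmented, hint=None, floor=216):
    """Shrink the offered MaxSegSize while the local IP layer reports
    that incoming packets needed reassembly; use IP's suggested size
    when it has one, otherwise step down until fragmentation stops."""
    if not was_fragmented:
        return current_mss              # (periodic probing upward omitted)
    if hint is not None:
        return max(hint, floor)
    return max(current_mss // 2, floor)  # hypothetical step-down policy

mss = 1458
mss = next_offered_mss(mss, was_fragmented=True)            # no hint: 729
mss = next_offered_mss(mss, was_fragmented=True, hint=536)  # IP hint: 536
print(mss)
```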

The above makes some assumptions:
	1. Internet routing is relatively static.
	2. One path between two hosts is used at a time.

I believe that these assumptions are true in the Internet world.  Anyone
believe that they will not continue to be true?   Anyone have better ideas
about how to negotiate maxSegSizes?

- Geof Cooper
Date:      14-Jun-84 04:26:37-UT
From:      mills@dcn6
To:        tcp-ip@nic
Subject:   Timeout hazards

You may be interested in a problem that has cropped up several times over the
last few months and lately has become serious, especially with mail. The
problem has to do with subtle interactions between TCP timeouts and resource
allocations along the rickety network lines up to and including the
destination TCP. The problem is exacerbated and becomes costly in real money
when public-net paths are involved.

It is well known that the BBN VAN gateway drops the first packet while
attempting to open a connection to a public-net host. Actually, there are
other offenders that do that, including our public-net fuzzballs. Once the
connection is open our fuzzy, for example, will hold it open for two minutes
in the absence of traffic. The corresponding timeout for the VAN gateway is
three minutes. Therefore, if the TCP retransmission timeout ever gets longer
than that, every segment will die while opening a connection that is closed
before the next retransmission comes along.
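
The interaction can be tabulated (binary backoff from a 3-second initial
timeout is assumed for illustration; 180 seconds is the VAN gateway's idle
timeout from this message):

```python
def retransmit_intervals(initial_rto, factor, ceiling, gateway_timeout=180):
    """List the growing retransmission intervals up to `ceiling`, marking
    those that exceed the gateway's idle timeout: from that point on,
    every retransmission arrives after the connection state has been torn
    down, so each one reopens (and is charged for) the circuit."""
    rto, out = initial_rto, []
    while rto <= ceiling:
        out.append((rto, rto > gateway_timeout))
        rto *= factor
    return out

for seconds, doomed in retransmit_intervals(3, 2, 600):
    print(seconds, "lost" if doomed else "ok")
```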

It turns out that the ISID TCP, for instance, will back off to at least two
minutes between retransmissions and will sustain these retransmissions
indefinitely. In a recent case our fuzzball and ISID got into a state where
every such retransmission was lost with attendant public-net charges and would
have stayed in that state accumulating charges forever, absent operator
intervention. Morals: (1) do not permit TCP retransmissions to persist
indefinitely (however long that is), (2) do not permit retransmission timeouts
to back off to unreasonable values (e.g. greater than two minutes), (3) insist
that any network state assumed as the result of dropping a packet persist for
longer than the longest expected TCP timeout (e.g. greater than three
minutes). Score: fuzzball 0, Internet 1.

Here is another more sinister problem. In a little personal computer with
limited resources it may not be possible to always have a connection block
lying around for incoming TCP connections (big computers like TOPS-20s also
have that problem). A natural scheme for dynamically creating processes and
connection blocks to deal with this is to create a process upon receipt of a
TCP segment that doesn't match any existing connection and wait for the system
and that process to create the connection block, then toss the TCP segment,
which must be parked temporarily somewhere, at it. If for some reason system
resources do not permit the creation of a connection block after a timeout,
the segment is discarded.

All this works fine unless the originator of the connection has an
unrealistically short initial retransmission timeout (anything shorter than
three seconds is asking for trouble - see RFC889). In the case here the time
for the connection block to come alive could be from a fraction to several
seconds, which increases the apparent roundtrip delay accordingly. The result
is that initial SYN retransmissions may either (1) clog up the receiver
buffers, (2) be dropped without trace, (3) elicit ICMP unreachable or quench
messages or (4) elicit TCP resets.
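A retransmission-timeout schedule that respects both warnings (an initial
timeout of at least three seconds per RFC 889, and backoff capped below the
two-minute danger zone discussed earlier) might be sketched as follows; the
binary-backoff policy and the retry limit are illustrative assumptions, not
any particular implementation:

```python
# Illustrative retransmission-timeout schedule honoring the advice above:
# an initial timeout of at least 3 seconds (RFC 889), a ceiling of
# 2 minutes so backoff never outlives public-net idle timeouts, and a
# finite retry count so retransmissions do not persist indefinitely.

INITIAL_RTO = 3.0        # seconds; anything shorter "is asking for trouble"
MAX_RTO = 120.0          # cap backoff below the 2-minute danger zone
MAX_RETRANSMISSIONS = 8  # do not retransmit forever

def rto_schedule():
    """Return the successive timeout values a sender would use."""
    rto = INITIAL_RTO
    schedule = []
    for _ in range(MAX_RETRANSMISSIONS):
        schedule.append(rto)
        rto = min(rto * 2, MAX_RTO)   # binary backoff, clamped
    return schedule

print(rto_schedule())  # [3.0, 6.0, 12.0, 24.0, 48.0, 96.0, 120.0, 120.0]
```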

Having been burned by both (1) and (2) (see above), I recently changed our
fuzzballs to (3). The ISI TOPS-20s do (4) in this case, which I wanted to
avoid on general principle, since it is hard to process TCP segments without
TCP connection blocks. My 4.2bsd pals with their artfully short retransmission
timeouts then were obliged to make sense out of the following behavior: The
first SYN got held in the fuzzball pending creation of the connection block,
while one or two succeeding retransmissions were bounced with ICMP
port-unreachable messages. When the connection block was finally created, a
SYN/ACK was dutifully returned to the apparently confused originator, who
closed the connection (!) and went away only to retry the whole nonsense
several times in quick succession. Since this was mail, the spasm of
connect/close recurred periodically for several hours until found out by a
reasoning being.

Most of us scratching down in the sewers of the Internet plumbing have come to
realize that ICMP error messages (and TCP resets on initial connection) are a
noisy indication of connection failure, so that the attempt should be retried
a few times (without loss of connection state!). This is especially important
with SATNET paths and overloaded gateways in general. Since I know that many
implementations have had to patiently retry connection-opens in the face of
TOPS-20 resets, it would seem reasonable to do the same thing in the face of
ICMP error messages.
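The retry discipline suggested here can be sketched as follows (the
SoftError class and the try_connect helper are hypothetical stand-ins, not
any real API):

```python
# Sketch of the retry policy suggested above: treat ICMP errors and
# resets during connection establishment as advisory ("noisy") and
# retry a few times, without losing connection state, before giving up.

class SoftError(Exception):
    """ICMP unreachable/quench, or a reset during the initial SYN."""

def open_with_retries(try_connect, attempts=3):
    """Call try_connect() up to `attempts` times, tolerating soft errors."""
    last_error = None
    for _ in range(attempts):
        try:
            return try_connect()      # returns a connection on success
        except SoftError as e:
            last_error = e            # noisy failure: keep state, retry
    raise last_error                  # only now report a hard failure

# Demo: a connection attempt that fails twice, then succeeds.
state = {"tries": 0}
def flaky_connect():
    state["tries"] += 1
    if state["tries"] < 3:
        raise SoftError("ICMP net unreachable")
    return "connection open"

print(open_with_retries(flaky_connect))   # connection open
```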

These problems certainly contribute to the incidence of broken connections and
superfluous network traffic. In fact, it would be interesting to tally the
observed number of TCP SYN segments as a percentage of total TCP segments in
the system (perhaps in the gateways?). The above discussion also emphasizes
the fact that unrealistic initial retransmission timeouts are the single most
deadly disease infecting the host population today.

Date:      15 Jun 84 10:42:43 EDT
From:      dca-pgs @ DDN1.ARPA
To:        tcp-ip @, info-nets @, info-unix @, unix-wizards @
Cc:        dca-pgs @ DDN1.ARPA, hawgs @
Subject:   What Are Typical Memory Requirements for TCP/IP? (Also FTP/SMTP/Telnet)
I'm trying to find answers to subj question. What are the numbers
for implementations out there? Pls reply direct to <dca-pgs@ddn1>.

Thank you,
Pat Sullivan

Date:      15 Jun 1984 1444-EST (Friday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        mills@dcn6.ARPA
Cc:        tcp-ip@nic.ARPA
Subject:   Re: Timeout hazards

What an interesting problem! I had never thought about this before.
Most of the CSNET X.25 sites don't run with timeouts enabled, but I
will keep it in mind when fielding problems in the future.

For the record, the current generation CSNET/X.25 code uses a
five-minute inactivity timeout to close channels.

Date:      Sat, 16 Jun 84 00:24:09 EDT
From:      Paul Milazzo <>
To:        Christopher A Kent <>
Subject:   Re: Timeout hazards
I guess that most of the IP-over-X.25 implementations drop the first IP
packet after opening an X.25 channel.  It occurs to me that this
behavior completely torpedoes any plans to run EGP over X.25
connections, since the channel timeouts seem to be shorter than the
recommended HELLO interval, and thus every HELLO message would be dropped.

				Paul G. Milazzo <Milazzo@Rice.ARPA>
				(temporarily hiding at BBN)
				Dept. of Computer Science
				Rice University, Houston, TX
Date:      Mon, 18 Jun 84  8:35:09 EDT
From:      Bob Hinden <hinden@BBNCCQ.ARPA>
To:        Paul Milazzo <>
Cc:        Christopher A Kent <>,,
Subject:   Re: Timeout hazards
The VAN gateway will drop the first datagram if it arrives on the Arpanet
side destined for the X.25 network.  If the host (or gateway) on the
X.25 side opens a connection to the VAN gateway, it will not drop
the first datagram.

I assume that your EGP gateway will be on the X.25 side, so there should
not be a problem.


Date:      Mon, 18 Jun 84  9:07:01 EDT
From:      Mike Brescia <brescia@BBNCCQ.ARPA>
To:        Paul Milazzo <>
Cc:        Christopher A Kent <>,,, brescia@BBNCCQ.ARPA
Subject:   Timeout hazards and EGP stub gateways
There are two minimizing assumptions we can make which can cut down on 
connect time for stub gateways over X.25 nets ($$$net).

- When there is no user data, and the connection has been timed out and closed,
  you don't need to reopen the connection just to see if the other side is
  still there.  Assume the other side is up until user data comes along,
  then attempt to open the connection.  If the connection fails, report new
  information (ICMP net unreachable, new routing updates, &c.)

- When first acquiring an EGP neighbor over X.25, set the intervals for
  hello and net reachability poll 'very long'.  The field is 16 bits of
  seconds, so you need not open a connection but every 3/4 day (18+ hours).

The underlying assumption is that, since this is a stub gateway to some
net and there is no other path, there is no need to notice when the gateway
or net goes down until some traffic arises for that net.  When some connection
is attempted to the stub net, the X.25 connection must be reestablished, and
then the net reachability can be deduced.

first packet: (toward X.25 side)
	drop packet and initiate X.25 connection
	if connection fails
	then declare neighbor down 
	     if no other path to NET
	     then declare NET unreachable (internal tables)

second packet: (this is normal gateway action)
	if NET reachable
	then forward packet
	else if NET reachable another way
	then return (ICMP redirect)
	else if NET unreachable
	then return (ICMP net unreachable)
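The two-step logic above might be transliterated into executable form like
this (the state dictionary and the open_x25_call helper are invented for
illustration):

```python
# Transliteration of the first-packet / second-packet logic above.
# Network state is modeled as a simple dict; open_x25_call() stands in
# for real X.25 call setup and is an assumed helper.

def first_packet(state, open_x25_call, other_path_to_net=False):
    """Drop the packet and use it as the trigger to open the X.25 call."""
    if open_x25_call():
        state["neighbor"] = "up"
        state["net"] = "reachable"
    else:
        state["neighbor"] = "down"
        if not other_path_to_net:
            state["net"] = "unreachable"   # internal tables only
    return "dropped"

def second_packet(state, reachable_another_way=False):
    """Normal gateway forwarding decision once state is known."""
    if state.get("net") == "reachable":
        return "forward"
    if reachable_another_way:
        return "icmp redirect"
    return "icmp net unreachable"

state = {}
first_packet(state, open_x25_call=lambda: True)
print(second_packet(state))   # forward
```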

Date:      19 Jun 1984 00:14:23 EDT
To:        hinden@BBNCCQ.ARPA, milazzo@CSNET-DEV.ARPA
Subject:   Re: Timeout hazards
In response to the message sent  Mon, 18 Jun 84  8:35:09 EDT from hinden@BBNCCQ.ARPA


I will have to think about the DLV11 problem.

PATCH.SAV is the DEC patch utility. See RT-11 documentation. Assemble PATCH.MAC
(nothing to do with the DEC module) with MACRO PATCH+CFGLIB.SML/LIB . See
the command files SYSGEN.COM and follow all the referenced command files
(sometimes many generations removed) to reveal all there is to know about

Date:      19-Jun-84 03:27:41-UT
From:      mills@dcn6
To:        MikeBrescia<brescia@BBNCCQ.ARPA>, PaulMilazzo<>, ChristopherAKent<>,,
Subject:   Re: Timeout hazards and EGP stub gateways


While your method does reveal whether the EGP neighbor on the other side of the
VAN net does or does not respond to a connection request, it lacks
sufficient functionality to be usable as an EGP Hello or Update. It also does
not reveal whether the EGP peer is operable at the EGP level, only whether it
responds to an incoming-call packet. We have all seen numerous cases where a
gateway responds at the link level, in spite of brain damage at higher levels.
While either of us can contrive ad-hoc mechanisms to make EGP work
even when the "first" packet is dropped, the only acceptable course in the long
run is to stop doing that.

While having been guilty of nonchalant dropping of packets myself, I have had
enough trouble recently to think it an exceedingly bad thing to do. The first
packet that comes along after a long while is almost certainly of great value,
even if the one that comes hard on its heels often is not. The lesson would seem
to be: hang on to that first packet tenaciously - if the queue starts to fill
up behind it, then dump a few of those. Even so, we could probably quickly think
up scenarios where even that could get into trouble.

Date:      Tue, 19 Jun 84 16:17:43 BST
From:      Robert Cole <>
To:        Paul Milazzo <>
Cc:        Christopher A Kent <>,,
Subject:   Re:  Timeout hazards
1. Our implementation of IP>X.25 at UCL does not drop packets trying to open
calls, I am appalled that anyone else does (and consequently gets what
they deserve). Our IP>X.25 runs on a very busy 11/23.

2. I believe the running of EGP over such a link to be a suitable subject
for research. It has to work. It cannot be dismissed.

Robert Cole.
Dept of Computer Science
University College London, UK

Date:      21 Jun 84 16:43 EDT
From:      Allan Lang <ntec@nalcon>
To:        tcp-ip@sri-nic
Cc:        ntec@nalcon, avrunin@nalcon, ddn-navy@DDN1
Subject:   DDN Simulation

Who is the point of contact for ARPA/DDN simulation and network
models? We are trying to simulate network loading on a new IMP/TAC

Respond to NTEC@NALCON.

Allan L. Lang

Date:      22 Jun 84 12:05:54-PDT (Fri)
From:      ihnp4!houxm!houxz!vax135!cornell!uw-beaver!ssc-vax!fluke!joe@UCB-VAX.ARPA
To:        Unix-Wizards@BRL-VGR.ARPA
Subject:   Re: internet broadcast addresses
Yes - I have found Berkeley's choice to be quite annoying.  Their use
of this blatantly non-standard bug can cause problems if you are trying
to connect heterogeneous machines on the same network.  We have 5 VAXen
running 4.2, 4 SUN-2s running 4.2, a VMS VAX and an IBM 3083 sharing
the same Ethernet and trying to send mail, files, etc., back and forth.
In addition, the UNIX VAXen each send out the rwho packets once every
minute.  So, that is 5 broadcast packets per minute, each with the
internet destination address of (we use net number
192.9.200), or approximately one packet every 12 seconds.  These
packets go out over the Ether with Ethernet addresses of all one's, so
everyone HAS to receive them and then decide if they are important or
not.  The SUNs don't participate in the rwho junk, so they ignore the
packets.  Similarly, the IBM seems to safely ignore the packets.  On
VMS, we are running Compion's Access-I software.  This is a very well
thought out package, with perhaps the only major problem being that it
was *modularly designed*.  No, modularity is great and wonderful, but
if you stick too closely to modularity in network design, especially in
the interface between layers, you may lose big.  When Access receives
the packet, the network interface module strips the Ethernet cruft off
the packet before passing it to IP.  IP says "who is this host  I never heard of them before!" and proceeds to send out
two response packets: one is an ICMP message to the sending host saying
"I don't know who this is" and a retransmission of the
packet it just received.  Well, the NIM can easily send out the ICMP
message because it already knows the ethernet address of the sender,
but it can't retransmit the rwho packet since it doesn't know what the
ethernet address is.  So, the NIM blithely sends out an ARP trying to
find out who is!

I found out about this behavior when we first booted the IBM software.
We had console tracing turned on and noticed a message about ARP adding
a translation for the VMS machine several times per minute.  At first I
thought that the VMS ARP implementation was screwed up and just sending
out constant ARP packets, but then I looked at some detailed traces
from the Access software (you can get INCREDIBLE levels of tracing from
Access - more than you EVER wanted to know!) and discovered this
side-effect of modularity!  The real problem is that IP doesn't know
that the packet came in as a broadcast packet, and the NIM doesn't
realize that IP is retransmitting, as a specific-address packet, a
packet that it received as a broadcast.  If IP *knew* that the packet
with the 0 address was a broadcast packet, it could just throw it away
and all would be fine.  If the NIM *knew* that the only broadcast
packets it wants to see are.

Sigh.  Aren't non-standards fun!  At least it all works!
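For concreteness, the two broadcast conventions at issue (4.2BSD's all-zeros
host part versus the all-ones form later codified in RFC 919) can be
computed for the poster's class C net:

```python
# The two competing broadcast conventions on a class C network.
# 4.2BSD used a host part of all zeros; the form with a host part of
# all ones was later codified in RFC 919.

import ipaddress

net = ipaddress.ip_network("192.9.200.0/24")

bsd42_broadcast = net.network_address        # 4.2BSD's nonstandard choice
standard_broadcast = net.broadcast_address   # all-ones host part

print(bsd42_broadcast)     # 192.9.200.0
print(standard_broadcast)  # 192.9.200.255
```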

Date:      Sat 23 Jun 84 13:02:07-PDT
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
Subject:   gateways
     Now that a year or so has passed in the saga of gateways, pinging, and
prime vs. non-prime gateways, I feel adventurous enough to bring up the
issue again.

     Most TOPS-20 systems are still pinging.  When the DCA request came
about a year ago to only ping two gateways, we found that we lost
connectivity with a lot of places with (soon to be illegal) dumb gateways.
Then along came the Milnet transition.

     My compromise so far has been to ping our two assigned Milnet gateways,
a third prime gateway, and all of the dumb gateways on network 10.  The
third prime gateway was selected pretty much randomly; I picked DCEC-GATEWAY
as it is at the other end of the country from our two Milnet gateways.  It
seemed reasonable to know about a prime gateway different from the Milnet
ones if only to start our extensive non-Milnet traffic at a different place
from our (equally extensive) Milnet traffic.

     I imagine that there have been advances in the past year which would
allow a somewhat better strategy without all that pinging.  Perhaps there
is a more suitable "third gateway" for me to use than DCEC.

     I would prefer not to experiment; I have too much else to do.  I am
willing to hear from those people playing "gateway wars" if there is a
better strategy (and from BBN/DEC about how soon we will have a TOPS-20
that doesn't need to ping).  At least for 1822 networks, pinging ought to
be completely unnecessary if only 1822 passed up type 7 indications to the
IP gateway handling level!
Date:      Sat 23 Jun 84 12:05:59-EDT
From:      Bob Cattani <CATTANI@COLUMBIA-20.ARPA>
To:        unix-wizards@BRL.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        hu@COLUMBIA-20.ARPA, chris@COLUMBIA-20.ARPA
Subject:   EGP for Unix 4.2
Has anyone added any gateway support to Berkeley 4.2 Unix?
Specifically EGP?
-Bob Cattani, Columbia University CS Dept.
Date:      25 Jun 84 09:57:26 EDT
From:      dca-pgs @ DDN1.ARPA
To:        tcp-ip @, hawgs @
Cc:        b615 @ DDN1.ARPA, b616 @ DDN1.ARPA, navyusers @ DDN1.ARPA, navyusers @, ddn-usaf @ DDN1.ARPA
Subject:   New TCP/IP LAN Product

Noted in the 15 Jun 84 issue of DATAMATION:

"FUSION is a networking SW package interconnecting a wide
variety of computer processors, OS's, LAN HW, and network
interfaces with a goal of achieving vendor interoperability.
FUSION offers a choice of two LAN protocols, Xerox XNS and
ArpaNet TCP/IP.

The package will network between single and multiprocessing
systems; different OS's; different processor types; and different
LAN HW and network interfaces...It can be ported to any processor
hosting an Ethernet controller and C compiler. It runs on any
OS with a C compiler, and requires no modification of that OS.

                           . . .

FUSION SW prices range from $750 to $6000. [The] FUSION [package]
(plus [Ethernet] network HW...) ranges in price from $1500 to $7500."

Network Research Corp, LA, CA, DATAMATION, 15 Jun 84, p.232.


Anyone out there know anything about this? 
Thanks for all info.

Pat Sullivan
(on loan to) DDN/PMO

Date:      Mon, 25 Jun 84 19:24:52 edt
From:      God <>
Subject:   request to join

	Please add my name to your mailing list. Thank you.


Date:      26 Jun 1984 0631-PDT (Tuesday)
From:      crepea@sri-spam (Ken Crepea)
To:        tcp-ip@sri-nic
Cc:        crepea@sri-spam
Subject:   name addition

Please add my name to your distribution list.  Thanx.

Date:      Tue, 26 Jun 84 23:40:02 CDT
From:      Dave Johnson <dbj@rice.ARPA>
Subject:   TCP urgent pointer
I'm having a bit of trouble with the TCP specs in RFC 793 regarding the
meaning of the urgent pointer in transmitted segments.  On page 17 of RFC
793, the definition of the urgent pointer reads:

  Urgent Pointer:  16 bits

    This field communicates the current value of the urgent pointer as a
    positive offset from the sequence number in this segment.  The
    urgent pointer points to the sequence number of the octet following
    the urgent data.  This field is only be interpreted in segments with
    the URG control bit set.

However, later, on page 56, under the description of the SEND call in the
"Event Processing" section, it says:

      If the urgent flag is set, then SND.UP <- SND.NXT-1 and set the
      urgent pointer in the outgoing segments.

This seems to be in contradiction to the definition of the urgent pointer
quoted above from page 17.  The original definition indicates that the
urgent pointer points to the first octet FOLLOWING the urgent data, but the
setting of SND.UP in Event Processing treats it as pointing to the last
octet that STILL IS urgent.  Digging around in RFC 793 further, I find the
following on pages 40 and 41:

  This mechanism permits a point in the data stream to be designated as
  the end of urgent information.  Whenever this point is in advance of
  the receive sequence number (RCV.NXT) at the receiving TCP, that TCP
  must tell the user to go into "urgent mode"; when the receive sequence
  number catches up to the urgent pointer, the TCP must tell user to go
  into "normal mode".  If the urgent pointer is updated while the user
  is in "urgent mode", the update will be invisible to the user.

  The method employs a urgent field which is carried in all segments
  transmitted.  The URG control flag indicates that the urgent field is
  meaningful and must be added to the segment sequence number to yield
  the urgent pointer.  The absence of this flag indicates that there is
  no urgent data outstanding.

This also seems to treat the urgent pointer as pointing to the last octet
that still is urgent.  As a last reference, I dug into past mail from this
list, and I came across the following:

    Date: 30 Nov 1983 19:27:25 PST
    Subject: re: Philosophy, Consistency, and Questions
    To: tcp-ip@SRI-NIC


    DCP:  Reference Page 56, SEND call, the point of this message: "If the 
    urgent flag is set, then SND.UP <- SND.NXT-1 and set the urgent pointer 
    in outgoing segments."  When I receive urgent data, everything up to AND
    INCLUDING the urgent byte number is urgent.  Therefore, this is an 
    INCLUSIVE number?  Yes?  No?  Why the inconsistency?  Or don't others 
    see it this way?

    JBP:  Yes. I don't know that any one else ever gave much thought to the 
    inclusive/exclusive categorization of these numbers.  It may be 
    philosophically inconsistent from that point of view, but it does not 
    seem to have any implementation problems.


This seems to agree with my earlier reference to this page of RFC 793 in
that the urgent pointer does NOT point to the first octet FOLLOWING the
urgent data, but instead points to the last octet that STILL IS urgent.

Does all this mean that the original definition of the urgent pointer on
page 17 is wrong?
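The two readings can be made concrete with some invented numbers (a sketch,
not implementation code):

```python
# The two readings of the urgent pointer, using invented sequence
# numbers: segment sequence number 1000, carrying 10 octets, of which
# the first 5 (sequence 1000..1004) are urgent.

SEG_SEQ = 1000
LAST_URGENT_OCTET = 1004   # sequence number of the final urgent octet

# Page 17 reading: pointer is the offset of the octet FOLLOWING the
# urgent data.
urg_exclusive = (LAST_URGENT_OCTET + 1) - SEG_SEQ   # 5

# Page 56 reading (SND.UP <- SND.NXT-1): pointer is the offset of the
# last octet that still IS urgent.
urg_inclusive = LAST_URGENT_OCTET - SEG_SEQ         # 4

print(urg_exclusive, urg_inclusive)   # 5 4
```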

                                        Dave Johnson
                                        Dept. of Computer Science
                                        Rice University
Date:      27 Jun 84 14:26 PDT
From:      Tom Perrine <tom@LOGICON.ARPA>
To:        Postmaster@COLUMBIA-20
Cc:        TCP-IP@SRI-NIC, Postmaster@LOGICON, John Codd <john@logicon>
Subject:   SMTP and TCP problems
We keep getting "nasty-grammed" by the COLUMBIA-20.ARPA SMTP mailer when
it attempts to deliver mail to us. 

The first attempt to deliver a piece of mail to us fails, and often
hangs our SMTP server.  The second attempt usually has a header field
called "Delivery-Notice", which informs us that it had to use "50 byte,
pushed segments" or something similar.

From our end (LOGICON.ARPA) we see the TCP connection closed from
your end *before* the CRLF.CRLF sequence to terminate the SMTP 'DATA'
protocol.  This should not be happening.

Has anyone else mentioned or noticed any similar problems?

From discussion in the sig-tcp mailing list and our trace efforts, we
suspect that these problems (delay/timeout and segment size/push) are a
"feature" of the gateway. What gateway are you using to reach MILNET?

Tom Perrine

Date:      27 Jun 1984 16:44:33 PDT
To:        tom@LOGICON.ARPA, Postmaster@COLUMBIA-20.ARPA
Subject:   Re: SMTP and TCP problems
In response to the message sent  27 Jun 84 14:26 PDT from tom@LOGICON.ARPA

The problem looks to me like a combination of a fast timeout and a failure
to push the data (both on the sender side), but....  There could be other
bugs that would produce this effect.  If the receiver side miscalculated
something and/or failed to process a less than full buffer even if pushed,
one might get the same effect.

In general, one should look for the problem in the system that is new to the
game, and one should test with lots of different systems.

There is a file of old discussion about SMTP problems.  This can be copied
from USC-ISIF.ARPA via FTP.  The file name is <SMTP>MAIL.ERRORS.

Date:      Wed, 27 Jun 84 17:54:50 EDT
From:      Doug Kingston <dpk@BRL-TGR.ARPA>
To:        Tom Perrine <tom@LOGICON.ARPA>
Cc:        Postmaster@COLUMBIA-20.ARPA, TCP-IP@SRI-NIC.ARPA, Postmaster@LOGICON.ARPA, John Codd <john@LOGICON.ARPA>
Subject:   Re:  SMTP and TCP problems
Smells like TCP or IP fragmentation/reassembly problems and I'd put my
money on IP in either your host or a host you connect to.  Is any of
the involved code developed from 4.1a?

Date:      27 Jun 84 2010 EDT
From:      Rudy.Nedved@CMU-CS-A.ARPA
Subject:   Re: SMTP and TCP problems
LOGICON's SMTP server does not handle interactive character-oriented
TELNET connections. It looks like an input problem in the SMTP
server. If I remember correctly, Unix will return a positive number
of bytes up to the length of the buffer or the length of data in the
packet, whichever is less. Maybe it expects the <crlf>.<crlf> to
be in the last packet that is part of the message and is caught off
guard when a new packet arrives with just <crlf>.<crlf>.

It should be interesting to see if mail from CMU-CS-A works to LOGICON.
I have silent code in place that will do the 50-byte push hack if it does
not work the first time.
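For what it's worth, the input problem described here disappears if the
server scans a persistent buffer for the end-of-data marker rather than
expecting it in the same read as the message tail; a minimal sketch (the
helper name is invented):

```python
# Sketch of terminator detection that is robust to how TCP happens to
# segment the data: accumulate successive reads in one buffer and look
# for the SMTP end-of-data marker there, rather than expecting it to
# arrive in the same recv() as the end of the message.

TERMINATOR = b"\r\n.\r\n"

def collect_message(reads):
    """reads: iterable of byte chunks as returned by successive recv()s."""
    buf = b""
    for chunk in reads:
        buf += chunk
        end = buf.find(TERMINATOR)
        if end != -1:
            return buf[:end]          # message body, terminator stripped
    return None                       # connection closed early

# The terminator may arrive entirely in a packet of its own:
print(collect_message([b"Subject: hi\r\n\r\nbody", b"\r\n.\r\n"]))
```

(A real server would also have to undo SMTP dot-stuffing; that is omitted
here.)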

Date:      28 June 1984 07:40-EDT
From:      David C. Plummer <DCP @ MIT-MC>
To:        dbj @ RICE
Subject:   TCP urgent pointer
    Date:        Tue, 26 Jun 84 23:40:02 CDT
    From: Dave Johnson <dbj@rice.ARPA>

    I'm having a bit of trouble with the TCP specs in RFC 793 regarding the
    meaning of the urgent pointer in transmitted segments.  On page 17 of RFC
    793, the definition of the urgent pointer reads:

    This seems to agree with my earlier reference to this page of RFC 793 in
    that the urgent pointer does NOT point to the first octet FOLLOWING the
    urgent data, but instead points to the last octet that STILL IS urgent.

    Does all this mean that the original definition of the urgent pointer on
    page 17 is wrong?

Yup.  The urgent pointer is the only inclusive upper limit in
TCP.  Further evidence for this, which you glossed over, is a
statement that if you want to send an urgent pointer you must
send at least one byte of data.  If it were exclusive, you could
set it on a zero length segment and hope it got through.  (This
is kludgy; if you think about it, all the reasons for this just
don't apply and give weight to the argument it SHOULD be

In the TCP I wrote, I keep track of the urgent pointer for input
and output.  It may even be correct, but since there are no
applications (that use my implementation) that generate it and
no applications that would know what to do if they got it, I
really can't say for sure!

As I recall, the only known use of urgent is to make the TELNET
IAC IP kludge work.  Unfortunately, such a feature is very TCP
specific and doesn't extend well to other byte streams.  A
generic program would have trouble using it.

Date:      28 Jun 1984 15:26-EDT
To:        tcp-ip@SRI-NIC.ARPA
Cc:        vbegg@BBNG.ARPA
Subject:   mailing list for common internet problems
If there is such a thing, I'd like to be on it.  Viv Begg
Date:      Thu 28 Jun 84 17:19:35-EDT
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
Subject:   Re: TCP urgent pointer
	It makes sense that URGENT should be associated with new data
since URGENT in and of itself says nothing; it merely says that there
is some data in the stream that is urgent. Presumably you have to have
some data to mark URGENT!
	Every protocol has some feature that doesn't map onto another
protocol worth a damn. Many protocols do have an out-of-band signal,
Date:      28 June 1984 22:32-EDT
From:      David C. Plummer <DCP @ MIT-MC>
To:        JNC @ MIT-XX
Subject:   Re: TCP urgent pointer
    Date: Thu 28 Jun 84 17:19:35-EDT
    From: J. Noel Chiappa <JNC@MIT-XX.ARPA>

    	Every protocol has some feature that doesn't map onto another
    protocol worth a damn. Many protocols do have an out-of-band signal,

URGENT is not, repeat -NOT-, an out of band signal, at least not
in the sense I was taught.  For one thing, a second URGENT that
is sent before the data that the first URGENT is declaring urgent
has been acknowledged will cause the first URGENT to be
superseded.  (If that sentence didn't make sense, break it into

URGENT, even if you do decide to argue, is very application
dependent.  I don't know if the TELNET spec has been modified to
say programs should attempt to use URGENT thus-and-so.  I'm
pretty sure TCP-FTP doesn't.  SUPDUP certainly doesn't.  Generic
(and reliable) byte streams cannot depend on anything other than a
stream of bytes.  TCP provides such a reliable byte stream and
has the PUSH and URGENT flags as well.  I'll bet the only place
the PUSH flag makes any significant difference is Multics.  (The
argument is that most systems process data when it arrives.  What
PUSH is really supposed to mean is that the sender
-told- the byte stream to :FORCE-OUTPUT the data.  In theory, it
should not be set on segments that were sent because the sending
buffer filled up.  (Theory often does not correspond to
practice.)  Therefore, it is possible, at the receiving end, to
have either a full window or a not full window with the push flag
set on the last segment.  Most systems just don't care; if they
have data, they use it.  Multics has an expensive context switch,
so it may decide to defer until either a full buffer or a PUSH
(provided other systems use PUSH in the way I just described).
Note, however, that the deferring causes breaks in the pipelining
of data which may be noticeable.)  URGENT, in my opinion, is a
crock to make some user-TELNET programmer happy.
Date:      Fri, 29 Jun 84 15:24:40 EDT
From:      dca-pgs <dca-pgs@DDN1.ARPA>
Cc:        dca-pgs@DDN1.ARPA, jmallory@DDN1.ARPA
Subject:   Is JHU/APL or LEAD on the net?
Is Johns Hopkins University Applied Physics Lab or 
Letterkenny Army Depot on the Internet?

Pat Sullivan

Date:      Fri, 29 Jun 84 15:56:28 EDT
From:      Doug Kingston <dpk@BRL-TGR.ARPA>
To:        dca-pgs <dca-pgs@DDN1.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, hawgs@SRI-NIC.ARPA, dca-pgs@DDN1.ARPA, jmallory@DDN1.ARPA
Subject:   Re:  Is JHU/APL or LEAD on the net?
The Johns Hopkins University Applied Physics Laboratory, JHU/APL,
can be reached via BRL-BMD.ARPA.  They connect to us via UUCP.
Their machine is a VAX/780 running UNIX, hostname "aplvax".
Mail addressed to "user%aplvax@brl-bmd" will reach them.  Headers
coming out may be illegal for a short time until BRL-BMD changes
to the latest MMDFII mail system.

Date:      Fri, 29 Jun 84 16:33:10 EDT
From:      dca-pgs <dca-pgs@DDN1.ARPA>
Cc:        dca-pgs@DDN1.ARPA
Subject:   Compion Access-T Query
A big favor, please...

Would everyone who is now using the Compion Access package
for VAX/VMS kindly let me know who you are? Any opinions you
have of this package are welcome, too. My apologies for having
asked this question before. 

Also, replies are invited from anyone who once used Access
but is no longer using it at this time.

Many thanks,
Pat Sullivan