The 'Security Digest' Archives (TM)


ARCHIVE: TCP-IP Distribution List - Archives (1985)
DOCUMENT: TCP-IP Distribution List for January 1985 (91 messages, 43168 bytes)


Date:      2 Jan 1985 at 0926-EST
From:      hsw at TYCHO.ARPA  (Howard Weiss)
To:        tcp-ip at sri-nic
Subject:   re: The Netweaver's Sourcebook

As a matter of fact, I saw a short ad announcing Data General's TCP/IP
which stated that TCP/IP is a UNIX network protocol that is used on
local area networks!!  Never believe what you read in the mass media!

Howard Weiss

Date:      2 Jan 1985 at 1055-EST
From:      hsw at TYCHO.ARPA  (Howard Weiss)
To:        tcp-ip at nic
Subject:   re: The Netweaver's Sourcebook
Reference my last message about the Data General TCP/IP - I should
add that the tone of the announcement was that TCP/IP was ONLY
a UNIX protocol and was ONLY used on local nets.

Howard Weiss

Date:      Wed, 2 Jan 85 13:54:28 est
From:      BostonU SysMgr <>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   re: The Netweaver's Sourcebook

If people would like to see a survey of which organizations are
using TCP/IP as their local area networking protocol, send me the
answers to the following few questions and I would be glad to
summarize to interested parties and/or the net.  Obviously this
is biased towards readers of this list:

1. Major organization? Boston University
2. Minor organizations? Computer Science,Engineering,Comp.Ctr.
3. Official ARPANET member? no
4. Physical layers? ethernet, RS232, broadband
5. Total local nodes? 14+
6. Optional list of implementations/host types? 4.2bsd,TWG,WISCNET others
7. Other Major LAN protocols/#nodes ? DECNET/13,BITNET/4

This does little justice to the relative importance of TCP/IP
as your LAN protocol, but I think a summary, especially of
questions 3, 5 and 7, would be revealing.

	Mail responses to this address, root%bostonu@csnet-relay
	please keep it short like above.

		-Barry Shein, Boston University

Date:      Tue, 8 Jan 85 9:35:17 GMT
From:      Graham Knight <>
Subject:   4.1 Unix Ether drivers
Dear All,

        At University College London we are installing an
Ethernet and are making connections to VAX 11/750s and PDP
11/44s via DEUNA interfaces.  We run Berkeley 4.2 Unix on the
VAXes and Berkeley 2.8 Unix on the PDPs.  To save me re-inventing
the wheel, does anyone know of the existence of drivers for the
DEUNA under these or similar systems (other than Berkeley 4.2)?
I would be very grateful for any assistance anyone can give me
in locating these.

        Thanks in advance,

                        Graham Knight (knight@UCLCS)

Date:      Tue, 8 Jan 85 16:15:59 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Graham Knight <knight@UCL-CS.ARPA>
Cc:        TCP-IP@SRI-NIC.ARPA, knight@UCL-CS.ARPA
Subject:   Re:  4.1 Unix Ether drivers
We've never spent more than a few minutes converting 4.2 drivers to
2.8 network systems.

The DEUNA driver on our VAX says it came from Lou Salkind at NYU.

Date:      9 Jan 1985 12:38:38 EST
From:      Donald Bailey <DBAILEY@USC-ISI.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Newsletter
Please add my name to the distribution list for your tcp-ip


Don Bailey.
Date:      10 Jan 1985 1212-PST
From:      Contr23 <CONTR23 at NOSC-TECR>
To:        TCP-IP at SRI-NIC
Subject:   UDP spec request
Is it possible to FTP a copy of the UDP specification?

Jim Baldo Jr.
Date:      Thu, 10 Jan 85 20:02:32 est
From:      schoff@cadmus (Martin Lee Schoffstall)
To:        tcp-ip@sri-nic
Cc:        cadtroy!schoff@cadmus, cic@csnet-sh
Subject:   SMTP option TURN and RFC822

In the old BBNNET software the TURN option was implemented
for (delivermail?).  I happen to be running the sendmail
off the 4.2bsd tape, which does not look like it has the
TURN capability.  The reason for the use of TURN in section
(3.8?) was cost.  Being one of the few people who pays
for my packets (IP/Telenet) I wouldn't mind seeing this implemented,
especially for those in the CSNET community who will be accessing
the Arpanet via Telenet (Apple, decwrl, cadmus, uwisc-rsch.....),
if for no one else than themselves.  Has anyone done this?  And
is anyone interested if it were done?

Date:      11 Jan 85 1135 EST
From:      Rudy.Nedved@CMU-CS-A.ARPA
To:        schoff@CADMUS.ARPA (Martin Lee Schoffstall)
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: SMTP option TURN and RFC822
In general, for networking communities like the ARPANET, systems do
not prevent users from connecting to remote ports like the SMTP port.
This would allow randoms to screw up mail by abusing the TURN command.
Servers usually don't have the problem, since system services allocate
"critical" ports before users do.

In situations like MAILNET and UUCP, where some type of authentication
(login) has been done, you can TURN the connection around with less
risk.

All you need is one bad apple random on a system and he/she can spoil
it for the rest.

CMU CS/RI Mail support
Date:      Fri, 11 Jan 85 12:16:42 est
From:      schoff@cadmus (Martin Lee Schoffstall)
To:        tcp-ip@sri-nic
Cc:        cadtroy!schoff@cadmus, cic@csnet-sh
Subject:   TURN in SMTP

I have gotten several messages (thanks!) about TURN and "security".
The gist of the messages is the following:

- If I am a wicked person with the current strategy the worst that I
	can do is SEND bogus mail (I have in fact done this with a
	shell script and telnet), with no TURN there is no way that
	I can RECEIVE mail destined for someone else.  With the TURN
	I would be able to do that.

Now, as I said before I'm pretty sure that some hosts did in fact implement
this, and there are probably some hosts now that do it, (I'd like to hear
from you).  Have they any experience with individuals taking mail that
wasn't for them?

This next question, I hope, will not encourage those who are paid to
be paranoid in the security area; please, no flames or pedantic
explanations for "naive" individuals.

Would it be "reasonable" (not totally secure) to get the IP source
address, check it against the local database, and allow the TURN if
everything matched?
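A minimal sketch of that check in modern Python, purely for illustration: the host names and addresses in the table below are invented, and a real implementation of the era would consult the NIC host table rather than a hand-built dictionary.

```python
# Hypothetical illustration of the proposed TURN safeguard: before
# honoring TURN for a host, require that the TCP connection's source
# IP address be one the local database lists for that host.
# All names and addresses below are invented for this example.

HOST_TABLE = {
    "cadmus": ["192.0.2.10"],
    "bostonu": ["192.0.2.20", "198.51.100.20"],  # multi-homed host
}

def may_turn(claimed_host: str, peer_ip: str) -> bool:
    """Allow TURN only if peer_ip is registered for claimed_host."""
    return peer_ip in HOST_TABLE.get(claimed_host, [])
```

As others in the thread point out, this is a weak check: matching the source address says nothing about which process on that host actually opened the connection.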


Date:      Fri, 11 Jan 85 16:18 EST
From:      "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TURN and IP source address
Checking the IP source address is a fine idea, if your system is
civilized enough to let you have it reliably.  Watch out for
multi-homed hosts ...
Date:      11 Jan 85 1650 EST (Friday)
From:      don.provan@CMU-CS-A.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Subject:   re: TURN and IP source address
What's the IP address got to do with it?  That doesn't make the process
requesting the TURN any more authorized to receive the mail for that host.

Date:      Sat, 12 Jan 85 00:22 EST
From:      Mike StJohns <StJohns@MIT-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: TURN and IP source address.


See RFC 931 for a way of verifying who is calling in for mail via
TCP/IP.  Mike
Date:      Sun, 13 Jan 85 13:34 EST
From:      "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TURN and IP addresses
The responsibility of an SMTP server is to deliver mail for a given
host only to that host.  It is the correspondent host's security
problem to control access to connect to the SMTP server on other
hosts.

Alternatively, you can require that the other end's local port be a
particular port.  But your server has done its part if it validated
the IP address.
Date:      Mon 14 Jan 85 13:33:56-PST
From:      David Roode <ROODE@SRI-NIC.ARPA>
To:        ron@BRL-TGR.ARPA, Rudy.Nedved@CMU-CS-A.ARPA
Cc:        schoff@CADMUS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  SMTP option TURN and RFC822
I don't know where it is written that it is the responsibility
of the connection-originating host to restrict access to remote
TCP ports.  It seems workable to me for the policy to be that
a certain subset of originating ports are known to be under system
control, and the others known not to be.  
Date:      Mon, 14 Jan 85 11:10:03 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Rudy.Nedved@CMU-CS-A.ARPA
Cc:        Martin Lee Schoffstall <schoff@CADMUS.ARPA>, tcp-ip@SRI-NIC.ARPA
Subject:   Re:  SMTP option TURN and RFC822
It is the responsibility of the connecting host to restrict what you can
telnet to.  This is why the option to specify ports other than TELNET is
not generally turned on.   Most systems I use don't allow the use of the
TURN command (albeit for non-security reasons, i.e. mail sending and mail
receiving is handled by two different systems).

Date:      14 Jan 85 1803 EST
From:      Rudy.Nedved@CMU-CS-A.ARPA
To:        David Roode <ROODE@SRI-NIC.ARPA>
Cc:        ron@BRL-TGR.ARPA, Rudy.Nedved@CMU-CS-A.ARPA, schoff@CADMUS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: SMTP option TURN and RFC822
As far as I know vanilla TOPS-20AN, vanilla Unix and other operating
systems can open almost all the ports if you take the union of the
"allowable" range the various operating systems give users. If you
key off of the operating system type field in the host table then you
are at the mercy of whether the information is correct and whether
the system is unmodified or not.

As I said originally, the idea in general to allow the TURN command
for all ip hosts is a bad one at least for the CMU users.

Restricting the TURN command to certain trusted ip nets or complete
host addresses discourages problems. Restricting access to remote
port 25 discourages problems....neither is a solution.

At CMU, we have various protection mechanisms for various
situations.  We implement them based on what has happened and what we
know of the situation that encouraged the incidents. We have gateways
that will not forward packets based on source and destination, so
that network backups of timesharing machines cannot be read by
outsiders; we have access bits that disable any direct connection to
the network; we have gateways that reject all external connections
except to port 25 but allow any internal connections to be made.
Each mechanism is applied dependent on the situation....otherwise we
would incur larger costs on some of the work being done.

At some point, we would like to have a mechanism set up so that any
mail sent from a controlled machine, through controlled machines and
networks to a controlled machine will be "okayed", and mail claiming
to come from within the environment but actually arriving from an
uncontrolled machine and/or network would be marked as "suspect" with
flashing lights.  In the long, long term, we want public key
encryption of mail messages...probably multimedia mail too....but
that is a dream....

It is unfortunate that most of the protection mechanisms were
installed after abuse occurred...abuse which came from all types of
people, both inside and outside of CMU. Sigh.

Date:      Thu, 17 Jan 85 20:56 EST
From:      "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   UDP jumbograms
Um, is there really a limit of 512 on UDP grams?  I thought that UDP
jumbograms were kosher?  (as fragmented into multiple IP grams).  Can
someone expound on the issues with UDP packet sizes?
Date:      Thu 17 Jan 85 22:03:05-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
Cc:        JNC@MIT-XX.ARPA
Subject:   Re: UDP jumbograms
	As far as I know you can send any size UDP packet you want, up
to 2^16 bytes. Nobody is required to take one larger than 576 bytes
without consent (in line with the general IP rule about datagram size
(do you know all about the consequences of that number?)), and as far
as I know there is no protocol (like there is in TCP) for negotiating
a larger one. Probably whatever application you built on top of it
would have to have a special purpose thing to do so if you wanted.
(Note: it should default to 576 if the other guy never replies.)
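The arithmetic behind those numbers can be sketched briefly (assuming a minimum 20-byte IP header, an 8-byte UDP header, and no IP options):

```python
# The 576-byte rule: every host must accept an IP datagram of 576
# octets total; anything larger needs the receiver's consent.  The
# 16-bit IP total-length field caps a datagram at 65535 octets.
IP_HEADER = 20           # minimum IPv4 header, no options
UDP_HEADER = 8
SAFE_DATAGRAM = 576      # must be reassembled by every host
MAX_DATAGRAM = 2**16 - 1

safe_udp_payload = SAFE_DATAGRAM - IP_HEADER - UDP_HEADER  # 548 octets
max_udp_payload = MAX_DATAGRAM - IP_HEADER - UDP_HEADER    # 65507 octets
```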
Date:      Thu, 17 Jan 85 22:18:40 est
From:      Alan Parker <parker@nrl-css>
To:        tcp-ip@nic
Subject:   EGP question
I want to run the EGP software for 4.2 that is described in rfc911.  My
question is, How do I know which neighbor egp gateways to list in the 
etc-egp file?   For example, the sample etc-egp file from ISI has this
in it:

	egpneighbor  isi-gateway	# on arpanet - core EGP
	egpneighbor	 	# css-gateway - core EGP
	egpneighbor  purdue-cs-gw	# on arpanet - core EGP
	# egpneighbor	# bbn-minet-a-gw - core EGP east coast
	# egpneighbor	# aeronet-gw	 - core EGP west coast

Given that I'm on the MILNET part, and given my physical location of
Washington, DC, how do I determine what to use here?   Thanks.

Date:      Fri, 18 Jan 85 20:27:19 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Alan Parker <parker@NRL-CSS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  EGP question
Gateways on the MILNET side should use both the minet and aero gateways
to be safe.

Date:      19 Jan 1985 01:25-PST
From:      William "Chops" Westfield <BillW@SU-SCORE.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, remarks@RUTGERS.ARPA
Subject:   Re: CPU use by TCP/IP
Based on the current status of TCP, especially TOPS-20 TCP, it
probably is not practical to run all of your local users through TCP
Ethertips.  In addition to PUP being faster, people at Stanford (esp.
Kirk Lougheed) have done a lot of work to optimize PUP virtual
terminals.  However, there are sites like ISID who have practically
all of their users coming in through ARPANET connections.

The TCP Ethertip code is currently under development here, and with
necessity being the mother of invention, various TCP implementations
may be improved - there are a couple of less obvious ideas (aside
from just making the code better!).  One solution is to try to cut
back on the number of packets transmitted by doing the equivalent of
TEXTI on the ethertip, and transmitting a whole line of data (or more
or less) at a time.  DECnet does/will do this sort of thing, so it
might be practical to tie this to other protocols too.  Another idea
is to multiplex more than one terminal connection over a single TCP
connection. (I gather DEC has done something like this for LAT - then
again, maybe LAT or something similar can be used for terminals).

Remember that even a normal terminal line can eat up nearly all your
CPU if it is "running open"...

Date:      19 Jan 85 01:21:40 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        remarks@RUTGERS.ARPA
Subject:   CPU use by TCP/IP
I have been shooting my mouth off locally about networking technology. I
had somehow gotten the impression that it would be possible to unplug
all our terminals from our hosts and plug them into terminal servers,
and then access our hosts via Ethernet.  I have been doing a bit of
testing locally and am beginning to wonder whether this is really going
to work.  What bothers me is CPU time.  For TOPS-20 my measurements
suggest that the Internet fork takes around 1.5% of the CPU for an
incoming Telnet connection. When you compare this to CPU time figures
for editors and other things, it is perfectly reasonable that IP and TCP
should take this much CPU time. Unfortunately, this causes sort of a
problem.  I would like to connect at least 30 users via TCP.  It is
going to be somewhat embarrassing if 45% of my CPU goes to TCP/IP.  I
have also taken a look at TCP/IP on our Pyramid (4.2BSD).  I don't have
a good figure for that, because I'm not yet sure I know how to measure
all of the CPU time that is going into TCP, IP, and the Telnet protocol
handling.  But I tried typing out a file at 9600 baud, and apparently
the TELNETD associated with my process used 40% of the CPU (taking CPU
time from PS divided by wall clock).  The same test on our SUN shows
TELNETD taking 75% of the CPU.

The only place I know of that really uses terminal servers for almost
everything is Stanford.  But they are using PUP.  We also have PUP (much
of the code taken from Stanford) on our DEC-20, and it definitely uses
less CPU time than TCP/IP.  But PUP is a "dead" protocol, and I do not
look forward to bringing it up on every random machine we need to talk to.

At the moment, it looks to me like TCP is great for personal machines, 
where a few percent of the CPU isn't much problem, but it isn't
practical for large-scale timesharing.  I am beginning to think that our
computer center has the right idea with the terminal switch that I have
been considering to be a white elephant, or that we should be looking
into technology like the Bridge systems, where they implement TCP in a
little black box, and give you RS232 that you can bring in through a
normal front end (or even as a DH11 lookalike plugged directly into the

Am I missing something?
Date:      Sat, 19 Jan 85 04:38:09 MST
From:      lepreau@utah-cs (Jay Lepreau)
Cc:        remarks@rutgers.ARPA, tcp-ip@sri-nic.ARPA
Subject:   Re:  CPU use by TCP/IP
	Oink oink!
	That is the sound that data makes as it travels through
	4.2BSD's pseudo-tty driver.

That's the beginning of an old bug report and fix from Brian Thompson
at Toronto.  It seems almost certain that you (or pyramid and sun)
haven't fixed that bug in vanilla 4.2.  A simple fix increases
throughput, and lessens cpu load, by a factor of at least 6.  It was on
Usenet and unix-wizards: contact me if you need it. 

But that doesn't solve the problem by any means.  As dpk says, it's not
so much TCP/IP, it's much more the pty's and daemons, at least on 4.2.
The UCB group's performance paper at last year's USENIX estimates that
remote terminal users present thrice the load of local ones, mostly from
the extra procs, ptys, and context switches.  (In 4.2 you can't measure
the actual TCP/IP overhead from the process time, as most of the work is
done at software interrupt time, likely not associated with the process
of interest.)

There's no real need to go thru all those internal interfaces and
processes and context switches, and there are people working on cleaning
it all up (in 4.2).  It would simplify the kernel code immensely, too.
At that time I might feel comfortable with getting rid of RS232 lines
on our machines.  I have also heard of other networking schemes stuffed
into 4.1 which went direct to the 'tty' and avoided the daemons.  But
meanwhile I sure would hang on to those DH's and port selectors!

Re your Tops-20 figures: offhand, I don't think that 1.5% is so
unreasonable (although on a 20 local terminals presumably compare
much more favorably because they are offloaded to a frontend).  Those
lines are "never" going to be spewing full speed simultaneously. Knowing
how slowly the frontend RSX20F spits stuff out, it could be using almost
that much itself (but I'm no 20 wizard).  Since the 20 currently offloads
local terminal processing, maybe this does speak for offloading the
networking onto something that plugs into your bus.

In general, tcp IS kind of heavy for local networks.... but it's not the
main problem at this point (on 4.2 at least).
Date:      Sat, 19 Jan 85 4:17:18 EST
From:      Doug Kingston <dpk@BRL-TGR.ARPA>
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA, remarks@RUTGERS.ARPA
Subject:   Re:  CPU use by TCP/IP
The basic problem you have been looking at is not so much
TCP/IP as it is all the single character packets being 
thrown around in your machines by the telnetd processes.
In a regular connection you have:

	TERMINAL -- Interface -- OS -- Process

For a typical telnetd implementation you have:

	TERMINAL -- NET -- Interface -- OS -- Daemon --

		-- OS -- PseudoInterface -- OS -- Process

This is an implementation issue.  There are similar problems
with the MPX Blit support code for 4.2BSD.  This is an area
that would benefit greatly from some development effort.

Date:      Sat 19 Jan 85 11:07:57-PST
From:      David L. Kashtan <KASHTAN@SRI-IU>
To:        hedrick@RUTGERS.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  CPU use by TCP/IP
When I ported 4.2BSD TCP/IP to VAX/VMS (yes, it runs in the VMS kernel in
exactly the same way it runs in the UNIX kernel) it was necessary for me
to implement something called a Mini-Process that is nothing more than a
linked list of procedures to call and a tiny bit of state information.
A very fast (~20usec) scheduler was then implemented that ran off one of
the network interrupts (NETISR_SCHEDULE) so that the UNIX code could
think it had something in the way of a user process manipulating the data,
while VMS still thought it was a device driver (so that the asynchronous
nature of VMS I/O could be maintained).  Sleep/Wakeup operated on these
mini-processes so that the original UNIX code could be used.

With such a fast process scheduler implemented, I decided it would be
really interesting to place the incoming TELNET service in the kernel.
One miniprocess awaits network data, runs the TELNET protocol and sends
the characters directly to the VMS terminal driver.  Another miniprocess
awaits data from the VMS terminal driver, places it in MBUFs and queues
the mbufs to the TCP socket -- all in the kernel.  I was VERY surprised
to find that the overhead for a TCP telnet connection was now almost exactly
the same as overhead for hardwired DZ lines (and slightly greater than
DMF lines).

This same technique could be used in the UNIX kernel.  The mini-process
could feed the telnet stream directly into the TTY line discipline.
(Greg Chesson at Silicon Graphics has done this for his XNS remote
login service and it works extremely well).
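The mini-process idea above can be restated as a toy sketch in modern Python; all names here are invented for illustration, and the real version was a linked list of procedures plus a little state, scheduled off a network interrupt inside the VMS kernel.

```python
from collections import deque

class MiniScheduler:
    """Toy mini-process scheduler: runnable (procedure, state) pairs,
    plus sleep/wakeup on named channels, as described above."""

    def __init__(self):
        self.run_queue = deque()   # (procedure, state) ready to run
        self.sleeping = {}         # channel -> [(procedure, state), ...]

    def spawn(self, proc, state=None):
        self.run_queue.append((proc, state))

    def sleep_on(self, channel, proc, state=None):
        self.sleeping.setdefault(channel, []).append((proc, state))

    def wakeup(self, channel):
        # move everything sleeping on this channel to the run queue
        self.run_queue.extend(self.sleeping.pop(channel, []))

    def run(self):
        while self.run_queue:
            proc, state = self.run_queue.popleft()
            proc(self, state)
```

In the scheme described, one mini-process would sleep on "network data arrived" and feed characters to the terminal driver when woken, while another would do the reverse direction.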
Date:      19 Jan 85 14:25:45 EST (Sat)
From:      Gary Delp <delp@udel-eecis1.delaware>
To:        tcp-ip@bboards.delaware
Subject:   Fin-wait-1,2 states
Sorry if this has been handled long ago, but I have a state question.
On page 73 of Sept 1981 RFC 793 the specification is talking about
what to do with incomming segments.
Lots of checks have already been made, and now it is time to check the
acknowledgement number and update things acordingly.
The question I have relates to the states FIN_WAIT_1 and FIN_WAIT_2
It seems that the only way to get to FIN_WAIT_2 is from FIN_WAIT_1, by means
of the check-- has my FIN been acked?  If this is true, then move to FIN_WAIT_2.
The next check is for the FIN_WAIT_2 state, and says to check if the
retransmission queue is empty. If so then tell the Close call that it is ok.

Now, the test I am using for both of these checks is the same: is
SND.UNA+1 = SND.NXT?  Do I need to make both tests?
Obviously not, if this is what I am testing.  What am I missing?

-Gary Delp
Date:      19 Jan 85 23:21:35 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA, bein@SRI-UNIX.ARPA, clynn@BBNA.ARPA
Subject:   possible missing mail
We had a disk crash this morning.  Any mail sent to me on Friday or
Saturday should be presumed to have been lost.
Date:      Sun, 20 Jan 85 01:53 EST
From:      Mike StJohns <StJohns@MIT-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        delp@UDEL-EE.ARPA
If your fin was acked AND you didn't receive a fin at the same time,
    MOVE to FIN_WAIT_2,

ELSE if your fin was acked AND you DID receive a fin from the other side
in the same segment, move to TIME_WAIT

(That's from FIN_WAIT_1 state...  you are correct, the only way to get
 to FIN_WAIT_2 is from FIN_WAIT_1)

If you reach the FIN_WAIT_2 state, your retransmission queue must be
empty 'cause everything up to your fin has been acked.

(From MIL-STD1778)

if (sv.send_finflag = TRUE) ; Did I send a fin then
  if ((from_NET.seg.ack_flag = TRUE) and
      (from_NET.seg.ack_num = sv.send_next))  ; Did this segment have an ack
  then return (YES)
  else return (NO)

This is from the function FIN ACK'd?, page 122 of the 12 August 83
draft.  I suggest getting a copy of MIL-STD 1778; it has lots of nice
state tables and pseudo code on implementing TCP.  Much of the stuff
is a lot clearer.  Mike
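The rules above condense into a small transition function; Python is used here only as executable pseudocode, state names follow RFC 793, and the CLOSING case (not part of the original question) is included for completeness.

```python
def next_state(state, fin_acked, fin_received):
    """Simplified close-side transitions from RFC 793's state diagram.
    fin_acked is the "FIN ACK'd?" test quoted above (our FIN was sent
    and this segment acks SND.NXT); fin_received means this segment
    carried the peer's FIN."""
    if state == "FIN_WAIT_1":
        if fin_acked and fin_received:
            return "TIME_WAIT"   # both at once: skip FIN_WAIT_2
        if fin_acked:
            return "FIN_WAIT_2"
        if fin_received:
            return "CLOSING"     # their FIN arrived, ours not yet acked
    if state == "FIN_WAIT_2" and fin_received:
        return "TIME_WAIT"
    return state
```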
Date:      20 Jan 85 21:54:14 PST (Sun)
From:      karels%ucbarpa@UCB-VAX
To:        tcp-ip@sri-nic.ARPA
Cc:        HEDRICK@rutgers.ARPA, remarks@rutgers.ARPA
Subject:   Re: CPU use by TCP/IP
I believe that it is possible to get fairly good performance with
network terminal traffic on Unix by interfacing the network directly to
the terminal handler, as described by Jay Lepreau and David Kashtan.
This involves significant changes to the network and protocol
interfaces, but is worth doing for this and other reasons.  I hope to
start some of this work sometime this year.  Something similar has been
done by Dennis Ritchie with Datakit.

However, I think that it is possible to go a step farther.  By using a
network terminal concentrator as a front end processor, it should be
possible to reduce the number of packets that the host has to deal with
considerably.  This would require a specialized protocol between the
terminal concentrator and the host, which would allow local echoing and
packetization when in line mode.  Synchronization when switching
between modes would be the hardest part of designing the protocol.
Multiplexing separate terminal lines in a single "connection" and
within packets should be fairly straightforward.

Date:      21 Jan 1985 0016-EST (Monday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        tcp-ip@sri-nic.ARPA
Subject:   sorting host table entries
Despite messages from Jon Postel to the contrary, several hosts
(classes of hosts?) are still sorting host entries. The incoming IMP
link to Purdue has been flaky lately, but our connection via a gateway
to net 128.10 has been working just fine. Nevertheless, when the IMP
connection flakes out, mail drops off; when it is reset, mail floods
in. Offenders that I have noticed are USC-ISID, USC-ISIF, SRI-NIC,
MIT-MC, and SUMEX-AIM. There are probably more.

I'd appreciate it if those responsible for these hosts would fix their
code, so that our mail would continue to flow unimpeded!

Date:      21 Jan 1985 0025-EST (Monday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        remarks@RUTGERS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: CPU use by TCP/IP
I've thought about this, too, and came to the conclusion that using TCP
with Telnet is not a good solution. But that isn't to say that the
Internet protocols don't offer a solution.

I think that we shouldn't be looking to TCP, but to UDP, for our
solution. TCP is highly engineered (some might say over-engineered) for
reliability over long lossy links; the type of use we want for terminal
service is over highly reliable, small diameter local networks. 

By the same token, Telnet connections tend to have a lot of
per-character overhead, sending one or two data characters in a TCP
segment -- worst case is 40 bytes of overhead for one data character!
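That worst case works out as follows, assuming minimum 20-byte IP and TCP headers with no options (a back-of-the-envelope sketch, not a measurement):

```python
MIN_IP_HEADER = 20
MIN_TCP_HEADER = 20   # 40 bytes of header per segment, no options

def efficiency(data_bytes):
    """Fraction of each segment that is user data."""
    return data_bytes / (MIN_IP_HEADER + MIN_TCP_HEADER + data_bytes)

# one echoed character: 40 bytes of overhead for 1 byte of data
# a full 60-character line in one segment: data is 60% of the segment
```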

I've done a preliminary design of a UDP-based protocol that is very
simple, and concentrates the characters from several terminals to the
same host in a single packet. I haven't dealt with NVT issues yet, but
they shouldn't be terribly difficult. The processing of incoming
packets is small enough that it could be embedded in the kernel, and
look to the operating system like any other terminal device. This is
much like what I believe DECnet's LAT does, but LAT is limited to a
single ethernet, where this protocol could work through gateways and

Unfortunately, I don't have time or resources to actually build this
protocol; if someone does, I'd be thrilled to discuss it further.
Something like this might let us dump all our rs-232 lines into hosts.
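One hypothetical shape for such a concentrating protocol, sketched in Python: the framing here (a one-byte terminal-line number and a one-byte length per entry, so each entry carries at most 255 bytes) is invented purely for illustration and is not taken from the design described above.

```python
import struct

def pack_terminals(entries):
    """Pack (line_number, data) pairs bound for one host into a single
    payload.  line_number and len(data) must each fit in one byte."""
    payload = b""
    for line, data in entries:
        payload += struct.pack("!BB", line, len(data)) + data
    return payload

def unpack_terminals(payload):
    """Recover the (line_number, data) pairs from a packed payload."""
    entries, i = [], 0
    while i < len(payload):
        line, n = struct.unpack_from("!BB", payload, i)
        entries.append((line, payload[i + 2:i + 2 + n]))
        i += 2 + n
    return entries
```

A concentrator could hand one such payload to UDP per tick, amortizing a single 28-byte IP/UDP header over every active terminal.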

Date:      21 Jan 85 0146 EST
From:      Rudy.Nedved@CMU-CS-A.ARPA
To:        Christopher A Kent <cak@Purdue.ARPA>
Cc:        tcp-ip@sri-nic.ARPA
Subject:   Re: sorting host table entries

I thought you were supposed to use the locally connected net address
for a host and if that does not work then use the first address in
the list? The result is ARPANET sites will use the ARPANET address
and MILNET and other nets will use your subnet address (unless they
are on your class B network).

If the class B address is better then why do you still have the
ARPANET address? Does the reason exist for users not on PURDUE? Maybe
PURDUE tries to use the address which gives it the optimal transfer
performance....using the ARPANET address to talk to ARPANET sites and
the class B address to talk to class B sites...

Maybe the problem is instead that mailers are too stupid to iterate
over the list of addresses before giving up?

Or maybe gateways should be consulted if they can get to a given

Is something clearly wrong or is this a somewhat helpful improvement
for some cases?

Date:      Monday, 21 Jan 1985 09:53-PST
From:      imagen!
To:        shasta!tcp-ip@sri-nic
Subject:   Re: Telnet efficiency

The problems that people have mentioned with 4.2's scheme of putting
telnet in a user process are very significant.  There is another
problem that might be causing trouble.

One common problem with Telnet/remote-echo is that many TCP
implementations effectively send TWO acks per packet, one to ack the
tcp data and one that contains the user-level ack (the echoed
character).  I believe that this problem has been fixed for the Tops-20
tcp-ip (although it tends to creep back -- someone should check).  Our
4.2 unix also avoids the problem, but I've seen some that do not.

If both the local and foreign tcp have this bug, you really lose when
you type a sequence of characters at telnet.  In this case, one
character sent and echoed at the telnet level requires 4 tcp-level
packets -- one to carry the character, one for its tcp-level ack, one
to carry the echo, and one for the echo's tcp-level ack.

Many of the processing costs for a protocol are per-packet.  I believe
that per-packet costs dominate the cost of a telnet connection.  If
your server sends double the required number of packets, then the cost
of running it will be close to double what it should be.  It is worse
still if both server and client TCPs have the same bug.

The canonical solution to this problem is to send TCP acks only after a
short dally, giving time for the echoed data to be turned around.
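In the idealized cases described above, the per-keystroke segment counts are easy to tally (a sketch only; real traces vary with timing):

```python
def segments_per_keystroke(acks_piggybacked):
    """4 segments without the dally (char, ack, echo, ack of echo);
    2 when each ack rides in the echo or the next keystroke."""
    return 2 if acks_piggybacked else 4
```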

[Plug:] If this general sort of problem interests you, then my master's
thesis might too -- An Argument for Soft Layering of Protocols, MIT Lab
for Computer Science, 1983.

- Geof
Date:      Mon 21 Jan 85 10:49:29-PST
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
To:        cak@PURDUE.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: sorting host table entries
The "sorting" that SUMEX-AIM does is simply to prefer the network 10
address over the net 128.10 address, simply because SUMEX-AIM is on
network 10 and so doesn't need to use a gateway.  I wasn't aware that
this kind of sorting was banned.  If so, that would mean that all of
our internal Stanford traffic to our multi-homed hosts (e.g. SAIL,
Score, SUMEX) would have to go through the Stanford gateway instead
of the local network because the ARPANET is listed first!!!

Come on, this isn't realistic.  A host should be allowed to prefer
addresses on a directly connected network over the NIC's ordering!
Why doesn't Purdue flush its losing address from the NIC table and
put it back in later?
Date:      Mon 21 Jan 85 11:22:11-PST
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
To:        cak@PURDUE.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: sorting host table entries
     The problem is that you are getting more and more into the
situation where the algorithm for host selection becomes more
and more hairy and less and less specifiable for the general
case.  There are *lots* of sites out there which don't have a
system programmer.

     Consider this.  You're a Milnet site, and want to talk to
SRI-NIC.  But its ARPANET address is listed FIRST.  Duh.  Do we
use the Milnet address because it's the same network as the one we
are on, or do we use the ARPANET address because it is listed first?
Or do we wire in special knowledge of the NIC?

     It is one thing to sort addresses in numerical order (as
some hosts were doing).  I think it is quite another to try to
optimize network connectivity by using a better pathed address.

-- Mark --
Date:      21 Jan 1985 09:47:10 EST
Subject:   Re: CPU use by TCP/IP
In response to the message sent  19 Jan 85 01:21:40 EST from HEDRICK@RUTGERS.ARPA


As long as we continue to use character-at-a-time access for the TOPS-20s
and Unices, we have to accept the fantastic overheads required. My extensive
instrumentation in our fuzzballs confirms your observation that interactive
TCP connections are not cheap. I have also found that a little care in
packetization (using short timeouts to encourage longer packets) goes a very
long way toward reducing the overhead. What goes even further is operation
in line-at-a-time mode. My personal belief is that the stunt box (TCP
concentrator?) should go closer to the terminals than the host and
that it should do the echoing and other fancy dances. Using the stunt
box as a reverse-TAC will probably move the overhead to the serial-multiplexor
interrupt routine.

Date:      Mon 21 Jan 85 13:12:07-PST
From:      Greg Satz <SATZ@SU-SIERRA.ARPA>
Subject:   Re: UDP jumbograms
Would someone be so kind as to offer a status report of RDP. Last I heard,
it was in the testing phase.
Date:      21 Jan 1985 11:00:01 EST
Subject:   Re: UDP jumbograms
In response to the message sent   Thu, 17 Jan 85 20:56 EST from Margulies@CISL-SERVICE-MULTICS.ARPA


By default, hosts must be able to reassemble 'grams up to 576 octets (total)
in length.  There is no mechanism in UDP (unlike TCP, with its maximum
segment size option) specifically designed to change this.  Thus, any
specification of a maximum length (576, 512 or otherwise) must
necessarily be part of the particular protocol specification.
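In concrete terms, assuming an option-less 20-octet IP header, the 576-octet reassembly floor leaves the following room for UDP data. A back-of-the-envelope sketch, not anything the spec states in this form:

```c
/* Octet budget under the 576-octet minimum reassembly requirement.
   Assumes an IP header with no options; options shrink the payload
   further. */
enum {
    MIN_REASSEMBLY = 576,  /* every host must reassemble at least this */
    IP_HEADER      = 20,   /* without options */
    UDP_HEADER     = 8
};

/* Largest UDP payload a sender can use and still be sure any
   conforming host can reassemble the datagram. */
int max_safe_udp_payload(void) {
    return MIN_REASSEMBLY - IP_HEADER - UDP_HEADER;  /* 548 octets */
}
```

This is presumably why protocol specs quote limits like 512: a round number comfortably under the 548-octet ceiling.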

Date:      21 Jan 1985 1407-EST (Monday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        Mark Crispin <MRC@SU-SCORE.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: sorting host table entries
	From: Mark Crispin <MRC@SU-SCORE>

	[T]hat would mean that all of our internal Stanford traffic to our
	multi-homed hosts (e.g. SAIL, Score, SUMEX) would have to go through
	the Stanford gateway instead of the local network because the ARPANET
	is listed first!!!

Don't be so dramatic -- that isn't necessarily so. We use our local net
addresses for internal mail, but we respect the wishes of other hosts,
as expressed in the host table, too.

	Come on, this isn't realistic.  A host should be allowed to prefer
	addresses on a directly connected network over the NIC's ordering!

As long as it doesn't affect connectivity. We have a reason for putting
our net 10. address second; it's less reliable.

	Why doesn't Purdue flush its losing address from the NIC table and
	put it back in later.

Because the IMP interface seems to go catatonic every other day or so,
and all it takes is for someone to notice and tweak the cable. Getting
something done through the NIC takes weeks.

The ultimate solution here should be that mailers should be clever
enough to try all of a host's addresses before giving up, not just one.
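That "try them all" loop is simple to sketch. The try_connect stand-in below is hypothetical; it just simulates a flaky net 10 interface so the loop has something to fall through:

```c
#include <string.h>

/* Hypothetical stand-in for a real TCP open: here, addresses on
   net 10 simulate the flaky interface and always fail. */
static int try_connect(const char *addr) {
    return strncmp(addr, "10.", 3) == 0 ? -1 : 0;
}

/* Honor the host table's ordering, but keep going on failure instead
   of giving up after the first address.  Returns the index of the
   address that worked, or -1 if every one failed. */
int connect_any(const char *addrs[], int n) {
    int i;
    for (i = 0; i < n; i++)
        if (try_connect(addrs[i]) == 0)
            return i;
    return -1;
}
```

The point is that a dead first address costs one failed attempt, not a bounced mail.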

In general, I'm against anything that adversely affects mail on my
machine. Mark, I seem to remember hearing similar sentiments from you
on various occasions -- it shouldn't be so hard for you to understand
my position.

Date:      Monday, 21 Jan 85 17:56:45 EST
From:      rmcqueen(Robert C McQueen)%sitvxa.BITNET@WISCVM.ARPA
To: ,
Subject:   TCP/IP for UNIX 5.2
Is there an implementation of TCP/IP for UNIX 5.2 systems?

Robert McQueen
Date:      Mon 21 Jan 85 22:17:04-PST
From:      Mark Crispin <MRC@SU-SCORE.ARPA>
To:        james@GYRE.ARPA
Cc:        cak@PURDUE.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: sorting host table entries
In other words, have yet another locally-maintained file which must
be frequently edited and maintained by a system programmer on a high
salary.  What a bad idea.

Let's not forget having to modify the operating system and/or utility
software to support this new table.  Let's not forget the expense of
merging in all those operating system changes every time the vendor
releases a new version.

Basic Internet support should work on a site WITHOUT a system programmer.
Date:      Mon, 21 Jan 85 22:35:54 est
From:      James O'Toole <james@gyre>
To:        mrc@su-score
Cc:        cak@purdue, tcp-ip@nic
Subject:   Re: sorting host table entries
How you sort your network addresses is up to you, but you ought to
assume that if someone else gives you their network addresses
in a particular order they want you to use them in that order.

My solution to this problem is to keep a "loctab.txt" file similar
to the "nictab.txt" file, where I list addressing information which
takes priority over nic-supplied info, for umdnet hosts.  For stanford,
that means listing all your hosts in your own local file with local
network addresses first.  It seems to work well here.

Date:      Tue, 22 Jan 85 12:06:34 CST
From:      Paul Milazzo <milazzo@rice.ARPA>
To:        Mark Crispin <MRC@SU-SCORE.ARPA>
Subject:   Re: sorting host table entries
Back when Rice was running the BBN TCP for 4.1bsd UNIX, I modified
BBN's hashed host table generation software to accept a list of
"preferred" network numbers, and sort a host's addresses according to
the specified preferences.  Addresses on networks not mentioned in the
list were left alone.

The preference file was very short, containing only the list of host
interfaces in order of decreasing speed, and certain other network
numbers one hop away from PDN (which we were using at the time).  The
file was only modified when a new interface was added, or a particular
Internet network administrator suggested using an alternate address for
hosts on that network.  Even the most novice of host administrators
could maintain such a file, and the performance and economy (if you
have to pay for packets) gains are significant.  In addition, Chris
Kent's request to use PURDUE.ARPA's net 128.10 address was easily
accommodated.

Sadly, I never finished my attempt to do something similar with the
4.2bsd host table, and so we've now returned to the time-honored method
of stuffing a huge list of special cases in front of the NIC host
table.  Sigh...

				Paul G. Milazzo <milazzo@rice.ARPA>
				Dept. of Computer Science
				Rice University, Houston, TX
Date:      22 Jan 1985 11:41-EST
To:        cak@PURDUE.ARPA
Subject:   Re: CPU use by TCP/IP

your solution is reminiscent of the interior of Tymnet.

I take it you would trade off the end/end reliability of TCP for
the much lower overhead of muxed UDP packets - relying on the LAN
to be reliable?  I had the impression that CSMA-CD type local
area nets have a non-negligible collision rate which would make
itself apparent in the form of lost characters - or would there
be some kind of retransmission upon detection of collision -
hoping that the rate of undetected collision will be negligible?

Date:      22 Jan 1985 14:51-PST
From:      the tty of Geoffrey S. Goodfellow <Geoff@SRI-CSL.ARPA>
To:        cak@PURDUE
Subject:   Re: CPU use by TCP/IP
I find the notion of `accepting' occasional errors like one
sometimes does from noise on a dial up line appalling, to say the
least.

I have suffered some of my worst dial up woes when in the Los
Angeles area or other parts of the country which are served by
Independent Telephone Co's who think likewise about `quality'.
Insertion of spurious characters or mangling of data in my input
and output streams is no fun at all -- especially when one of
them causes something like your screen to retransmit.  As a
hacker, I know how to handle such a situation, but for USERS, no
such luck.

I think the correct direction of effort should be applied to
decreasing the number of single character packets which are sent
over a TELNET connection.  Something like a muchly revamped RCTE.
Who knows, it should even give us better response time by echoing
and performing display editing locally, while maintaining the
current quality and integrity we've been spoiled with all these
years.

Date:      22 Jan 1985 1152-EST (Tuesday)
From:      Christopher A Kent <cak@Purdue.ARPA>
Subject:   Re: CPU use by TCP/IP
My analogy for the protocol is dialup rs-232 lines; if an occasional
packet is lost, it's like occasional line noise on a dialup; we
tolerate it, redraw the screen inside our editors, etc. It's something
that you will notice immediately. The protocol wouldn't be used for
file transfer, so absolute reliability may well not be necessary.

On the other hand, the packet damage rate on a busy Ethernet may well
be non-negligible, and this choice will have to be rethought. I can't
know without building or simulating.

Date:      Tue 22 Jan 85 12:02:49-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
Subject:   Re: sorting host table entries
	Maybe I'm stupid to start this up again, but people seem to
have been thinking exclusively about a model of name table usage which
I consider obsolescent; i.e. one where each host has a (complete) name
table. To begin with, I think that most hosts should rely on name
servers for name->address bindings; the cost and hassle of updating
those name tables in every 4.2 system at a site is ridiculous. There
are always problems with some host not being able to access some other
host because they haven't gotten the latest release of the name table.
Fargh! Silliness! The V6 PDP11 Unix software MIT did had no local host
table, but used name servers for everything, and it worked well and
with no maintenance. Secondly, with the domain name system coming in,
you will have to talk to name servers to resolve non-local names
anyway. Seems to me people should start junking local name tables.

	I also agree with Mark that the correct way to handle multi
homed hosts is to use the address that is closest to you. THIS
DECISION SHOULD BE DONE BY A GATEWAY! Gateways should be the only
things in the system that know about connectivity. Normal hosts should
not include gateway functions because that functionality is likely to
change; i.e. EGP is not the final word. If you want to spend your life
keeping up with the changes in gateway algorithms, fine, otherwise
it's a bad idea.
	(Parenthetically, I think it was a poor move for Berkeley to
put gateway functionality in 4.2. Lots of people are cheaping out and
using 4.2 systems for gateways, but UNIXes make lousy packet switches,
in addition to which Berkeley does not seem to talk to the rest of the
IP gateway group. This is a problem since most of the accumulated
wisdom of that group is not written down. In general, I would resist
making doubly homed hosts gateways. It will only get to be more hassle
as time goes on.)
	As far as the mechanics go, either the name server could
figure out which path is best, or the client could. The first is
difficult, since the server may be somewhere else in the network, and
thus unable to ask gateways local to the requestor which is the best
route anywhere. The problem with the latter is that currently the Name
Resolution protocol only returns a single address. Finally, there's
currently no defined way to get a gateway to do that (time for a new
ICMP option). However, enough on the 'right way' to solve this
problem.

	It seems to me that if you have a flaky network interface
and you leave that address in the host table it's your fault if some
poor host decides to use that address and loses. To suggest that
lots of other hosts make a local adjustment because of problems
at a single site elsewhere seems to me a very poor way to attack
the problem.

Date:      22 Jan 1985 1236-EST (Tuesday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        J. Noel Chiappa <JNC@MIT-XX.ARPA>
Subject:   Re: sorting host table entries

I couldn't agree more heartily with the sentiment that everyone should
be using name/domain servers. If that were the case today, I would
gladly remove our flaky interface address from the list of addresses for
PURDUE; but the current paradigm of sending mail to folks at the NIC
has about a 10 day turnaround, both to get the address out and then
again to get it in. And we should have the problem licked by then.

Now, I find this a bit confusing:

	I also agree with Mark that the correct way to handle multi homed hosts
	is to use the address that is closest to you. THIS DECISION SHOULD BE
	DONE BY A GATEWAY! Gateways should be the only things in the system
	that know about connectivity. 

But if you're directly connected to one of the networks that is
involved on the multi-homed host, there is no way for a gateway to get
into the act. So the scheme that Mark (and others) is espousing still
fails. (Probably the right answer is to disallow multi-homed hosts, and
only allow gateways to have two addresses ... or if IP had the concept
of unique machine identifier, rather than just network addresses, this
problem wouldn't exist at all.)

Date:      Tue 22 Jan 85 14:55:45-MST
From:      Randy Frank <FRANK@UTAH-20.ARPA>
Subject:   Re: CPU use by TCP/IP
Even below the IP level most Ethernet low level drivers typically re-transmit
when the hardware informs the driver of a collision.  Thus, the concern
would be how many collisions weren't detected, which I would suspect to be
small, or potentially packets that were thrown away by the receiving host
due to lack of buffer space, etc.

It's interesting that this same debate must be exactly what DEC went through
with their Ethernet terminal server.  As I understand it, the DEC device
was originally supposed to talk full DECNET (NSP on top of Ethernet), and
DEC eventually came up with LAT, a very simple protocol designed for terminal
traffic local to an Ethernet precisely because they realized that full DECNET
was too great a penalty for large numbers of networked terminals.

I think users would certainly tolerate error rates of the same magnitude as
they experience on standard async connections in return for a protocol
with low cost and latency.
Date:      22 Jan 1985 16:26-PST
From:      Joel Goldberger <JGoldberger@USC-ISIB.ARPA>
Subject:   Right solution, Wrong problem
I applaud any and all efforts being made to improve the performance of
TCP implementations, but believe that concentrating on improved TELNET
response is misguided.  As everyone begins installing personal
workstations and using mainframes as servers the need for TELNETting
anywhere should decrease dramatically -- we have started seeing this at
ISI already.  The use these workstations make of servers requires either
new protocols or in some cases the conversion of existing protocols
(Leaf and Courier) to TCP.  All of these facilities could profit from
greater efficiency in the underlying TCP, but the techniques used to
improve the throughput for TELNET-like activity may not have a
significant effect on these block-oriented activties.  It would be
wonderful if one could optimize for both character and block activity,
but it doesn't seem terribly likely.

- Joel Goldberger -
Date:      Tue 22 Jan 85 13:56:42-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        cak@PURDUE.ARPA
Subject:   Re: sorting host table entries
	You are correct in that if the requestor is directly connected
to one of the nets the target is on then there's no way for the
gateway to say anything useful. However, fixing this is a one line
check in the algorithm. (Using the directly connected net can be a
problem with nets like the ARPANet which don't fit into the node-arc
graph model of nets and gateways well, since the network does not have a
single 'cost' you can put into the routing algorithm. It should really
be considered as a set of links. But this is a different, and larger,
issue.)

	However, this does not refute the point that Mark brought up
that one site's optimal ordering in the host table (in terms of
contacting a given destination) is another guy's pessimal ordering;
the example of contacting SCORE from internal Stanford hosts is a good
one. There is no way you can rely on the ordering of names in a single
host table to do optimal routing for everyone; to have a separate host
table for each site (with local optimal routing information thrown in)
invites chaos. I don't really see any reasonable alternative to using
an algorithm, for which the scheme proposed sounds best: i.e. use the
address which is closest to you.
	While it is correct that it would be better if IP had unique
machine identifiers, this still would not fix the problem under
discussion. Presumably that machine id would map to a list of
network attachment points. You then have the original problem!
Which one do you pick?

Date:      22 Jan 85 17:08:17 PST (Tuesday)
From:      Kluger.osbunorth@XEROX.ARPA
To:        tcp-ip@SRI-NIC.ARPA
Cc:        Kluger.PA@XEROX.ARPA
Subject:   Re: CPU use by TCP/IP


I'm wondering how many IP hosts are connected to the Internet.

(Are the MilNet and ArpaNet part of the same IP Internet?)

Anyhow, I'm wondering what the number of computers (NOT terminals
connected to those computers) is that can send IP packets to each other.
1000? 4000? 10K?



Date:      22 Jan 85 17:48:36 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP/IP as a terminal server protocol
I would rather see work go into cleaning up existing TCP/IP
implementations, rather than adding yet another protocol.  The one great
advantage of TCP/IP is that there are a lot of implementations of it. If
we come up with an IP LAT, how long will it take to spread to that whole
set of machines?  Would I have to have two sets of terminal servers,
LAT and TCP/IP for machines that didn't have LAT?

Also, I am not yet convinced that cleaning up TCP code won't solve the
problem.  Clearly a full TCP will have more overhead than a specialized
protocol.  But will that overhead be enough that we care? Two people
claim to have measured the overhead of the Wollongong VMS implementation
and seen that it is no worse than DZ's.  I realize DZ's are no great
triumph, but they aren't that bad either.  (And presumably TCP will fix
the one area where they are the worst: outputting large amounts of data.
DZ's have to field interrupts on each character, whereas TCP doesn't.)
If TOPS-20 and Unix had similarly low overhead, I would be happy to use
TCP rather than a protocol which wouldn't go through gateways and might
drop data.  

If you want to do protocol development, I would rather see it go into
Telnet negotiations that would allow for echo in the front end on Unix
and DEC operating systems. That is, a system whereby a program could
specify what characters it is safe for the terminal server to handle and
what characters should cause a packet to be sent.  This would allow for
local echoing in many cases. This would probably give at least as much
of an improvement in  performance as LAT.  And it would have the
advantage that if a host didn't support them, we could still talk to it.
If LAT is implemented the way TCP is now (i.e. with 3 process
activations needed per character echoed), it will have exactly the same
problems.

Since my original message, I have been doing a lot of testing.  It is
clear that most of the slow response on TOPS-20 and Unix is because part
of the implementation is in separate processes.  I did some gimmicking of
TOPS-20 to force immediate activation of the Internet fork when a
packet arrived, and the response now feels like it is local most of the
time.  (I conjecture that the exceptions are times when the scheduler
decides it has seen enough of the Internet fork and does something
else.)  I have also been able to make improvements in overhead for
typing out files by some tuning changes that make TCP use larger
packets.  (These efforts are still in progress.)  Based on these tests,
I believe that an implementation that puts everything in the kernel
(e.g. the Wollongong one) should give quite good response.  I think
proper tuning should give reasonable results when typing out files.  And
a protocol to handle local echoing should fix things for normal typing
and echo (if indeed there is a problem remaining to be fixed after all
the tuning and cleanup is done).
Date:      Tue 22 Jan 85 18:21:11-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   [J. Noel Chiappa <JNC@MIT-XX.ARPA>: Re: TCP/IP as a terminal server protocol]
Mail-From: JNC created at 22-Jan-85 18:20:29
Date: Tue 22 Jan 85 18:20:29-EST
From: J. Noel Chiappa <JNC@MIT-XX.ARPA>
Subject: Re: TCP/IP as a terminal server protocol
In-Reply-To: Message from "Charles Hedrick <HEDRICK@RUTGERS.ARPA>" of Tue 22 Jan 85 17:48:36-EST

	Exactly. Building yet another protocol is the last thing we
need; fixing the one we've got is what we ought to be thinking about.
The problem often is a poor structure in the implementation of TCP,
not TCP itself. Using a different protocol with the same poor structure
would give results that are no better. (For example, the CHAOS protocol
implementation in TOPS-20 does not have to wake up a fork to process
'expected' data the way the TCP one does. It also has a much better
response and takes less of the CPU.)
	For more on this subject, see RFC817, "Modularity and
Efficiency in Protocol Implementation", also in the IP Protocol
Implementation guide.
Date:      22 Jan 85 23:03:44 PST (Tuesday)
From:      Kluger.osbunorth@XEROX.ARPA
Cc:        cak@PURDUE.ARPA, HEDRICK@RUTGERS.ARPA, remarks@RUTGERS.ARPA, tcp-ip@SRI-NIC.ARPA, Kluger.osbunorth@XEROX.ARPA
Subject:   Re: CPU use by TCP/IP


Collisions on a CSMA/CD LAN are a feature, not a bug. Collisions are a
part of the distributed channel access control algorithm.

Packets/Characters are not lost during a collision, the packet
transmission attempt is aborted, then retried. If 16 attempts (Ethernet
& 802.3) fail, then the entire packet is discarded (rare).
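The retry schedule behind those 16 attempts is truncated binary exponential backoff. A sketch in C; the slot counts follow the Ethernet/802.3 rule, while the rand() draw is only illustrative of the uniform choice a real interface makes in hardware:

```c
#include <stdlib.h>

#define MAX_ATTEMPTS 16  /* after 16 failed attempts the packet is dropped */
#define BACKOFF_CAP  10  /* the slot range stops doubling after 10 collisions */

/* After the nth collision (1-based), wait a random number of slot
   times drawn uniformly from 0 .. 2^min(n, 10) - 1. */
int backoff_slots(int nth_collision) {
    int k = nth_collision < BACKOFF_CAP ? nth_collision : BACKOFF_CAP;
    return rand() % (1 << k);
}

/* The transmit loop gives up (and discards the packet) past this point. */
int should_drop(int attempts) {
    return attempts >= MAX_ATTEMPTS;
}
```

The doubling range is what keeps a busy cable stable: repeated colliders spread themselves over an ever-wider window instead of recolliding in lockstep.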

A CSMA/CD net provides the usual datagram services to its service level
client--low overhead, best effort delivery, error detection not
correction, etc. The packet collision/retransmission stuff is hidden to
the client.

I second your point about the tradeoff between the overhead of a
reliable/errorless protocol and counting on the high reliability of a
CSMA/CD LAN's datagram delivery.

Personally, I want 100% of my traffic to be transported by
Reliable/Errorless protocols. But then again, people for years and years
have accepted unreliable asynch transports.

One study showed the native packet loss rate of well-designed Ethernet
components to be less than one packet in two million [Shoch]. Not bad,
but certainly not errorless.


[Shoch] J.F. Shoch and J.A. Hupp, "Measured Performance of an Ethernet
Local Network," CACM, vol. 23, pp 711-721, Dec, 1980.
Date:      Wed 23 Jan 85 01:18:53-PST
From:      David Roode <ROODE@SRI-NIC.ARPA>
To:        Kluger.osbunorth@XEROX.ARPA, CERF@USC-ISI.ARPA
Subject:   Re: CPU use by TCP/IP
Personally, I prefer driving on Freeways with perfect accident
records.  But, people for years and years have accepted unreliable
internal combustion engine transports (and spacing of less
than 100 meters between transport units).

Date:      Wed, 23 Jan 85 04:07:07 est
Subject:   Re: CPU use by TCP/IP
LAT is error-free, and should be very efficient for local terminal
service (it was designed to completely replace host async lines, and
designed around the characteristics of the ethernet).  While it doesn't
include features for internetwork routing, I doubt it would be very hard
to let it ride on top of IP, and since all traffic between any two given
hosts is multiplexed into a single virtual circuit, it should provide a
substantial improvement over TCP in any terminal server application.

Given that DEC already has or is working on TOPS-20, VMS, and UNIX
implementations, I would recommend that people take a good look at LAT
before designing something else.

- Chris
Date:      Wed, 23 Jan 85 07:30:47 est
From:      jqj%gvax@Cornell.ARPA (J Q Johnson)
To:        tcp-ip@sri-nic.ARPA
Subject:   Re: CPU use by TCP/IP
At Cornell we have been using PDP-11 based terminal concentrators for most 
of our async traffic for quite some time.  Some experience:

1/	We have experimented with an Ethernet terminal concentrator using
an unreliable protocol (basically, data from multiple lines muxed into an
IP packet).  It works fairly well -- efficiency is an order of magnitude
better than obtainable with that many telnet or rlogin connections to a
standard 4.2BSD.  Unfortunately, we find that even on a directly connected
and lightly loaded Ethernet (i.e. negligible collisions) we lose about 1 
packet in 1000, mostly because the backend machine is busy doing something 
else and doesn't get around to serving the Ethernet interface (3Com in this
case) in time.  A .1% rate is an unacceptable packet loss, so we have 
considered adding a reliable packet protocol, either RDP on IP or perhaps 
2/	LAT boxes are very nice.  Several VMS systems on the college
Ethernet use LAT for terminal concentration, and it works very well.  In
CS with multiple Unix machines, I am waiting anxiously for an implementation
of LAT for 4.2BSD.  Note, though, that LAT is not appropriate for use on
anything except a locally-connected Ethernet.  Even layered on IP it
would probably be a loser since it assumes you don't get gateway delays.

3/	One reason to prefer a muxed protocol over telnet/tcp/ip is that
it reduces the total Ethernet traffic.  My impression from various studies
is that a local Ethernet can support a thousand or so terminal
sessions using telnet-style protocols.  Given a reasonable
load of workstation traffic in addition, the number of telnet sessions
supportable probably drops to the low hundreds.  If you use 32-line terminal
concentrators and muxed data, the number of packets (which is pretty much
the only needed measure of Ethernet load) attributable to terminal
traffic drops by an order of magnitude.
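That order-of-magnitude claim is easy to sanity-check with assumed numbers (32 lines per concentrator, 2 keystrokes/second per active terminal, a 10 Hz line sweep; all of these are illustrative, not measurements):

```c
#define LINES_PER_MUX    32  /* assumed terminals per concentrator */
#define KEYSTROKES_PER_S 2   /* assumed typing rate per terminal */

/* Telnet-style: every keystroke costs one input packet plus one
   echo packet back from the host. */
int telnet_pkts_per_sec(int terminals) {
    return terminals * KEYSTROKES_PER_S * 2;
}

/* Muxed: the concentrator sweeps every line into one packet per tick,
   and the host's replies are batched the same way. */
int muxed_pkts_per_sec(int concentrators, int sweeps_per_sec) {
    return concentrators * sweeps_per_sec * 2;
}
```

With these numbers 32 telnet sessions generate 128 packets/second against 20 for one muxed concentrator; the exact ratio obviously depends on the typing rate and sweep frequency assumed.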

Date:      Wed, 23 Jan 85  7:37:11 EST
From:      Andrew Malis <malis@BBNCCS.ARPA>
Cc:        malis@BBNCCS.ARPA
Subject:   Re: CPU use by TCP/IP

For your own edification, you should ftp to SRI-NIC and grab a
copy of the file <NETINFO>HOSTS.TXT.  This contains a list of all
of the "officially registered" networks, gateways, and hosts in
the ARPA Internet, which includes the ARPANET and the MILNET.

As of this morning, the file contained 188 networks, 100
gateways, and 1158 hosts.

To answer your other question, the ARPANET and the MILNET are
both parts of the Internet.  At the moment, full Internet access
is allowed to the MILNET, but an administrative switch (not
currently in use) is available that would restrict MILNET access
to mail only, cutting off TELNET and FTP access.


Date:      23 Jan 1985 13:27:03 PST
Subject:   re: CPU use by TCP/IP  -- LAT


Can someone send me the spec for LAT to publish as an RFC?

Date:      23 Jan 85 10:40:11 EST (Wed)
From:      Mike Minnich <mminnich@udel-huey>
To:        J Q Johnson <>
Subject:   Re: CPU use by TCP/IP
As one of the implementors of the ethernet terminal concentrator
(based on an earlier, non-ethernet terminal concentrator from Cornell),
I'd like to add some further comments:

The lack of any real protocol (other than that of multiplexing 
host-to-concentrator data in single ethernet packets) might come as 
a surprise.  However, at the time, we were faced with developing 
and putting into use an alternative terminal switch in about a
month's time --- not a realistic time frame for the design, 
implementation, and testing of a new protocol.  

We also wanted to resolve how to support multiple, simultaneously
active terminal to host connections.  Telnetting/Rlogging from
host to host was not felt to be a viable solution, for all of the
reasons that have been brought out in this discussion.  The 
Cornell terminal switch had this capability.

Such an implementation has worked extremely well for us.  Currently,
the switch supports 80 terminals, with three 4.2BSD hosts on the
back end.  

Things it does well:

1)	Decreases interrupt load on the hosts due to terminal I/O
	by an order of magnitude.  Typically, switch-to-host packets
	contain 2-5 characters, while host-to-switch packets are
	often in the 200-500 byte range.

2)	Decreases CPU usage on the hosts due to IP/TCP/TELNET by 
	eliminating the need to telnet/rlogin from one host to another.

3)	Provides an efficient network <--> terminal handler interface.
	Because there is very little protocol overhead, ethernet
	packets are simply demultiplexed and the resulting character
	data is injected directly into the terminal handler.  On
	output, however, all the tty queues are swept to make as much
	use of the available room in the packet as possible.  This
	buys a lot, and is a big win over tty-at-a-time output, be it
	DZ or DH.   This architecture is somewhat comparable to
	David Kashtan's implementation of TELNET under VMS/Eunice.

Things it does not do:

1)	As mentioned, there exists no reliable protocol upon which this
	rides.  It needs one.  From my experiences, TCP is not the
	appropriate choice;  something along the lines of RDP would
	do the job much more efficiently.  The issue of whether or not
	the protocol should ride on top of IP is open to debate.
	I tend to agree with J. Q. in that one of the toughest problems
	to be solved here is the potentially long queuing delays.
	Clearly, a reliable protocol is needed first.  Just putting
	the current implementation on top of IP would be silly.

2)	It does not do 'cooked' mode processing.  This is also open to
	debate.  A good reason for not doing it is that then it doesn't
	matter what kind of machine the switch is talking to
	(VMS/UNIX/etc).  This also allows the switch to support 
	more terminals.

3)	It does not do windows.

Further thoughts?


Date:      Wednesday, 23 January 1985 11:02:51 EST
Subject:   Re: CPU use by TCP/IP
Perhaps to muddy the waters a bit ...

We've been using PDP-11 (PUP) based "conventional" terminal
concentrators on our local 3Mb ethernet at CMU for a few years.  I
happen to be typing this from one at the moment and just looked at the
current connection distribution on my concentrator.  Of the 13 (out of
24) terminals in use, there were 3 connections each in use to two of
our systems, and 1 single connection to each of 7 of our other
systems.

In a connection distribution such as this, a multiplexed protocol might
not actually reduce the ethernet load all that much.  Perhaps this
effect could be mitigated by strategically arranging concentrators and
terminals so that people likely to work on the same systems shared the
same concentrator.  At least in our environment, such an approach
probably wouldn't be practical, though.

				- Mike Accetta
Date:      23 Jan 85 1155 EST (Wednesday)
From:      don.provan@CMU-CS-A.ARPA
Subject:   Re: CPU use by TCP/IP
I agree with Mike that many situations won't be helped by multiplexing
terminal traffic.  I think it would be quite rare for multiplexing to
help at all outside of a single network.  In most cases, it won't
even help outside a single ether cable.

I seem to remember from last DECUS that DEC was keeping the LAT
protocol under wraps, so we may have trouble "adopting" it.  I could
be thinking of something else, so feel free to correct me.

The important point, though, is that LAT like things are going to be
very important in LANs.  Some thought has to be given to the method
of connecting a LAT terminal out into another network efficiently.
The "best" method would seem to be a terminal gateway, a gateway that
can unpack terminal traffic and fire it out using TelNet or any
improved non-multiplexed protocol.  Generally such high level gateways
are pooh-poohed, but with something "as simple" as terminal traffic,
perhaps it wouldn't be too difficult.  It would certainly be the most
direct method.
Date:      Wed 23 Jan 85 16:26:39-PST
From:      Tony Holland <tony@SRI-KL.ARPA>
To:        srinet@SRI-KL.ARPA
Subject:   re:help needed
After a couple of calls to 3-Com they have admitted that their ethernet
PC interface will not work with the Interlan transceiver and that they have
no plans to correct the problem.  I will not need the loan of the xceivers
as I am going to order the 3-Com part.

Just a warning to anyone out there planning to bring up a PC on SRInet:
get the 3-Com xceiver as well.

Thanks again for all of the responses I got.

Date:      23 Jan 1985 21:56 PST
From:      Art Berggreen <ART@ACC>
To:        tcp-ip@sri-nic
Cc:        info-unix@brl
Subject:   RE: TCP/IP for UNIX 5.2

> Is there an implementation of TCP/IP for UNIX 5.2 systems?

Try TWG (The Wollongong Group), they are porting the BSD 4.2
networking kernel into System V as a device driver.  The
Berkeley "Socket" operations are supported as library routines
which map into driver ioctls.

    TWG is at (415)962-7100

Date:      23 Jan 1985 2215-EST (Wednesday)
From:      Christopher A Kent <cak@Purdue.ARPA>
Subject:   re: CPU use by TCP/IP  -- LAT

DEC is keeping the protocol spec "Company Confidential" until further
notice.

Date:      24 Jan 1985 13:13:09 PST
Subject:   re: The Invisible LAT !

Great!  Here we are told about the greatest thing since sliced bread, and
instructed to investigate it at once; only one minor detail -- it's company
confidential!

Date: Wed, 23 Jan 85 04:07:07 est
Subject: Re: CPU use by TCP/IP


Given that DEC already has or is working on TOPS-20, VMS, and UNIX
implementations, I would recommend that people take a good look at LAT
before designing something else.

- Chris
Date: 23 Jan 1985 13:27:03 PST
Subject: re: CPU use by TCP/IP  -- LAT


Can someone send me the spec for LAT to publish as an RFC?

Date: Wed, 23 Jan 85 20:13:58 est
Subject: re: CPU use by TCP/IP  -- LAT

As far as I know it's still company confidential, and I'm bound by some
agreement not to give it out, but I'll be visiting DEC next week and I'll
see what the story is...

- Chris
Date: Thu 24 Jan 85 08:36:08-EST
Subject: re: CPU use by TCP/IP  -- LAT

dec is being a pain about it right now.  it is company confidential.  i will
see if i can get it released.  sounds sort of counter productive to me.

Date:      Thu, 24 Jan 85 16:18 PST
From:      Provan@LLL-MFE.ARPA
Subject:   milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
Last night (23-jan-85) around 5 p.m.  PST, weird things started happening
to connections I had opened from LLL-MFE to CMU-CS-A.  The symptoms were
lost packets, redirect messages, and destination (network) unreachable
messages.  I'm not sure, but I believe source quenches were showing up,
also.  As far as I could tell, no packets were actually getting through
to CMU-CS-A.  After 5 or 10 minutes at 5 p.m., I gave up.  When I
reestablished the connection later in the evening (9:30 p.m.), it was
fine for a while but then flaked out again.  I have used a connection
today extensively and had no problems.  Our prime arpa/milnet gateway is

Now I wouldn't stake my life on my IP/TCP implementation (which is
running at both LLL-MFE and CMU-CS-A), but I have tested redirecting in
the past, including redirecting to my secondary gateway, and it used to
work.  The only check I made was to see if my prime gateway had
redirected me to the secondary or some other gateway, which shows up as
my software trying this other gateway first until redirected back.  No
such change had occurred.

Can anyone give me a clue what was going on?  I'd like to know whether
it's some problem in my software or just something peculiar happened
and I was the only one to notice.
Date:      Thu, 24 Jan 85 09:37 IST
From:      Henry Nussbacher  <VSHANK%weizmann.BITNET@WISCVM.ARPA>
To:        <>
Subject:   Reliability of an Ether (CSMA/CD LAN)

Allow me to quote from CACM, July 1976 pp 395-403:
(Apologies to all who know this and take this as an axiom)

Ethernet: Distributed Packet Switching for Local Computer Networks
          by Robert Metcalfe and David Boggs

3.4 Reliability

An Ethernet is probabilistic.  Packets may be lost due to interference
with other packets, impulse noise on the Ether, an inactive receiver
at a packet's intended destination, or purposeful discard.  Protocols
used to communicate through an Ethernet must assume that packets will
be received correctly at intended destinations 'only with high
probability.'
End of CACM quote.

There is another paragraph after the one quoted above that discusses
the trade-offs between error-free operation and economy.  People
interested in this area should read over this article quite carefully.
There are many other interesting paragraphs that point out weaknesses
in Ethernet.

Henry Nussbacher
Weizmann Institute of Science
Rechovot, Israel
Date:      25 Jan 85 03:00:21 PST
To:        Provan@LLL-MFE.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
I don't have any answers, but I can add another instance of the same
sort of question.

Ages ago, I patched our mail gateway to printout a line of info whenever
it processed an ICMP redirect. That code is still there...

Just before the Christmas break, I noticed a blizzard of printout. It
looked like the gateways couldn't decide who I should be sending the
packets to. Here is a very small sample. I think it kept going like this
until we gave up trying to open the connection.

18:42:39 Important: BBN-MILNET-GW told us to send packets to
[] via [] = CSS-GATEWAY.
18:42:40 Important: CSS-GATEWAY told us to send packets to []
18:42:40 Important: BBN-MILNET-GW told us to send packets to
[] via [] = CSS-GATEWAY.
18:42:41 Important: CSS-GATEWAY told us to send packets to []
18:42:44 Important: BBN-MILNET-GW told us to send packets to
[] via [] = CSS-GATEWAY.
18:42:44 Important: CSS-GATEWAY told us to send packets to []
18:42:44 Important: BBN-MILNET-GW told us to send packets to
[] via [] = CSS-GATEWAY.
18:42:45 Important: CSS-GATEWAY told us to send packets to []
18:42:45 Important: BBN-MILNET-GW told us to send packets to
[] via [] = CSS-GATEWAY.
18:42:45 Important: CSS-GATEWAY told us to send packets to []

Date:      Friday, 25 Jan 1985 10:09-PST
From:      imagen!
To:        shasta!tcp-ip@SRI-NIC.ARPA
Subject:   Re: CPU use by TCP/IP

Everyone out there seems to believe that the real advantage of an
Ethernet terminal concentrator is that it decreases the number of
packets sent across the network, by combining the characters for many
terminal sessions.  This sounds reasonable.  If you want to add
reliability, however, I don't think you have to reinvent the wheel.
Why not define a terminal multiplexing protocol on top of TCP?

- Geof
Date:      25 Jan 1985 1320-PST (Friday)
From:      Jeff Mogul <mogul@Navajo>
To:        TCP-IP@SRI-NIC
Subject:   YASP
My comments on RFC932, "A Subnetwork Addressing Scheme":

Technically, there is not much wrong with the scheme.  While
it does limit the number of subnets per network, and also the
number of hosts per subnet, there are administrative solutions
to both problems.  Moreover, it does have the advantage that
the required modifications are to a small set of implementations
(i.e., the gateways), which are more likely to follow the mandates
of "standards" than the myriad non-gateway hosts.

The trouble with it is that, in order to work, it requires "a
scheduled mandatory change by every gateway implementation" before
anyone could make use of it.  One of the things that makes subnet
schemes attractive (really, essential) is that they remove pressure
on routing tables.  If we switched to the "B 1/2" scheme before
some gateways are updated, their routing tables will explode.
[Noel Chiappa informs me that currently, the core gateways already
have no space left in their routing tables.  If Stanford University
alone were to trade our single subnetted network for a set of Class
C nets, we would add 22 entries to the tables tomorrow, and more
every month!]
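The routing-table arithmetic can be made concrete with a small sketch (addresses are invented for illustration): a distant gateway applies the ordinary class B mask and needs one entry for the whole campus, while only the campus's own gateways apply the subnet mask.

```python
def net_number(addr, mask):
    """Network part of a dotted-quad address under the given mask."""
    to_int = lambda a: int.from_bytes(bytes(int(o) for o in a.split(".")), "big")
    return to_int(addr) & to_int(mask)

# Three hosts on three different campus LANs (invented addresses).
hosts = ["128.99.1.7", "128.99.2.9", "128.99.22.4"]

# Outside view: plain class B mask -> the whole campus is ONE route entry.
outside = {net_number(h, "255.255.0.0") for h in hosts}
assert len(outside) == 1

# Inside view: subnet mask -> the campus gateways see one net per LAN.
inside = {net_number(h, "255.255.255.0") for h in hosts}
assert len(inside) == 3
```

Trading that one class B for per-LAN class C nets would move every one of those inside entries into every core gateway's table.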

So, given the realities of scheduling and implementing  a change like
this, it probably won't be done for 1 or 2 years.  Meanwhile, what
are we going to do?

Philosophical point: activists for non-violent political change are
fond of saying "think globally, act locally".  The problem with RFC932
is that it is a global solution to a locally solvable problem.  I believe
that any of the other three proposed subnet schemes (my RFC917, Jon
Postel's RFC925 for Multi-LAN ARP, and (no RFC yet) Berkeley's scheme,
which differs from RFC917 in details which are, for now, irrelevant)
are better because they are local solutions, and do not require a
global change.
RFC932 says of RFC917 "While the modification is a simple one,
it is necessary to retrofit it into all implementations, including
those which are already in the field."  This is a misunderstanding,
and the global/local distinction shows why: only those organizations
that desire to use subnets must implement RFC917 (or RFC925 or
Berkeley's scheme.)  Further, for the large class of networks that
already use RFC826 ARP (especially EtherNets), the change need be
made ONLY to gateways.  Finally, the proof is in the pudding:
we have existing implementations, in use, for 4.2BSD and TOPS-20,
and at least one commercial product (Imagen) supports RFC917.

I would support RFC932 over the other three schemes for one reason:
because it cannot be done without a globally mandated change, it
provides a political vehicle for bringing some order to the problem.

I'd rather see one of the other schemes mandated as "Standard",
or rather "standard if you support subnets at all", if the
powers that be are willing to take a stand.  In this respect, I
was distressed to learn that 4.3BSD and the next Ultrix
release will apparently include the Berkeley scheme, even though
this was never proposed as an RFC.  (On the other hand, careful
students of RFC917 will observe that the Stanford scheme can
replace the Berkeley scheme incrementally; you can switch over
one host at a time without breaking anything.)
Date:      Fri, 25 Jan 85 11:10:06 -0200
From:      joel%wisdom.BITNET@Berkeley (Joel Isaacson)
To:        tcpip-list%wisdom.BITNET@Berkeley
Subject:   Request for RT/11 implementation
  Does anybody know of an RT/11 TCP/IP implementation?  Any information on
TCP/IP running on either a bare machine or other operating systems
would be helpful.

                                Joel Isaacson
Date:      26 January 1985 09:44-EST
From:      David C. Plummer <DCP @ MIT-MC>
To:        cak @ PURDUE
Subject:   re: CPU use by TCP/IP  -- LAT
    From: Christopher A Kent <cak@Purdue.ARPA>
    Date: 23 Jan 1985 2215-EST (Wednesday)

    DEC is keeping the protocol spec "Company Confidential" until further
    notice.

Does anybody happen to know if they are just doing the easy ascii
things, or if they are doing a full blown general protocol?  From
what I have seen from DEC in the past, their protocols are well
suited for their own terminals (VT100s, bogus ideas of scroll
windows, etc) but not well suited for other vendor's terminals.
In particular, do they address the issues of generic operations,
graphical output (raster, vector and BITBLT operations) and
graphical input (mouse, tablet)?  I guess my question can be
reduced to: is it a bastardized TELNET or a glorified SUPDUP?

Maybe I'm off base and LAT is yet another byte stream
transmission medium (such as TCP and CHAOS) over which you can
implement an existing byte protocol (such as TELNET and SUPDUP)?

P.s.: those interested in front end echoing should find out if
the Multics people ever published what they did with ECHO
NEGOTIATION (ECHNEGO) between the mainframe and the front end.
This allowed them to run a reliable EMACS with 90+% of the
buffering and echoing happening in the front end so the
mainframe didn't have to wake up for every character.

Date:      Sat, 26 Jan 85 19:48:40 EST
From:      Doug Gwyn (VLD/VMB) <gwyn@Brl-Vld.ARPA>
To:        Robert C McQueen <>
Subject:   Re:  TCP/IP for UNIX 5.2
Yes, but not from AT&T, who seem to think TCP/IP is unimportant.
Several UNIX System V VARs have added TCP/IP, usually taken from
the 4.2BSD system.  I think Unisoft is one of these VARs.
Date:      27 Jan 85 00:56:48 PST
Subject:   Collisions on Ethernets and TCP/Telnet CPU load
Collisions on an ethernet are as normal as queueing delays in a gateway.
Unless the net is grossly overloaded, they should never cause any
packets to be lost. The ethernet controllers that I've worked with in
various machines made by Xerox all do the backoff and retransmissions in
microcode. The driver doesn't even get an interrupt until the packet has
been sent (or encountered 16 collisions). The "automatic" retransmission
of packets after collisions is critical to the operation of CSMA/CD
networks. (Without reliable collision detection or automatic
retransmission, the network is an Aloha style network.) It's unfortunate
that the same word is used to describe both this case and the high
level end-to-end retransmissions. 

There is an important limitation in the collisions/queueing delay
analogy. It is fairly common to configure a system where the phone line
queues in a gateway actually overflow often enough to be interesting, or
important, or even frustrating. On the other hand, you have to work
pretty hard to generate enough traffic on an ethernet to notice
collisions, much less encounter 16 of them on a particular packet.
Typical throughput for FTP is 100 to 250 kilobits/second. At that rate
it takes many pairs of machines to use enough bandwidth on an ethernet
to generate any significant number of collisions, and they all have to be
actively transferring files at the same time to be interesting.
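The arithmetic behind "many pairs of machines" is quick to sketch (the 250 kb/s figure is the top of the FTP range quoted above; the 30% load threshold is the rule of thumb used later in this message):

```python
# Rough load arithmetic, using figures from the message: FTP moves
# 100-250 kilobits/s per pair, and collisions stay negligible below
# roughly 30% load on a 10 Mb/s ethernet.
ETHER_BPS = 10_000_000       # 10 Mb/s ethernet
FTP_BPS = 250_000            # generous per-pair FTP throughput
LOAD_THRESHOLD = 0.30        # load where collisions might start to matter

pairs_needed = ETHER_BPS * LOAD_THRESHOLD / FTP_BPS
print(f"{pairs_needed:.0f} simultaneous FTP pairs to reach 30% load")
```

Even at the generous end of the throughput range, it takes a dozen simultaneous transfers before collisions are worth thinking about.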

If any of the ethernets around here ever got enough traffic to cause
problems with collisions, we would split it into two. We have split
ethernets because we ran out of Pup host numbers (8 bits) but I've never
seen any troubles with too many collisions or too heavy a load.

Here are some statistics from a pair of ethernets connected to a gateway
that's been up for 100 hours. The "Lds" line is a histogram of the
collisions. (The first net has >100 machines on it. The second one has

  Rcv: pkts 5455055, words 448963384, bad 273, missed 8423, idle 1527
    crc 2, bad alignment but ok crc 0, crc and bad alignment 64
    ok but dribble 0, too long 207, overrun 0
  Xmit: pkts 5552909, words 322565904, bad 243
    underrun 0, stuck 90, too many collisions 243
  Lds: 5546567 6198 109 26 3 0 2 0 1 0 2 0 0 1 1 1

  Rcv: pkts 2129116, words 184658848, bad 116, missed 2441, idle 623
    crc 1, bad alignment but ok crc 0, crc and bad alignment 4
    ok but dribble 0, too long 111, overrun 0
  Xmit: pkts 1374624, words 99735145, bad 0
    underrun 0, stuck 23, too many collisions 0
  Lds: 1373961 613 27 16 5 0 1 0 0 1 0 0 0 0 0 0

From my experience, the only important reason for lost packets is that
the receiver wasn't ready when the packet was sent. (This is especially
true for gateways that are running out of CPU cycles.) The next most
likely problem is the net getting shorted for a few seconds while a new
transceiver is installed or things of that nature. (That probably
accounts for all 243 packets above that had more than 16 collisions.) As
you can see, CRC errors are totally insignificant around here.

If you set up properly designed test programs on a pair of machines, you
can exchange packets for hours without missing a single packet.

When I do back-of-the-envelope timing calculations involving ethernets,
I almost always ignore collisions. If the load on an ethernet is less
than 30%, that's not an unreasonable assumption.

These comments, of course, apply to the way we use ethernets within
Xerox. Most of our traffic is FTP or mail or RPC/Leaf talking to file
servers. If you are running voice on your ethernet or doing swapping
from a remote disk you might generate enough load to cause problems.
Even then, the queueing delays will probably become intolerable long
before any packets are lost due to too many collisions.


There is an interesting subtle point about ethernets. It takes 3
stations transmitting at the same time to generate a significant number
of collisions. If this section makes sense to you, you have an excellent
understanding of the way ethernets really work. If you can't figure out
what I'm trying to say, and are still curious, let me know and I'll try
to explain further.

Consider what happens if you have one machine blasting packets into the
ether. (Send to a host number that doesn't exist so you are sure that
nobody will object and confuse the experiment.) If you look at the coax
with a scope, you will see a packet, then a gap while the machine gets
ready to send again, then another packet....

When you fire up the second machine, it will either start to send during
a gap between packets, or it will start trying to send in the middle of
a packet the first machine is sending. If it gets started during the gap
and finishes, there won't be any collisions. If it gets ready to send
while the first machine is still sending, it will wait (defer) until the
packet in progress ends. Then it will start to send. There is no
collision during the switchover. If the second machine starts during a
gap but doesn't finish before the first machine would normally start
again, the first machine will defer until the second machine finishes.

Unless something funny is going on, both machines will get into
lockstep. The slightly faster one will catch up to the slower one and
then it will always have to wait for the slower one to finish before it
gets a chance to send. On a scope, you will see pairs of packets right
next to each other, then a gap, then another pair of packets, then
another gap, .... If the machines are fast enough, the gap goes away.
That's also stable: each machine just waits for the other one to finish
and then starts to transmit.

I'm not saying that you can't get ANY collisions with 2 machines, but
you won't be able to get enough to be interesting. To make a collision
with 2 machines, both have to start sending "at the same time". The size
of that window is the round trip time between the two machines. On a 10Mb
ethernet, the worst case is 50 microseconds. That's small enough so that
you won't be able to hit it very often unless you write some really
tricky test programs. (You could, for example, synchronize the clocks on
a pair of machines by noting the time when a packet finished.)
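The 50 microsecond figure can be cross-checked against the 10 Mb/s Ethernet slot time, which is 512 bit times and bounds the two-station collision window (round-trip propagation plus repeater and transceiver delays):

```python
# Slot time: 512 bit times at 10 Mb/s, the spec's bound on the
# collision window, consistent with the ~50 microseconds cited above.
SLOT_BITS = 512
BITS_PER_US = 10             # 10 Mb/s = 10 bits per microsecond
slot_us = SLOT_BITS / BITS_PER_US          # 51.2 us

# The minimum packet (64 bytes) occupies the wire for exactly one slot
# time, so a sender is still transmitting when a worst-case collision
# signal can reach it.
min_pkt_us = 64 * 8 / BITS_PER_US          # also 51.2 us
```

That the minimum packet length equals the slot time is no accident; it is what makes collision detection reliable on a maximum-length cable.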

You might be able to outfox me if your controller is fast enough to get
the next packet ready to send during the 9.6 microsecond gap at the end
of a packet. I've never worked with any machine fast enough to do that.
Count the memory cycles needed to finish off sending a packet and follow
the chain to the next packet and ....

With 3 machines it's easy to get collisions. All you have to do is
transmit fast enough so that the second and third machines both try to
start sending while the first machine is actually transmitting a packet.
Then they will both collide when the first packet ends. (In essence, the
first machine is providing a low level clock that allows the other two
machines to get synchronized.)

I do this by telling the first machine to send long packets, and the
other two machines to send short packets. Everything gets into lockstep
and the second and third machines collide every time.


I'm not too surprised that (untuned) TCP/Telnet uses a large fraction of
a CPU.

If I was working on that problem, I would try real hard to tune my
implementations before I started inventing new protocols. Sure,
everybody wants to be a hero and invent a new protocol, but if you start
down that path, be prepared for the nitty details like making something
work through gateways, and living with subnets and ....

I'd expect that it wouldn't be too hard to get the CPU time used by
TCP/Telnet down below that used by the interrupt routines to drive
character-at-a-time hardware. This assumes that the interesting case is
things like typing out a whole screenful. In that case, you can get a
lot of characters into a packet so that the high per-packet overhead
isn't so important.

I don't consider "it happens on noisy lines" to be a valid excuse for
tolerating lost typein. It's too easy to maintain and check a sequence
number. If the loss rate is low enough so that you would even dream
about putting up with lost packets, you won't have to consider
retransmitting often enough to use enough CPU cycles to notice. (The real
problem will probably be testing the code to make sure it works right
when you actually do drop a packet.)
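The sequence-number idea is cheap to sketch (all names are hypothetical; a real version would sit inside the terminal protocol, not at this level):

```python
# Hypothetical sketch: the sender numbers each typein packet; the
# receiver spots a gap and asks for retransmission instead of
# silently losing input.
class TypeinReceiver:
    def __init__(self):
        self.expected = 0        # next sequence number we should see
        self.typein = []         # characters accepted so far, in order

    def accept(self, seq, chars):
        """Return True if accepted; False asks the sender to retransmit."""
        if seq != self.expected:
            return False         # a packet was dropped (or reordered)
        self.typein.append(chars)
        self.expected += 1
        return True

rx = TypeinReceiver()
assert rx.accept(0, "ls")       # in order: accepted
assert not rx.accept(2, "\n")   # packet 1 lost: ask for retransmission
assert rx.accept(1, " -l")      # retransmitted copy fills the gap
assert rx.accept(2, "\n")       # original order resumes
assert "".join(rx.typein) == "ls -l\n"
```

The hard part, as the message says, is not this check but convincing yourself the retransmission path works when a packet really is dropped.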

I also second the comment about being careful not to over tune the dumb
terminal case. I think that smart workstations are the wave of the
future, but you could probably guess that by looking at the return
address of this message.
Date:      27 Jan 1985 12:01-EST
To:        Kluger.osbunorth@XEROX.ARPA
Subject:   Re: CPU use by TCP/IP

Thanks for the additional detail - I was not sure how uniformly everyone
introduced the automatic retransmission facility into the collision 
detection mechanisms on ethernets.

I confess to being puzzled, as are a few others in this exchange, at the
high priority being placed on efficient handling of very short packets
containing a few characters of information - as in terminal servicing.

I would have thought that local area nets and work stations or PCs would
be emerging as the front end handling most character level interaction
with the user. The back end to the mainframe would be concerned with
larger scale transactions (pages from disk, file access, etc.).

I must have my head stuck in the wrong sandpile...

Date:      27 Jan 85 19:20:46 EST
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        Kluger.osbunorth@XEROX.ARPA, cak@PURDUE.ARPA, remarks@RUTGERS.ARPA, tcp-ip@SRI-NIC.ARPA
Subject:   Re: CPU use by TCP/IP
While it is true that computing is moving towards workstations,
it is going to be several years before that movement is complete.
Few places have enough money to put their undergrads on SUN's,
and at most places faculty and grad students are still in the
process of moving from timesharing to workstations.  In the interim,
many of us still have to worry about communications for conventional
terminals.  At Rutgers, we have terminal locations scattered
around 4 locations at New Brunswick, separated by several miles
from each other.  We also have branches at Newark and Camden.  We
are in the process of establishing high-speed communications among
all of these sites, probably using a broad-band cable and T1.  Once this
is done, I would like to be able to attach all of these terminals
to terminal servers and connect them by TCP.  The alternative is
various systems of multiplexers, high-speed modems, and digital
terminal switches.  The latter setup will not be as flexible, and
will represent movement in precisely the opposite direction from
where we want to go in the long run. That is, it will involve
substantial investment in equipment specialized for handling RS232,
as opposed to equipment compatible with TCP.  It would also require
us to split up our T1 bandwidth to allow for several RS232 channels,
rather than using the entire bandwidth for carrying packets.

I don't think we are alone in this situation.  Very few places
have gotten rid of all terminals.  Broad-band cable, T1, multiplexers,
terminal switches, and the like, are just now becoming inexpensive
enough and reliable enough that many places are either installing
them or considering doing so.  If TCP can be made to handle terminals
efficiently, during what I hope to be a few years of interim situation,
it can save us from having to invest in equipment that we would rather
not have in the long run.  We have been hearing that everyone is about
to move to workstations for the last several years.  While it is
clear that movement in this direction is happening, it is also clear
that it is happening more slowly than some had predicted, especially
for people outside major computer science research centers.  (Please
note that I am concerned here about all of Rutgers, not just our
computer science department.)  I have always been a supporter of long
term planning.  However I would hope that we would not let our
enthusiasm about sexy technology prevent us from thinking about the
best way to handle all those people who are still computing on VAXen
and DEC-20's.

Date:      27 Jan 1985 1955-EST (Sunday)
From:      Christopher A Kent <cak@Purdue.ARPA>
To:        David C. Plummer <DCP@MIT-MC.ARPA>
Subject:   re: CPU use by TCP/IP  -- LAT
It seems that LAT is a very lightweight non-reliable virtual circuit
protocol. They have some terminal support during the setup phase (like
passing login and terminal type information) and rudimentary support
for handling multiple sessions, but there is no effort to do generic
operation support.

The reason that it is "more efficient" is that the terminal front end
holds off sending packets for between 16 and 30 milliseconds.  This
shouldn't be noticeable to most typists (I think that 30 ms is the delay
between characters typed at 100 wpm), and they package up all the
characters waiting for a particular host in one packet.
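That hold-off-and-batch behavior is easy to sketch (the names are invented; this is not DEC's code, just the idea as described above):

```python
from collections import defaultdict

HOLD_MS = 30   # assumed hold-off interval, per the 16-30 ms figure above

class Concentrator:
    """Queues keystrokes per host; a timer flushes them as one packet each."""
    def __init__(self):
        self.pending = defaultdict(list)     # host -> queued characters

    def keystroke(self, host, ch):
        self.pending[host].append(ch)        # no packet sent yet

    def timer_tick(self):
        """Called every HOLD_MS ms: one packet per host with queued input."""
        packets = {host: "".join(chars) for host, chars in self.pending.items()}
        self.pending.clear()
        return packets

c = Concentrator()
for ch in "date":
    c.keystroke("host-a", ch)
c.keystroke("host-b", "w")
packets = c.timer_tick()
assert packets == {"host-a": "date", "host-b": "w"}   # 2 packets, not 5
```

Five keystrokes become two packets; with many busy terminals sharing one concentrator, the per-packet overhead savings compound.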

Since the protocol is so lightweight, it's easy to put it in the
kernel, and slide it between the network layer and the terminal layer,
so LAT connections can be made to look like ordinary hardwired
terminals.

I'm not sure how they handle host name to address binding, but I
believe that hosts willing to accept LAT connections broadcast
something indicating that, including their name, and the front ends
cache the information.

Date:      Sun 27 Jan 85 20:14:30-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
Cc:        JNC@MIT-XX.ARPA
Subject:   Subnets
	Before a large round of verbiage overloads this poor mailing
list (which is immediate distribution, not digested, please remember)
I should note that the 'Gateway Task Force' of the Internet Working
group is even now considering the issue of subnets and the 4 different
proposals. Anything that group recommends (should it even be able to
agree) would also be discussed by the 'Internet Activities Board'
before being blessed.
	I guess what I'm really saying is 'don't flame randomly'. New
ideas and analyses are welcome, but please check around to make sure
you do have something new and worthwhile before inflicting it on the list.
	My poor mailbox (and I) thank you humbly...
Date:      27 Jan 85 21:47:05 EST (Sun)
From:      Dennis Rockwell <drockwel@CSNET-SH.ARPA>
Cc:        Provan@LLL-MFE.ARPA, tcp-ip@SRI-NIC.ARPA, cic@CSNET-SH.ARPA
Subject:   Re: milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
	Subject: Re: milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
	Date: 25 Jan 85 03:00:21 PST

	[ ... ]

	18:42:39 Important: BBN-MILNET-GW told us to send packets to
	[] via [] = CSS-GATEWAY.
	18:42:40 Important: CSS-GATEWAY told us to send packets to []

	[ ... ]

This is a case of EGP/GGP incompatibility.  CSNET-RELAY (a Vax running Paul
Kirton's EGP) talks EGP to CSS-GATEWAY, which tells the rest of the gateways
via GGP.  Unfortunately, GGP does not say "send packets to net foo through
gateway bar", it says "I am N hops from net foo" (which worked just fine
when all gateways spoke GGP).  Thus, the GGP gateways redirect you to CSS
(because CSS tells the core gateways "I am two hops away from 128.42"
because CSNET-RELAY tells CSS that CSNET-RELAY is reachable through itself
at a distance of one hop) and CSS sends you to CSNET-RELAY (which is the
correct gateway for net 128.42).  When the core gateway GGP goes through
another few evolutionary changes (or is redone completely), this condition
should be corrected; the gateway folks are well aware of it.

My question is, why do you throw away the information that CSS gives you and
keep trying BBN-MILNET-GW?  You have found a stable route through
CSNET-RELAY (which certainly doesn't tell you to go anywhere else).

Dennis Rockwell
CSNET Technical Staff
Date:      Monday, 28 Jan 1985 09:05-PST
From:      imagen!
To:        shasta!
Subject:   Re: Reliability of an Ether (CSMA/CD LAN)

What Henry Nussbacher brings up about the reliability of the Ethernet
is true, but the emphasis on three causes of lost packets in an
Ethernet fits logical arguments, not practical experience.  Most
experience with Ethernet implementations indicates that a temporarily
inactive receiver is the most common cause of lost packets.  Other
errors are so rare as to be inconsequential (i.e., their probability is
similar to the probability that the higher level software is not
working correctly).
Another way of saying the above is that network congestion at the
ethernet interface is the real impediment to reliability, not
transmission errors.  This is significant, since it means that tight
timeouts are not the best way to achieve acceptable reliability, and
that loose timeouts will only rarely adversely affect a connection.

The extra costs to achieve reliability in network communications are
several, but the most significant of these are:
	- software complexity (extra state and asynchrony to be dealt with)
	- extra packets transmitted 
In a properly designed product, the software complexity does not lead
to noticeably slower code, since the extra `clauses' of software to
handle lost data are not used for most packets.  The number of extra
packets transmitted can also be held to a minimum using very loose
timeouts, in accordance with the above discussion.

The conclusion is that on an Ethernet, or similarly reliable network,
the only real cost of a reliable protocol over an unreliable one is
the non-recurring engineering cost of producing a good implementation.

Get to work, unix wizards!

- Geof

Date:      Mon, 28 Jan 85 8:39:37 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
Subject:   Cabernet?
OK, I give.  What's a Cabernet?  I always thought it was red wine.

Date:      28 Jan 85 22:09:10 PST
To:        Dennis Rockwell <drockwel@CSNET-SH.ARPA>
Subject:   Re: milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
> My question is, why do you throw away the information that CSS gives
you and
> keep trying BBN-MILNET-GW?  You have found a stable route through
> CSNET-RELAY (which certainly doesn't tell you to go anywhere else).

Beats me. How about the other question: Why is BBN-MILNET-GW telling me
to use CSS-GATEWAY? I'm reasonably sure that I didn't send BBN-MILNET-GW
(at least not directly) any packets once CSS-GATEWAY and/or CSNET-RELAY
told me they wanted my business.

I've seen small spurts of printout lots of times, but they can normally
be explained by some handwaving along the lines of several packets were
in transit before my routing tables got switched to the right place. The
session that included the small sample in my previous message lasted for
minutes, and that was with the "printout" running several messages a
second.

Is there some likely bug in my code? Short of randomly trashing my
routing table I can't think of anything likely. The nasty sessions
stopped after a while, and my machine was healthy enough to run for a
while longer, so I'm not very suspicious of either my software or
hardware.

Date:      29 Jan 1985 0618-PST (Tuesday)
From:      (Van Jacobson) van@lbl-csam
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
If Xerox is running EGP, it might be causing the route switching.  Until
we did a bit of hacking on Paul Kirton's EGP, we were averaging 14 ICMP
redirects a minute with normal daytime traffic.

Currently there are very few "core" EGP gateways (2 for MILNET).  The
routes you get from the EGP gateway reflect its position in the
network, not yours.  If you're far from the EGP gateway, using those
routes will generate redirects.

For example, we're on net 26.  Say we want to talk to the Berkeley campus
net (128.32) which is gatewayed off net 10.  The route we get from Aero,
our EGP gateway, is "128.32 via BBN-Milnet-Gwy" because BBN-Milnet-Gwy is
the best way for Aero to get to net 10.  Our EGP installs that route, we
ship off a packet, and BBN-Milnet-Gwy (which is running GGP & knows the
Arpanet/Milnet connectivity) says "No, fool. Use the LBL-Milnet-Gwy that's
right next door." via an ICMP redirect.  EGP picks up the redirect &
installs the new route but, 2 minutes later, Aero ships out an EGP update
that switches back to the BBN route.  This continues as long as there's
traffic on the route.

The eventual fix is probably to have all the Arpanet/Milnet gateways
speak EGP.  Until this or some other solution surfaces, a quick work-around
is not to pass ICMP redirects to your local EGP process.
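The oscillation and the work-around can be sketched roughly as follows (a hypothetical model, not Kirton's actual EGP code; route-table representation and names are invented for illustration). The idea is that a route learned from an ICMP redirect is not overwritten by the next periodic EGP update:

```python
class RouteTable:
    """Toy routing table: net -> (gateway, how the route was learned)."""
    def __init__(self):
        self.routes = {}

    def egp_update(self, net, gw):
        # Work-around: a periodic EGP update must not clobber a route
        # we already learned from an ICMP redirect.
        if self.routes.get(net, (None, None))[1] != "redirect":
            self.routes[net] = (gw, "egp")

    def icmp_redirect(self, net, gw):
        self.routes[net] = (gw, "redirect")

rt = RouteTable()
rt.egp_update("128.32", "BBN-Milnet-Gwy")     # route from Aero, the EGP gateway
rt.icmp_redirect("128.32", "LBL-Milnet-Gwy")  # BBN-Milnet-Gwy redirects us
rt.egp_update("128.32", "BBN-Milnet-Gwy")     # 2 minutes later: ignored
assert rt.routes["128.32"][0] == "LBL-Milnet-Gwy"
```

Without the guard in egp_update, the last call would flip the route back to BBN-Milnet-Gwy and the redirect cycle would repeat as long as traffic flowed.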

-Van Jacobson, LBL

Date:      29 Jan 1985 21:05-PST
From:      the tty of Geoffrey S. Goodfellow <Geoff@SRI-CSL.ARPA>
To:        ron@BRL-TGR
Subject:   Re:  Cabernet?
Obviously part of Xerox's corporate Grapevine!

Date:      Wed 30 Jan 85 18:05:04-EST
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Slight bug in RFC917, 'Internet Subnets'
	I'd like to warn people about a bug in the pseudo-code in the
subnet RFC. On page 8, where it talks about the changes necessary,
there is the line of code:

send_pkt_locally(packet, gateway_to(bitwise_and(packet.ip_dest, my_ip_mask)))

which is not correct. It should be simply:

send_pkt_locally(packet, gateway_to(packet.ip_dest))

since the my_ip_mask could be as small as 8 bits for a host on a
non-subnetted class A net, and the destination could well be on a class C
net. The routing code will clearly need the whole field. If the host uses
a routing cache based directly on host numbers (which I claim ought to be
recommended as the easiest and most flexible for hosts) it will need the
whole field; with a regular routing table you might pass in just the
network number.

Date:      Thu, 31 Jan 85 12:22:14 EST
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        van@LBL-CSAM.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
Of course, this is not a problem unless your gateway is also a host.
Gateways don't source packets, so they don't get redirects.

Date:      31 Jan 85 23:13:46 PST
To:        (Van Jacobson) van@LBL-CSAM.ARPA
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: milnet/arpanet connectivity, 23-jan-84 at 17:00 PST
Thanks for the suggestion, but no, we are just a normal host.

I had one idea about how our routing table could get "smashed". It's a
fixed size LRU. If we had too many connections open to too many
different nets, interesting ones might get kicked out of the table while
they were still in use. Unfortunately, that doesn't fit the data, since
the table would get initialized with either SRI-MILNET-GW or
LBL-MILNET-GW (our assigned gateways) rather than BBN-MILNET-GW which
was sending the first redirect.
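The suspected failure mode, a fixed-size LRU cache evicting a route that is still in use, is easy to demonstrate in miniature (an illustrative sketch, not the actual implementation; the cache size and entries are invented):

```python
from collections import OrderedDict

class LRURouteCache:
    """Fixed-size routing cache: net -> gateway, oldest entry first."""
    def __init__(self, size):
        self.size = size
        self.cache = OrderedDict()

    def lookup(self, net):
        if net in self.cache:
            self.cache.move_to_end(net)  # mark as most recently used
            return self.cache[net]
        return None

    def install(self, net, gw):
        self.cache[net] = gw
        self.cache.move_to_end(net)
        if len(self.cache) > self.size:
            self.cache.popitem(last=False)  # evict least recently used

cache = LRURouteCache(size=2)
cache.install("10", "SRI-MILNET-GW")
cache.install("26", "LBL-MILNET-GW")
cache.install("128.32", "LBL-MILNET-GW")  # third net evicts net 10
assert cache.lookup("10") is None         # still-open connection loses its route
```

As the message notes, though, a refilled entry would come from the assigned gateways, not BBN-MILNET-GW, so eviction alone doesn't explain the observed redirects.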

Oh well, if it ever happens again I'll grab some more data.