The 'Security Digest' Archives (TM)

Archive: About | Browse | Search | Contributions | Feedback
Site: Help | Index | Search | Contact | Notices | Changes

ARCHIVE: TCP-IP Distribution List - Archives (1984)
DOCUMENT: TCP-IP Distribution List for July 1984 (46 messages, 18842 bytes)
NOTICE: This archive recognises the rights of all third-party works.


Date:      Sun 1 Jul 84 22:33:00-PDT
From:      Philip Almquist <ALMQUIST@SU-SCORE.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   ICMP questions

1) What should an EGP-speaking gateway do if it receives an ICMP
   redirect message?

2) I am a bit puzzled about how to handle ICMP timestamp messages.
   If I do the obvious modular implementation the timestamps will
   reflect the times at which the ICMP process does things rather
   than the times at which the packets enter and leave the gateway.
   In particular, the difference between the receive timestamp and
   transmit timestamp will reflect the amount of time needed to
   convert the receive time from internal format into the ICMP
   representation, which is a poor proxy for the amount of time it
   takes the gateway to turn around the timestamp request.  Does
   this matter?  What is this option used for (besides fuzzballs)?
   What have other implementors done?

Date:      Mon, 2 Jul 84  7:57:03 EDT
From:      Mike Brescia <brescia@BBNCCQ.ARPA>
To:        Philip Almquist <>
Subject:   Re: ICMP questions

1) Current definition of ICMP redirect is that it should only be sent to
the IP source of a mis-routed packet; it is a kind of routing update for
gateways to send to hosts.  As a logical extension, a gateway should only
send an ICMP redirect to a host on the same net as itself, since a remote
host would not be able to change the routing decisions made by intervening
gateways:

	SOURCE - net a -> GATE A - net b -> GATE B ->...

Source host has made the decision to send to gateway A, so gateway A can
advise it with ICMP redirects.  Gateway A decides the routing to B, so
there should be some way that B tells A about routing changes (gateway
GGP, or exterior EGP).
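
The rule Mike describes (redirect only a source that sits on the same net as the input interface) can be sketched roughly as follows. The function name and the dotted/prefix notation are purely illustrative, not anything the 1984 gateways used:

```python
# Hypothetical sketch of the redirect rule: a gateway sends an ICMP
# redirect only when the packet's IP source is on the same network as
# the interface the packet arrived on, since only then can the source
# act on the advice.
import ipaddress

def should_send_redirect(src_ip: str, in_if_net: str) -> bool:
    """True if an ICMP redirect to src_ip is useful (same net as input interface)."""
    return ipaddress.ip_address(src_ip) in ipaddress.ip_network(in_if_net)

# A host on the gateway's own net can be redirected...
assert should_send_redirect("10.0.1.5", "10.0.1.0/24")
# ...but a remote source cannot change the intervening gateways' routing.
assert not should_send_redirect("192.168.7.9", "10.0.1.0/24")
```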

Discussion has touched on (raged over?) whether gateways should advise 
other gateways using ICMP redirects.  The core gateways currently do not
send redirects to gateways, nor do they act on ones received.  The reason
was that the regular routing would catch up after a while.

2) Timestamp - I haven't heard of any use of timestamp since it was
introduced in core gateways.  Since a 32-bit time is kept internally, it is
just stuffed into the timestamp with the 'non-standard' bit set, and no
attempt is made to time the arrival and departure of the packet.  I suspect
that if you don't have a use for timestamps, your motivation will be low for
doing detailed timestamping.

Date:      2 Jul 1984 11:45:30 EDT
Subject:   Re: ICMP questions
In response to the message sent  Sun 1 Jul 84 22:33:00-PDT from ALMQUIST@SU-SCORE.ARPA


It is not clear whether the core gateways will send redirects to EGP
gateways. A good case for doing this can be made in view of the rather
poor responsiveness of EGP to connectivity changes. In any case, if the
redirect is addressed to the gateway itself, it should be believed. There
is some controversy on whether redirects should be sent to the gateway
if a routing error is discovered for a packet originating behind the
gateway (i.e. from a local-net host). In order to figure out which
gateway sent a misrouted packet, the receiving gateway has to peek
at the ARPANET leader, which causes awkward layering problems.

If you think of timestamps as parameters passed between protocol layers,
elegance is maintained even if the receiving timestamp is read in the
device driver, which is how the fuzzballs do it. Producing the
transmit timestamp in the output driver and sticking it in the ICMP header
is just too crude to be thinkable. The fuzzies do it at the ICMP layer when
the header is munged. Thus, the transmit time is off by the time spent in
the output queue.

I don't know of any host or gateway, other than the fuzzy hosts and gateways,
that implement the ICMP timestamp option. It would be very useful when
conducting network experiments if other hosts and gateways included this
option. However, it makes sense only if there is an independent way to
synchronize the clocks, such as a radio clock or timekeeping protocol such
as used in the fuzzies. Barring such features, I suggest you treat ICMP
timestamps just like ICMP echoes, viz., interchange source and destination
addresses and simply scoot them back. Note that this makes sense when only
a roundtrip-delay calculation is involved.
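
The echo-style suggestion above boils down to the following arithmetic. Field names follow RFC 792 (timestamps in milliseconds since midnight UT); the helper functions and the numbers are hypothetical:

```python
# With unsynchronized clocks, only the round-trip delay is meaningful,
# computed entirely from the requester's own clock.
def round_trip_ms(originate: int, returned_at: int) -> int:
    """Round-trip delay when the peer just swaps addresses and echoes."""
    MS_PER_DAY = 24 * 60 * 60 * 1000
    return (returned_at - originate) % MS_PER_DAY  # tolerate midnight rollover

def one_way_estimates(originate, receive, transmit, returned_at):
    """With *synchronized* clocks the RFC 792 fields also give one-way legs."""
    outbound = receive - originate          # requester -> responder
    turnaround = transmit - receive         # time spent inside the responder
    inbound = returned_at - transmit        # responder -> requester
    return outbound, turnaround, inbound

assert round_trip_ms(1_000, 1_250) == 250
```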

Date:      Mon 2 Jul 84 14:28:21-EDT
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
Cc:        JNC@MIT-XX.ARPA
Subject:   Re: ICMP questions
	An additional problem with sending an ICMP packet to a gateway that
was not the original source of the packet is that there is nothing in the
packet at the IP level (in most cases) to tell you which gateway forwarded
the packet to you. Depending on what hardware the directly connected net
uses, you may or may not be able to determine the 'previous hop'. Since
there is no general guarantee of this, gateways generally don't get (or
handle) ICMP messages.
Date:      Tue, 3 Jul 84 14:45:21 edt
From:      cu-arpa.bill@Cornell.ARPA (Bill Nesheim)
To:        tcp-ip@sri-nic.ARPA
Subject:   1822 question

I have been having some trouble lately with "no buffer space available"
on Arpanet connections.  I find that the RFNM count for that connection
goes up to 8 and stays there "forever".  This always seems to happen
after we get an 1822 "interface reset" message from the IMP.
(By the way, we are a VDH to an IMP in Delaware, of all places.)

Will I be violating protocol by zeroing the outstanding RFNM count
in the imp host table when I get the "reset" message?  That seems
to be the only solution, as now I have to reboot my machine to
be able to access these hosts.  It is especially a pain with
so many users trying to access MILNET from here, as once the RFNM count
goes to 8 for arpa-milnet-gw, we're inaccessible to/from the MILNET.

		Bill Nesheim
		Cornell U. Dept of Computer Science
		Ithaca, NY 14853

ARPA: bill@Cornell.ARPA		BITNET: bill@CRNLCS	UUCP: ihnp4!cornell!bill
Date:      Tue, 3 Jul 84 16:22:55 EDT
From:      Jonathan Dreyer <jdreyer@BBNCCQ.ARPA>
To:        Philip Almquist <>
Cc:        erosen@BBNCCQ.ARPA
Subject:   Re: ICMP questions
       Number: 2  Length: 1100 bytes
    Received: from SRI-NIC.ARPA by BBN-UNIX ; 2 Jul 84 01:37:34 EDT
    Received: from SU-SCORE.ARPA by SRI-NIC.ARPA with TCP; Sun 1 Jul 84 22:34:26-PDT
    Date: Sun 1 Jul 84 22:33:00-PDT
    From: Philip Almquist <ALMQUIST@SU-SCORE.ARPA>
    Subject: ICMP questions
    To: tcp-ip@SRI-NIC.ARPA
    1) What should an EGP-speaking gateway do if it receives an ICMP
       redirect message?

There was a fight about this a while ago on this list.  I think
that few would argue against using the ICMP redirect (or any
other ICMP information) to help the gateway route packets.  The
more difficult question is whether you can use information you
got from ICMP (e.g. Destination Net Unreachable) in subsequent
EGP packets.  Although it might seem convenient to make use of
such information, it's a bad idea and makes EGP less robust,
since now for EGP to work ICMP must work too.  Let's say my
gateway has a broken ICMP and sends you a bogus Net Unreachable.
You can now propagate this misinformation throughout the Internet
with EGP.  To avoid such a situation, you should only trust your
own IGP (if any) and EGP when sending EGP information. 
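
The trust rule above can be sketched as a simple filter on route sources when building an EGP update. The data layout and names here are hypothetical:

```python
# Admit only routes learned from your own IGP or from EGP itself when
# advertising via EGP; never routes inferred from ICMP messages, so one
# broken ICMP implementation cannot pollute Internet routing.
ROUTES = [
    {"net": "10",      "source": "igp"},
    {"net": "128.84",  "source": "egp"},
    {"net": "192.5.1", "source": "icmp"},  # e.g. a Net Unreachable heard
]

TRUSTED_SOURCES = {"igp", "egp"}

def egp_update(routes):
    """Nets we are willing to advertise in an EGP update."""
    return [r["net"] for r in routes if r["source"] in TRUSTED_SOURCES]

assert egp_update(ROUTES) == ["10", "128.84"]
```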
    2) I am a bit puzzled about how to handle ICMP timestamp messages.
       If I do the obvious modular implementation the timestamps will
       reflect the times at which the ICMP process does things rather
       than the times at which the packets enter and leave the gateway.
       In particular, the difference between the receive timestamp and
       transmit timestamp will reflect the amount of time needed to
       convert the receive time from internal format into the ICMP
       representation, which is a poor proxy for the amount of time it
       takes the gateway to turn around the timestamp request.  Does
       this matter?  What is this option used for (besides fuzzballs)?
       What have other implementors done?

I was considering the same question for the BBN Butterfly
Gateway.  It is certainly possible to do something other than the
"obvious modular implementation." For example, your ICMP process
might merely mark the packet saying that when the lowest level is
about to start sending the packet, it should call a well-known
routine which stamps the time and recomputes the ICMP checksum.
However, this opens a can of worms.  For one, you would
conceptually like to do this when your "output ready" interrupt
occurs, but for many reasons you might not want to put this kind
of stuff into an interrupt routine.  Also, what happens if
transmitting a packet means nothing more than sending the packet
to a front-end?  How long is the front-end going to hold on to
it?  Further, what do you do if you have to retransmit the
packet?  You cannot recompute the timestamp because then you
would be transmitting a different packet, not retransmitting the
same packet.  In a really bizarre situation, your packet might be
broken up into smaller frames at a lower level, some of which
might have to be retransmitted, and the timestamp may even cross
a frame boundary!  (Don't ask me for examples of this one.) So
while you might be able to come up with a later (hence "more
accurate") timestamp if you try something clever, you cannot have
a mechanism that guarantees a timestamp that is within epsilon of
being right.  Hence I don't think it's worth worrying about.
There are better ways of determining how long a packet stays in a
gateway.
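
The "recompute the ICMP checksum" step mentioned above is the ordinary 16-bit one's-complement Internet checksum; a minimal sketch of the RFC 1071 algorithm, not any particular gateway's code:

```python
# Rewriting the transmit-timestamp field at output time means redoing
# (or incrementally updating) this sum over the ICMP message.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

# A packet whose checksum field holds this value sums to 0xFFFF overall,
# which is how receivers verify it.
```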

While we are on the subject of useless features of ICMP, let's
look at the "Information Request" message.  The idea of this is
that a host which doesn't know what net it is on sends an
information request, with its local address (without net number)
in the source address field, to some ICMP agent.  The ICMP agent
swaps addresses and fills in the net number fields.  Now I can
believe that there might be hosts that don't know what net they
are on, and I can even believe that some of them might know the
address of an "information request server" if there were such a
thing.  (Is there?) But I cannot believe that there is a host
that knows its local address but not its net number, for if you
know your local address you know the class of your net number,
and if you know that much you have to be pretty ornery to not
know your net number.  This might have made sense back in the
days when all nets were class A arpanets, but not now; I don't
think this feature is worth implementing. RFC 903 solves a
similar, but more general and more difficult problem: how to find
out your entire internet address based on your "hardware" address.
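
The observation that the net number falls out of the address itself can be sketched with the classful rules of RFC 791 (class A: first octet below 128; B: below 192; C: below 224). The function is hypothetical:

```python
# Derive the classful net number from a dotted-decimal address.
def net_number(addr: str) -> str:
    octets = addr.split(".")
    first = int(octets[0])
    if first < 128:                      # class A: net is 1 octet
        return octets[0]
    if first < 192:                      # class B: net is 2 octets
        return ".".join(octets[:2])
    if first < 224:                      # class C: net is 3 octets
        return ".".join(octets[:3])
    raise ValueError("not a class A/B/C unicast address")

assert net_number("10.2.3.4") == "10"        # ARPANET-style class A
assert net_number("128.84.0.1") == "128.84"  # class B
assert net_number("192.5.1.9") == "192.5.1"  # class C
```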


Date:      Tue 3 Jul 84 20:38:06-EDT
From:      J. Noel Chiappa <JNC@MIT-XX.ARPA>
To:        cu-arpa.bill@CORNELL.ARPA, tcp-ip@SRI-NIC.ARPA
Cc:        JNC@MIT-XX.ARPA
Subject:   Re: 1822 question
	The 'interface reset' means (among other things) that the IMP
has thrown out all messages you had in transit. The fact that you never
get RFNM's for them is thus not a bug, but a feature. You should flush
your 'RFNM-waiting' table when you get an 'Interface Reset' (type 10)
message.
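
The advice amounts to the following bookkeeping. The class and names are hypothetical, but the limit of 8 outstanding messages per destination matches the RFNM counts reported earlier in the thread:

```python
# 1822 hosts may have at most 8 messages outstanding per destination,
# so they count un-RFNM'd messages and block at 8.  An Interface Reset
# must clear the counters, or the host wedges until reboot.
class RfnmTable:
    MAX_OUTSTANDING = 8

    def __init__(self):
        self.outstanding = {}            # destination host -> count

    def can_send(self, dest) -> bool:
        return self.outstanding.get(dest, 0) < self.MAX_OUTSTANDING

    def sent(self, dest):
        self.outstanding[dest] = self.outstanding.get(dest, 0) + 1

    def rfnm_received(self, dest):
        if self.outstanding.get(dest, 0) > 0:
            self.outstanding[dest] -= 1

    def interface_reset(self):
        self.outstanding.clear()         # the IMP dropped in-transit messages

table = RfnmTable()
for _ in range(8):
    table.sent("arpa-milnet-gw")
assert not table.can_send("arpa-milnet-gw")   # wedged at 8 without RFNMs
table.interface_reset()
assert table.can_send("arpa-milnet-gw")       # flushed, traffic resumes
```
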
Date:      3 Jul 1984 21:05:45 EDT
Subject:   Re: ICMP questions
In response to the message sent  Tue, 3 Jul 84 16:22:55 EDT from jdreyer@BBNCCQ.ARPA

Jon & Co.,

The original case for using ICMP to modulate EGP, made some time ago, was
limited to amending the local routing tables between EGP Updates. Since
in the present model the gateway originates only EGP commands and responses,
an ICMP unreachable message would imply one of its neighbors was kaput.
Timely information about loss of net reachability due to a bum neighbor should
be distributed "soon". I think there is no argument on that.

However, in the model some of us have been bickering about for a long while,
a gateway may receive gratuitous information about packets it did not
originate, but did relay. This may be really important for those gateways on
VAN nets that want to sleep most of the time and sputter EGP packets very
seldom. While it is true that ICMP spoofers can readily destabilize things,
this is at present the only way reachability information can be obtained
about ordinary traffic. A gateway could, of course, independently verify
gratuitous information by originating its own test probes, even
using ICMP pings (if the BBN convention about sending error messages in
response to ICMP pings was universally followed).

Date:      3 Jul 84 23:26:46 EDT
From:      Charles Hedrick <HEDRICK@RUTGERS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   odd network address
We are finally on the way with our Ethernet TCP.  We now have two 4.2
hosts talking to each other, a Pyramid and a SUN.  (We will be adding
new hosts at a high rate.)  I haven't seen any problems on the Pyramid
yet, but when I telnet from the Pyramid to the Sun, a WHO on the SUN
claims that the connection is coming from net address The
actual address is  If I close the connection and make
another one, the SUN claims it is coming from  It continues
to increment, apparently following planned BSD release numbers.  Does
anybody have any idea what is going on?  As we have other more serious
problems, I haven't tried to figure out what, if any, tools 4.2 has to
help me diagnose this sort of problem.
Date:      Wed, 4 Jul 84 0:22:28 EDT
From:      Mike Muuss <mike@BRL-TGR.ARPA>
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        tcp-ip@sri-nic.ARPA
Subject:   Re:  odd network address
When your connection is in progress, try running

	netstat -a
	netstat -n

on both ends.  You may also be interested in a copy of the ARPTAB
program to look at the ARP tables.
Date:      Wed,  4 Jul 84 09:20:10 CDT
From:      Paul Milazzo <milazzo@rice.ARPA>
To:        Charles Hedrick <HEDRICK@RUTGERS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re: odd network address
> [...] when I telnet from the Pyramid to the Sun, a WHO on the SUN
> claims that the connection is coming from net address The
> actual address is
>			- Charles Hedrick <HEDRICK@RUTGERS.ARPA>

Yes, this seems to be a bug in the SUN /etc/telnetd, since the problem
does not occur when you use rlogin instead of TELNET.  I've been
meaning to track this one down for several weeks now.  I'll let you
know if I find anything (unless someone else already has?).

				Paul G. Milazzo <milazzo@rice.ARPA>
				Dept. of Computer Science
				Rice University, Houston, TX
Date:      Thu, 5 Jul 84 14:42:33 EDT
From:      Mike Brescia <brescia@BBNCCQ.ARPA>
Subject:   Re: ICMP questions
Unless a gateway wiretaps ICMP messages passing through it, the only ICMP message
under discussion was the ICMP redirect, and whether a gateway should send
one to another gateway if the source host was not on the same net as the 
input interface on this gateway.

The ICMP net unreachable is directed to the source host, and the assumption
is that 1) the source is a person, and the user-program (e.g. telnet)
	prints out a message 'net unreachable', which the user can wait for
	or abandon
    or  2) the source is automatic (e.g. mailer) and the program will abandon
	after a short time and 'try again later'.

Other ICMP error messages are also sent to the source host (various packet
format errors, and other variations on destination unreachable).

Given the VAN ($$$) concerns about EGP, perhaps EGP peers should be willing
to open a connection and post an NR message ('routing update') only when a net
becomes reachable.  At other times, with the connection closed, any routing
info is uninteresting until the time comes when some user data (tcp...) 
connection attempts to open in either direction.  At that time, the user
data will either get through or elicit an ICMP net unreachable, and the
EGP exchanges can piggyback on the connection to get current net reachable
info transferred.  The EGP exchanges would have to be completed within the 
1 or 2 minutes the VAN connection is held open when no user data is flowing.

Date:      5 Jul 1984 17:13:23 EDT
To:        brescia@BBNCCQ.ARPA
Subject:   Re: ICMP questions
In response to your message sent  Thu, 5 Jul 84 14:42:33 EDT


We agree that, unless a gateway wiretaps, the only ICMP messages directed to
it will be in response to EGP messages it sends. Thus, ICMP unreachables sent
to its local-net clients would not ordinarily be available to trigger an
EGP update cycle. If we are to assume your suggested gateway-update protocol,
which is not EGP in its present form, the gateway would have to wiretap at least
the unreachables.

EGP as presently conceived is a polling protocol with careful attention to
neighbor reachability, as well as network reachability issues. Therefore, there
has to be some mechanism to assure that the gateway itself is up and can
provide net-reachability information on demand (Poll). While it is certainly
possible to fiddle with the Hello/Poll parameters of EGP, not to mention abuse
the unsolicited-Update feature, it does not seem possible to stretch the
interpretation to embrace your suggested VAN procedures, which essentially
amount to an event-driven approach.

If we are to graft the EGP model onto the VAN constraints, I think a more
appropriate approach would be to provide a dynamically adjustable polling
frequency such as now used in the GGP system. We also need to come to grips
with the wiretap issue.

Date:      Sat 7 Jul 84 16:54:04-PDT
From:      Michael A. Haberler <HABERLER@SU-SIERRA.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   Add to mailing list
Please add me to the TCP-IP-UNIX and TCP-IP-VMS mailing lists.

- mike
Date:      Tue 10 Jul 84 12:15:54-PDT
From:      Stewart French
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP/IP on IBM-PC like computers
We (at Texas Instruments) have hundreds (thousands?) of TI-PC's attached
to each other on an Ungermann-Bass Ethernet LAN.  Is there (and is it even
reasonable to attempt) an implementation of TCP/IP for the IBM-PC or
compatibles, preferably with a 3-Com Ethernet board using DOS 2.0 or 2.1?

Any assistance would be appreciated.

Stewart French
sfrench @ usc-eclb
Date:      10 Jul 1984 14:14:51 PDT
Subject:   Re: TCP/IP on IBM-PC like computers
In response to the message sent  Tue 10 Jul 84 12:15:54-PDT from  SFRENCH@ECLB.#ECLnet

Stewart French:

Sure there is an implementation of IP for the IBM PC.  It works with the 3Com
Ethernet Interface.  It has TCP, Telnet, and TFTP.

Date:      Wed 11 Jul 84 18:12:46-PDT
From:      Michael A. Haberler <HABERLER@SU-SIERRA.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   TCP/IP on AOS/VS (Data General)
We are planning to bring up a Data General MV/10000 running AOS/VS on
the Ethernet using TCP/IP. 
Do you know about:

a) a specific implementation for that machine/operating system, or

b) a reasonably portable implementation in C which we could use as a starting
point.

Any information would be appreciated.

- mike
Date:      Wed, 11 Jul 84 23:46:25 EDT
From:      Mike Muuss <mike@BRL-TGR.ARPA>
To:        tcp-ip@sri-nic.ARPA
Subject:   [rmc:  Compion - Wollongong]
Well, at least things aren't all one-sided.

----- Forwarded message # 1:

Received: from by BRL-TGR.ARPA id a000857; 11 Jul 84 22:00 EDT
Date: Wed, 11 Jul 84 21:56:59 EDT
From: rmc@BRL-LFD
Subject: Compion - Wollongong
To: Mike@brl-tgr

 As you know, we have Access running here. It performs very well and
does not exhibit any of the problems described. I can receive 30 network
msgs a day and all seem intact. The operating system does not appear to be
slowed by any of the Access processes. From time to time Charlie Nietubicz
uses telnet to get to a CRAY in Cal and brings back the output files with FTP.
We have only strong endorsements for the product.
  Mike Cahoon

----- End of forwarded messages
Date:      Thu, 12 Jul 84 8:20:53 EDT
From:      Randy Sebra <randy@AMSAA.ARPA>
To:        Mike Muuss <mike@BRL-TGR.ARPA>
Cc:        randy@AMSAA.ARPA
Subject:   Re:  TCP -vs- VMS

     The VMS machine has not been up on the net very long, but we are using
the ACCESS-T software.  I think the guy from the Navy is probably a bit
of a reactionary.  Of course, we got the latest release of the software,
and have been using that instead of the (known) buggy previous version.
     I do, however, find several faults with the software, although I
don't believe they are serious.  In fact, the faults I find are not with
the TCP/IP end, which is what I believe you were after in the first place.
I frankly don't know how to gather performance figures on TCP, and I don't
think the gentleman who wrote the letter does either - he certainly didn't
give any figures, only perceived generalities.
     Bearing in mind that that is all I have, here goes.  The sending of
mail appears to go slowly.  This may be due to it being spooled up and sent
out at regular intervals - messages are sent and received in groups, just
like most mailers.  Rough figures I have are that on a good day it usually
takes about five minutes for a message to go out onto the net and get back
to the VMS machine - not really all that slow, and I have not had time to
gather any other figures.
     There are some features which are lacking which, after using BSD's
implementation, one misses.  One little one is that there is no dir command
for FTP.  The mail system does suck when compared to msg, but so do most
other mailers.
     The main irritant to me, so far, is downloading of the tables from
sri-nic.  It takes(LATE at night) about thirty minutes to an hour to do
the job.  It seems like some on-line processing of the tables is done
as it is being ftp'ed.  The real problem, though, is that it HAS to be
done late at night.  Invariably, whenever one tries to run the program
during normal hours, the connection will time out!  This is why I think
there is some on-line processing.  Even on loaded machines at prime time,
I have never had anything like that happen on the UNIX machine.
     The last problem that I have found is the manner in which connections
are opened after they have been closed.  There is a time-out AFTER the
user disconnects, and that port cannot be used again until
the time-out is over.  It is adjustable and I believe it defaults to 30-60


P.S. Doesn't the Wollongong software depend on running under EUNICE, and
wouldn't this merely add another needless layer of software?  At least
ACCESS does drive things directly from VMS.
Date:      12 Jul 84 10:01:56 EDT
From:      dca-pgs @ DDN1.ARPA
To:        tcp-ip @
Subject:   POC for Data General TCP/IP Effort

POC is:

James Hirni
Data General
4400 Computer Drive
Westboro, MA 

Pat Sullivan

Date:      12 Jul 84 15:47 PDT
From:      Tom Perrine <tom@LOGICON.ARPA>
To:        Postmaster@cisl, Postmaster@columbia-20
Cc:        Postmaster@logicon, tcp-ip@sri-nic, John Codd <john@logicon>
Subject:   Gateway problems?
Recently, we have had trouble with intermittent mail delivery from CISL
and Columbia.  Some of the problems (with CISL) occurred when they were
receiving hard checksum errors when trying to connect to us.  It was
attributed to gateway problems.

Some sites have very few problems (some have never had *any*) reaching
us, others have almost continual problems.  Could some "feature" of a
specific gateway be exercising some magic property of our system?

My question is: What gateway do you use to connect to us (LOGICON.ARPA
on MILNET) ?

Thanks in advance,
Tom Perrine

Date:      Thu, 12 Jul 84 18:54 EDT
From:      "Richard Kovalcik, Jr." <Kovalcik@MIT-MULTICS.ARPA>
To:        tom@LOGICON.ARPA
Cc:        Postmaster@CISL-SERVICE-MULTICS.ARPA, Postmaster@COLUMBIA-20.ARPA, Postmaster@LOGICON.ARPA, tcp-ip@SRI-NIC.ARPA, John Codd <john@LOGICON.ARPA>
Subject:   Re: Gateway problems?
Usually BBN-Gateway.
Date:      Fri, 13 Jul 84  9:25:08 EDT
From:      Mike Brescia <brescia@BBNCCQ.ARPA>
Cc:        brescia@BBNCCQ.ARPA
Subject:   Re: Gateway problems?
As far as I know, there are six gateways between ARPA- and MILNET, and they
are identical hardware configurations, with identical software.  They connect
to different points of the nets, and are presented with different traffic
loads, but should not present vastly different levels of service to different
users.

Columbia and CISL have their own gateways to their local nets.  I cannot
speak for them.

In a more general vein, you should be aware that
 - tcp-ip is specified to work over lossy network paths.
 - arpanets are not generally lossy, neither are ethernets.
 - most host implementations of tcp-ip are developed on arpanets or ethernets.
 - gateways ARE lossy (they drop packets at the slightest sign of overload)
 - I hope you have tested your tcp-ip implementation over lossy paths.

Date:      Mon, 16 Jul 84  9:47:47 EDT
From:      Andrew L. Hogan <ahogan@DDN1.ARPA>
Cc:        dca-pgs@DDN1.ARPA
Subject:   TCP/IP for Supermicro's
UniSoft, Inc. of Berkeley, CA makes a business of
porting Berkeley Unix to MC68000-based systems.
They refer to this product as UniPlus. I think this
product is actually some combination of System V
and Berk 4.2 (I seem to recall "System V with Berkeley
enhancements"). Anyway, it includes TCP/IP, Telnet, etc.

UniSoft's sales package includes a (long) list of systems
that they have provided this package for. You might want
to obtain this.

Look into Excelan Inc., Pronet, 3COM, and Fusion.

Pat Sullivan

Date:      Mon, 16-Jul-84 15:44:24 EDT
From:      davel@dciem.UUCP (Dave Legg)
To:        net.wanted,net.dcom,net.lan,net.unix,fa.tcp-ip
Subject:   RT/11 TCP/IP implementation wanted.

I need to find a TCP/IP implementation for RT/11 compatible with Berkeley's.
We intend to do bi-directional file transfer via an Ethernet to a Vax Unix
4.2 system.  Would anyone with a lead to such a system please send me mail?
Thanks in advance.
Dave Legg, DCIEM, Toronto, Ont. Canada.    (416) 635-2065

Date:      Tue, 17 Jul 84 21:42:19 EDT
From:      Mike Muuss <mike@BRL-TGR.ARPA>
To:        Tom Dunigan <dunigan@ORNL-MSR.ARPA>
Cc:        Unix-Wizards@BRL-TGR.ARPA, tcp-ip@sri-nic.ARPA
Subject:   Re:  ecu IIs
Error threshold switches should be UP (set to 15) if your ECU--ECU
link is faster than 19.2K;  for slower links you might wish to
use a threshold of about 4.  Owning ECUs is silly if you don't use
their error correction feature, and setting the error threshold to
zero (all down) disables it.

The theory here is that it's better for your ECU's to do a retransmission
locally wherever possible, rather than stomping on the packet and
having TCP do an end-to-end retransmission across the network.
(MILNET and ARPANET trunks are only 56 Kbps;  best to conserve their
bandwidth when it's easy to do so).

As for the override switch on the ECU, the setting of this switch
really depends on two factors:  whether you are interested in
hearing about hard-retransmission errors, and whether your host
is doing RFNM counting.  If your host is doing RFNM counting,
this switch MUST be down, or your network service will
get worse every time an RFNM from the IMP to your host is dropped.
(VAX 4.2 UNIX does do RFNM counting; I have changes which make it
selectable.)  With the switch down, you will also get an

imp0:  interface reset

message (4.2 UNIX) every time you get a hard error on your ECU link.
In addition to clearing the RFNM counters, this also gives you
a record of how bad your ECU--ECU line is.  If your host is not doing
RFNM counting and does not log IMP interface resets, you might as well
leave the switch up.  (Some dumb gateways fit this description).


PS:  We use a 480,000 bps modem between our main gateway and our MILNET IMP,
so that no matter how noisy the line gets, we can always be sure that
the ECU's are not the limiting factor in our gateway's performance.
Date:      Thursday, 19 Jul 1984 08:10-EDT
From:      sra@Mitre-Bedford
Cc:        sra@Mitre-Bedford
Subject:   TCP/IP benchmark programs for UNIX 4.2
We are about to begin an examination of different configurations of SUN
computers to determine the best network configuration for our application.  As
part of this performance examination we intend to check out the SUN's TCP/IP
performance.

Are there existing UNIX TCP/IP benchmark programs that we could use?
If so I would appreciate the name and network address of someone to contact
about obtaining a copy.

Stan Ames
Date:      Thu, 19 Jul 84 20:54:15 EDT
From:      Steve Dyer <sdyer@bbncca.ARPA>
Subject:   Inquiry: ACC IF-11/1822
Does anyone out there have very much knowledge or experience with the
new ACC UNIBUS ARPAnet/1822 interface called the "IF-11/1822"?
Note this is NOT their HDH interface.

It would appear to be one of their UMC/Z80 boards with special hardware
to perform both the logical and physical 1822 protocols.  It is physically
much smaller than the older LH/DH-11, taking up two hex slots in the
UNIBUS backplane, but without the 19" washing machine which often requires
another VAX cabinet.

Can it run as fast as a LH/DH-11 between a host and a C/30?  Is it reliable?
Has anyone written a 4.[12] network driver for it yet?  Its size alone makes
it sound attractive, if it works as well as the old, faithful LH/DH-11.
/Steve Dyer
Date:      Thu, 19 Jul 84 23:23:31 EDT
From:      Mike Muuss <mike@BRL-TGR.ARPA>
Cc:        tcp-ip@sri-nic.ARPA
Subject:   Re:  TCP/IP benchmark programs for UNIX 4.2
I have a pair of programs to zap TCP data from one machine to another,
with no disk I/O or other confounding factors.

 -Mike Muuss

  AV  283-6678
  FTS 939-6678

ArpaNet:  Mike @ BRL
UUCP:     ...!{decvax,cbosgd}!brl-bmd!mike
  Mike Muuss
  Leader, Advanced Computer Systems Team
  Computer Techniques and Analysis Branch
  Systems Engineering and Concepts Analysis Division
  U.S. Army Ballistic Research Laboratory
  Attn: DRXBR-SECAD (Muuss)
  APG, MD  21005
Date:      Fri 20 Jul 84 07:16:49-PDT
From:      The Mailer Daemon <Mailer@SRI-NIC.ARPA>
To:        mike@BRL-TGR.ARPA
Subject:   Message of 18-Jul-84 07:06:16
Message undelivered after 2 days -- will try for another 1 day:
Lou@AEROSPACE.ARPA: Cannot connect to host
Received: from BRL-TGR by SRI-NIC.ARPA with TCP; Wed 18 Jul 84 07:06:18-PDT
Date:     Tue, 17 Jul 84 21:42:19 EDT
From:     Mike Muuss <mike@BRL-TGR.ARPA>
To:       Tom Dunigan <dunigan@ORNL-MSR.ARPA>
cc:       Unix-Wizards@BRL-TGR.ARPA, tcp-ip@sri-nic.ARPA
Subject:  Re:  ecu IIs

Date:      Fri, 20 Jul 84 13:48:03 EDT
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Steve Dyer <sdyer@BBNCCA.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  Inquiry: ACC IF-11/1822
IF-11's are depraved.  They work in dubious ways on machines with UBA's
or unibus maps.

Date:      20 Jul 1984 1721-EST
From:      Aram Boornazian <ABOORNAZIAN at BBNV1>
To:        <TCP-IP@SRI-NIC>
Subject:   Wollongong vs. Access
I find I must reply to those who have sent reviews of Compion Corporation's
Access software to the TCP-IP list.  Several statements in those reviews are
factually inaccurate when compared to our own experience with the product.  I
will describe what we have found in the year and a half we used Access and the
two months we have used The Wollongong Group's TCP/IP implementation and leave
the conclusions to the reader.

1.  Performance - Access

    Under the best of circumstances, files transferred at the rate of about
    1000 characters per second between a VAX 780 and other systems.  The
    path went from a VAX 780 through an Ethernet to a gateway and from there to
    a nearby Arpanet host.  The "best of circumstances" means that the hosts
    and the gateway were lightly loaded at the time.  The 1822 version of
    Access was even slower.  Between an Access host and a UNIX host on the same
    Ethernet, transfer speed was between 1000 and 8000 characters per second
    depending on who was transmitting.  

    TELNET could achieve, at best, about 400 characters per second, not
    counting characters that were dropped.  When half a dozen TELNET
    connections were in active use, the processes that supported the traffic
    used from 30% to 60% of the CPU as measured by MONITOR.

    Mail was not reliable, and the reason it seems slow is that it is slow.
    When a user sends network mail, the mailsender is awakened and attempts to
    send the message.  If, for whatever reason, several messages are queued up,
    only the newest message is sent with each execution of the mailsender.
    Half an hour later, when it wakes up automatically, it sends one more,
    and so on.

2.  Performance - Wollongong

    Under the worst of circumstances, files transfer at the rate of about
    2000 characters per second between a VAX 780 and other systems.  The
    path went from a VAX 780 through an Ethernet to a gateway and from there to
    a nearby Arpanet host.  The "worst of circumstances" means that the hosts
    and the gateway were heavily loaded at the time.  Between a Wollongong host
    and a UNIX host on the same Ethernet, transfer speed is above 10000
    characters per second most of the time.

    Wollongong's TELNET drives our terminals fast enough that local flow
    control is a necessity even when the system is loaded.  System overhead is
    less than that of DZ-11's.  This was measured with the PC sampler in VAX-11
    SPM.  We have made the TELNET path the default for terminal access.
    MONITOR cannot even find the overhead.

    Wollongong's network mail is both sent and read by the VMS MAIL utility
    and, so far as I have been able to determine, it is reliable.  The interval
    at which the mailsender wakes up is adjustable.

3.  Wollongong's TCP/IP and Eunice

    Eunice is not required for TCP/IP use.  Wollongong used Eunice to build
    their implementation.  If Eunice is available, one can modify the user
    interface to TCP/IP to a certain extent, and Eunice makes extensions, such
    as experimental protocols, easier to work with.

4.  Support

    Compion proved unable, even after a year's time, to fix some serious bugs
    that we uncovered in Access.  So far, Wollongong's turnaround time for
    bug-fixes has been a matter of days.
Date:      Fri 20 Jul 84 21:02:25-PDT
From:      The Mailer Daemon <Mailer@SRI-NIC.ARPA>
To:        mike@BRL-TGR.ARPA
Subject:   Message of 19-Jul-84 20:41:20
Message undelivered after 1 day -- will try for another 2 days:
Ron@NOSC.ARPA: Cannot connect to host
hutton@NOSC.ARPA: Cannot connect to host
bam@NOSC.ARPA: Cannot connect to host
Lou@AEROSPACE.ARPA: Cannot connect to host
Received: from BRL-TGR by SRI-NIC.ARPA with TCP; Thu 19 Jul 84 20:41:21-PDT
Date:     Thu, 19 Jul 84 23:23:31 EDT
From:     Mike Muuss <mike@BRL-TGR.ARPA>
cc:       tcp-ip@sri-nic.ARPA
Subject:  Re:  TCP/IP benchmark programs for UNIX 4.2

Date:      24 Jul 1984 09:39:19 PDT
From:      Vicki Gordon <VGordon@USC-ISIC.ARPA>
Subject:   Re: {8407.0434} BBN-UNIX host address change


I will make sure that our host table is updated to reflect the host 
address change by 0700 on Wednesday, July 25th.

Date:      24 Jul 1984 1003-PDT
From:      Art Berggreen <ART at ACC>
To:        <tcp-ip@sri-nic>
Subject:   IF-11/1822
The ACC IF-11/1822 is a UMC based, two hex board system, which has been
programmed to emulate an LH-DH/11 as closely as possible.  Existing
LH-DH/11 device drivers should be able to be adapted with little work.
The IF-11/1822 was developed as part of a system for a DoD customer
and is NOT supported outside their system (at least today).  If you
must know more, contact Gary Krall at ACC (, or, or 805-963-9431).

The preferred C30 IMP connection is IF-11/HDH (1822 HDLC Host protocol).
Support exists for 4.1BSD and I'm finishing up a 4.2BSD driver.  We will
soon be supporting HDH under Wollongong's TCP-IP and plan on also supporting
Compion's (now Gould Software) ACCESS TCP-IP.

Also, to clear up some possible confusion, most IF-11/xxx products (but
not IF-11/1822) have a host to front-end driver protocol called MCX/MCD.
This protocol is used to pass buffer descriptions to the front-end's
processor which can (from the host's viewpoint) reference all buffers
simultaneously. This has significance for VAXs which have mapping between
the UNIBUS and memory.  Currently, this means fully mapping all buffers
passed to the front-end.  Future products coming down the pipe will have
better hooks for notifying the host of data and allowing dynamic allocation
and mapping of buffers.
					Art Berggreen
					Advanced Computer Communications
Date:      Tue 24 Jul 84 11:58:27-PDT
From:      Susan Romano <SUE@SRI-NIC.ARPA>
To:        ron@BRL-TGR.ARPA
Subject:   Re:  {8407.0434} BBN-UNIX host address change

The NIC host table will be changed tonight, 24 Jul 84, and the
address change for BBN-UNIX will be included in this new version.

Date:      24 Jul 1984 10:54-EDT
To:        tcp-ip@SRI-NIC.ARPA
Subject:   BBN-UNIX host address change
Wednesday morning, 25-July at 0700, the BBN-UNIX host computer will
have its ARPANET host address changed from to
This host is one of the main mail entry points into Bolt Beranek
and Newman. Any help in ensuring that host tables get updated
promptly would be appreciated.

/Steve Chipman
Date:      Tue, 24 Jul 84 14:40:38 EDT
From:      Ron Natalie <ron@BRL-TGR.ARPA>
To:        Vicki Gordon <VGordon@USC-ISIC.ARPA>
Subject:   Re:  {8407.0434} BBN-UNIX host address change
Our host tables are loaded from the NIC nightly.  You should convince
them to release a new table the night you are changing.

Date:      Tue, 24 Jul 84 19:37 EDT
From:      "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
To:        tcp-ip@SRI-NIC.ARPA
Subject:   HDH
Anyone who calls HDH "preferred" has, pardoning the expression, their
head screwed on sideways.

Half the protocol is not yet implemented in the IMP, and the half that
is has some truly entertaining bugs.  Consult Schiller@MIT-Multics for
horror stories.

(By half the protocol, I mean the support for the reasonable packet
encapsulation technique).

Date:      Tue, 24 Jul 84 20:37:26 EDT
From:      Doug Kingston <dpk@BRL-TGR.ARPA>
To:        "Benson I. Margulies" <Margulies@CISL-SERVICE-MULTICS.ARPA>
Cc:        tcp-ip@SRI-NIC.ARPA
Subject:   Re:  HDH
Benson is right here; the preferred interface is described in
BBN Report #1822 and is therefore referred to as an "1822 interface".

Date:      25 Jul 1984 0927-PDT
From:      Art Berggreen <ART at ACC>
To:        <tcp-ip@sri-nic>
Subject:   More about HDH
In regard to HDH being the "preferred" C30 IMP interface, I was stating
DCA's position NOT ACC's.  We also find the current status of HDH to
be less than perfect.  But anyone who has watched the recent DDN procurements
will see that HDH and/or X.25 are the specified interfaces.  Since
the X.25 support in the IMP is not complete and will not yet allow
internet access to non-X.25 hosts, we feel that, of the two, HDH is
currently the better.
						Art Berggreen

Date:      25 Jul 84 09:43:47 EDT
From:      corrigan @ DDN1.ARPA
To:        tcp-ip @
Cc:        ddn-pmo-mgrs @ DDN1.ARPA
Subject:   HDH
The preferred variation of 1822 on the MILNET is HDH (see the DDN Sub-
scriber Interface Guide).  The preferred interface is X.25 (see USD (C3I)
Memorandum, 14 May 1984, Subject: DoD Policy on Defense Data Network (DDN)
Protocols).  The rationale for preferring the 1822 HDH interface is that in
general hosts will not be collocated with IMPs in the DDN, therefore an 
interface designed to operate either with a local or remote connection to 
an IMP is preferred.  The problems with the HDH interface are recognized and
are being worked as fast as possible.  The rationale for preferring X.25 is
given in the referenced memorandum.  It is worth mentioning that there are
a number of projects underway to optimize network performance for the
X.25 protocol, that the referenced memorandum requires TCP and IP at the 
transport and internet levels, and that the relevant X.25 spec is the 
DDN X.25 Host Interface Specification which you should read before going
off the deep end about this message.

Mike Corrigan
Technical Manager

Date:      Wed Jul 25 17:12:20 1984
From:      cadmus!schoff@bbncca
To:        bbncca!
Subject:   RE:  HDH
Preference depends on your point of view.  If you are a HARRIS, DG, ICL,
NIXDORF, PRIME.............  you probably already have the ability to do
HDLC, which gets you a ways towards HDH; who wants to build an 1822
interface?  1822 has been specified for a number of years, and look at
the plethora of interfaces.


...decvax!wivax!cadmus!schoff  (usenet)
decvax!wivax!cadmus!  (arpa too) 

Date:      26 Jul 1984 10:21-PDT
From:      Geoffrey C. Mulligan (AFDSC, The Pentagon) <GEOFFM@SRI-CSL>
To:        dpk@BRL-TGR
Cc:        Margulies@CISL-SERVICE-MULTICS, tcp-ip@SRI-NIC
Subject:   Re:  HDH
The preferred interface to IMPs is HDH and X.25!  This is a
switch from DCA's previous position, but it is the present word
from DCA.

Date:      Fri, 27 Jul 84 16:50:26 EDT
From:      dca-pgs <dca-pgs@DDN1.ARPA>
Subject:   IF-11/HDH - Who's Got It?
Would anybody on the net with the IF-11/HDH
contact me? Performance reports/opinions would
be welcome, too.

Thanks for all info,
Pat Sullivan

Date:      27 Jul 1984 18:01:44 EDT
To:        dca-pgs@DDN1.ARPA, tcp-ip@SRI-NIC.ARPA, info-vax@SRI-CSL.ARPA
Subject:   Re: IF-11/HDH - Who's Got It?
In response to the message sent  Fri, 27 Jul 84 16:50:26 EDT from dca-pgs@DDN1.ARPA


We recently went through a round of grief building a driver for the Q-bus
version of the IF-11/HDH, one of which is now running on a fuzzball
connected to a MINET IMP. The low-level HDLC code for HDH is apparently
an earlier version of the HDLC code used for the X.25 interface, which
has trouble coming up after a line outage or IMP restart, for example.
ACC is aware of at least one cause of these problems and promises to fix
it soon.

With the exception of the above problems, we have found the HDH interface
to work well, at least at speeds to 56 Kbps. Users should be cautioned,
however, that there is a monstrous amount of buffering inside the thing
(several thousand octets), which can cause real problems with retransmission-
timeout calculations and large windows. The problems would be expected to
impact Berkeley 4.2 systems rather severely, since these systems are known
to use an unrealistically small initial retransmission timeout.
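
The interaction described above can be seen in the retransmission-timeout
calculation given as an example in RFC 793 (this is the textbook algorithm,
not the actual 4.2BSD code): the hidden buffering adds a large fixed
queueing delay to the measured round-trip time, and a smoothed estimate
seeded with a small initial value converges slowly, so early segments time
out and are retransmitted needlessly.

```python
# Sketch of the RFC 793 retransmission-timeout computation, using the
# example constants from the RFC; not the 4.2BSD implementation.

ALPHA = 0.9                  # smoothing gain for the RTT estimate
BETA = 2.0                   # delay variance factor
LBOUND, UBOUND = 1.0, 60.0   # clamp on the RTO, in seconds

def update_rto(srtt, measured_rtt):
    """Fold one round-trip measurement into the smoothed round-trip
    time and return (new_srtt, rto)."""
    srtt = ALPHA * srtt + (1.0 - ALPHA) * measured_rtt
    rto = min(UBOUND, max(LBOUND, BETA * srtt))
    return srtt, rto

# With ALPHA = 0.9, each sample moves the estimate only 10% of the way
# toward the true round-trip time, so an optimistically small initial
# SRTT converges slowly when interface buffering inflates the RTT.
srtt = 0.1                   # optimistic initial estimate, seconds
for _ in range(10):
    srtt, rto = update_rto(srtt, measured_rtt=2.0)  # buffered-path RTT
```

Until the smoothed estimate catches up with the buffering delay, the
computed RTO sits below the real round-trip time, which is exactly the
retransmission trouble predicted for hosts with small initial timeouts.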